From mathisfriesdorf at gmail.com Tue Apr 1 05:37:08 2014 From: mathisfriesdorf at gmail.com (Mathis Friesdorf) Date: Tue, 1 Apr 2014 12:37:08 +0200 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: References: Message-ID: Thanks for the helpful comments. Apparently my first mail was a bit unclear. I have already implemented it as a matrix-free method using MatShell, but currently it is written with a bunch of for loops and is horribly inefficient and in fact a lot slower than my python implementation with numpy. I am quite sure that using TAIJ matrices would be the solution, but I am unsure on how to do it precisely, since there is so little documentation available. Thanks again and all the best, Mathis On Mon, Mar 31, 2014 at 7:10 PM, Matthew Knepley wrote: > On Mon, Mar 31, 2014 at 11:49 AM, Mathis Friesdorf < > mathisfriesdorf at gmail.com> wrote: > >> Hello everybody, >> >> for my Ph.D. in theoretical quantum mechanics, I am currently trying to >> integrate the Schroedinger equation (a linear partial differential >> equation). In my field, we are working with so called local spin chains, >> which mathematically speaking are described by tensor products of small >> vector spaces over several systems (let's say 20). The matrix corresponding >> to the differential equation is called Hamiltonian and can for typical >> systems be written as a sum over tensor products where it acts as the >> identity on most systems. It normally has the form >> >> *\sum Id \otimes Id ... Id \otimes M \otimes Id \otimes ...* >> >> where M takes different positions.I know how to explicitly construct the >> full matrix and insert it into Petsc, but for the interesting applications >> it is too large to be stored in the RAM. I would therefore like to >> implement it as a matrix free version. >> This should be possible using MatCreateMAIJ() and VecGetArray(), as the >> following very useful post points out >> http://lists.mcs.anl.gov/pipermail/petsc-users/2011-September/009992.html. >> I was wondering whether anybody already made progress with this, as I am >> still a bit lost on how to precisely proceed. These systems really are >> ubiquitous in theoretical quantum mechanics and I am sure it would be >> helpful to quite a lot of people who still shy away a bit from Petsc. >> >> Thanks already for your help and all the best, Mathis >> > > 1) The first thing you could try is MatShell(). However, you would have to > handle all the parallelism, which might be onerous. > > 2) An alternative is to explore the new TAIJ matrices. This is definitely > not for novice programmers, but it is a direct representation > of a Kronecker product, and in addition is vectorized. Jed is the lead > there, so maybe he can comment. > > Thanks, > > Matt > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From torquil at gmail.com Tue Apr 1 07:18:49 2014 From: torquil at gmail.com (=?UTF-8?Q?Torquil_Macdonald_S=C3=B8rensen?=) Date: Tue, 1 Apr 2014 14:18:49 +0200 Subject: [petsc-users] MatView and complex matrices Message-ID: Hi! Using Petsc 3.4.4, compiled to support complex numbers, I'm getting a completely white figure when plotting a matrix with purely imaginary elements, using MatView(mat, PETSC_VIEWER_DRAW_WORLD). So I cannot see the nonzero structure from the plot. 
For real matrices (still using the complex-enabled Petsc), I'm getting red and blue points corresponding to positive and negative elements. Is it possible to get Petsc to show the nonzero elements of a complex matrix, by checking the absolute values of each element? Best regards Torquil S?rensen -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 1 09:10:45 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 08:10:45 -0600 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: References: Message-ID: <87fvlxb0nu.fsf@jedbrown.org> Mathis Friesdorf writes: > Thanks for the helpful comments. Apparently my first mail was a bit > unclear. I have already implemented it as a matrix-free method using > MatShell, but currently it is written with a bunch of for loops and is > horribly inefficient and in fact a lot slower than my python implementation > with numpy. I am quite sure that using TAIJ matrices would be the solution, > but I am unsure on how to do it precisely, since there is so little > documentation available. TAIJ is a very specialized tensor product between a redundantly-stored (small) dense matrix and a distributed sparse matrix, with an optional shift. It is useful for implicit Runge-Kutta and some types of ensembles, but it's not what it appears you are looking for. Distributed-memory tensor contractions are challenging to implement well and there are several good libraries from the computational chemistry community. I would suggest writing a Mat implementation based on the Cyclops Tensor Framework or the Tensor Contraction Engine. http://ctf.eecs.berkeley.edu/ http://www.csc.lsu.edu/~gb/TCE/ This would be a great contribution to PETSc, but you can also just put one of these inside MatShell (just a few lines of code) if you don't want to make a public interface for tensors. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From mathisfriesdorf at gmail.com Tue Apr 1 09:28:08 2014 From: mathisfriesdorf at gmail.com (Mathis Friesdorf) Date: Tue, 1 Apr 2014 16:28:08 +0200 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: <87fvlxb0nu.fsf@jedbrown.org> References: <87fvlxb0nu.fsf@jedbrown.org> Message-ID: Thanks Jed! The libaries you are pointing to look very interesting indeed. For the particular implementation I have in mind, I was hoping to get away with something easier, as my time to work on this is a bit limited. All I really need is a way to construct a block-diagonal matrix where each block is given by the same matrix and all blocks are on the diagonal. Am I wrong to assume that this should be possible to implement with TAIJ? Thanks so much for your help and all the best, Mathis On Tue, Apr 1, 2014 at 4:10 PM, Jed Brown wrote: > Mathis Friesdorf writes: > > > Thanks for the helpful comments. Apparently my first mail was a bit > > unclear. I have already implemented it as a matrix-free method using > > MatShell, but currently it is written with a bunch of for loops and is > > horribly inefficient and in fact a lot slower than my python > implementation > > with numpy. I am quite sure that using TAIJ matrices would be the > solution, > > but I am unsure on how to do it precisely, since there is so little > > documentation available. 
> > TAIJ is a very specialized tensor product between a redundantly-stored > (small) dense matrix and a distributed sparse matrix, with an optional > shift. It is useful for implicit Runge-Kutta and some types of > ensembles, but it's not what it appears you are looking for. > > Distributed-memory tensor contractions are challenging to implement well > and there are several good libraries from the computational chemistry > community. I would suggest writing a Mat implementation based on the > Cyclops Tensor Framework or the Tensor Contraction Engine. > > http://ctf.eecs.berkeley.edu/ > > http://www.csc.lsu.edu/~gb/TCE/ > > This would be a great contribution to PETSc, but you can also just put > one of these inside MatShell (just a few lines of code) if you don't > want to make a public interface for tensors. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 1 09:39:41 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 08:39:41 -0600 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: References: <87fvlxb0nu.fsf@jedbrown.org> Message-ID: <87a9c5azbm.fsf@jedbrown.org> Mathis Friesdorf writes: > Thanks Jed! The libaries you are pointing to look very interesting indeed. > For the particular implementation I have in mind, I was hoping to get away > with something easier, as my time to work on this is a bit limited. All I > really need is a way to construct a block-diagonal matrix where each block > is given by the same matrix and all blocks are on the diagonal. Am I wrong > to assume that this should be possible to implement with TAIJ? That is exactly TAIJ. How large are the diagonal blocks? The implementation is intended to be fast only for small sizes, though I could make it good for large sizes as well. In the special case of (I \otimes S) X where I is the sparse identity and S is the dense diagonal block, this involves interpreting X as a matrix rather than a vector and using dgemm. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From jed at jedbrown.org Tue Apr 1 09:41:02 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 08:41:02 -0600 Subject: [petsc-users] MatView and complex matrices In-Reply-To: References: Message-ID: <877g79az9d.fsf@jedbrown.org> Torquil Macdonald S?rensen writes: > Hi! > > Using Petsc 3.4.4, compiled to support complex numbers, I'm getting a > completely white figure when plotting a matrix with purely imaginary > elements, using MatView(mat, PETSC_VIEWER_DRAW_WORLD). So I cannot see the > nonzero structure from the plot. > > For real matrices (still using the complex-enabled Petsc), I'm getting red > and blue points corresponding to positive and negative elements. > > Is it possible to get Petsc to show the nonzero elements of a complex > matrix, by checking the absolute values of each element? If we use the absolute value, how would we distinguish positive and negative values? The complex version could make two plots. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From mathisfriesdorf at gmail.com Tue Apr 1 09:47:23 2014 From: mathisfriesdorf at gmail.com (Mathis Friesdorf) Date: Tue, 1 Apr 2014 16:47:23 +0200 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: <87a9c5azbm.fsf@jedbrown.org> References: <87fvlxb0nu.fsf@jedbrown.org> <87a9c5azbm.fsf@jedbrown.org> Message-ID: Thanks Jed, this is great. I was trying to implement what was proposed in http://lists.mcs.anl.gov/pipermail/petsc-users/2011-September/009991.html, by you as I just realised. For this the matrices would be of varying size, so for three local systems, I would need: S \otimes Id \otimes Id ; Id \otimes S \otimes Id ; Id \otimes Id \otimes S. Realistic systems are of the order of at least 20 local systems, which would mean that I need the matrices of the above structure for all 20 possible positions. Again, I really appreciate the help. Thanks, Mathis On Tue, Apr 1, 2014 at 4:39 PM, Jed Brown wrote: > Mathis Friesdorf writes: > > > Thanks Jed! The libaries you are pointing to look very interesting > indeed. > > For the particular implementation I have in mind, I was hoping to get > away > > with something easier, as my time to work on this is a bit limited. All I > > really need is a way to construct a block-diagonal matrix where each > block > > is given by the same matrix and all blocks are on the diagonal. Am I > wrong > > to assume that this should be possible to implement with TAIJ? > > That is exactly TAIJ. How large are the diagonal blocks? The > implementation is intended to be fast only for small sizes, though I > could make it good for large sizes as well. In the special case of > > (I \otimes S) X > > where I is the sparse identity and S is the dense diagonal block, this > involves interpreting X as a matrix rather than a vector and using dgemm. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathisfriesdorf at gmail.com Tue Apr 1 09:48:08 2014 From: mathisfriesdorf at gmail.com (Mathis Friesdorf) Date: Tue, 1 Apr 2014 16:48:08 +0200 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: References: <87fvlxb0nu.fsf@jedbrown.org> <87a9c5azbm.fsf@jedbrown.org> Message-ID: P.S. the individual matrix S is usually rather small, say 3x3 or 6x6 On Tue, Apr 1, 2014 at 4:47 PM, Mathis Friesdorf wrote: > Thanks Jed, this is great. I was trying to implement what was proposed in > http://lists.mcs.anl.gov/pipermail/petsc-users/2011-September/009991.html, > by you as I just realised. For this the matrices would be of varying size, > so for three local systems, I would need: > > S \otimes Id \otimes Id ; Id \otimes S \otimes Id ; Id \otimes Id \otimes > S. > > Realistic systems are of the order of at least 20 local systems, which > would mean that I need the matrices of the above structure for all 20 > possible positions. > > Again, I really appreciate the help. Thanks, Mathis > > > On Tue, Apr 1, 2014 at 4:39 PM, Jed Brown wrote: > >> Mathis Friesdorf writes: >> >> > Thanks Jed! The libaries you are pointing to look very interesting >> indeed. >> > For the particular implementation I have in mind, I was hoping to get >> away >> > with something easier, as my time to work on this is a bit limited. All >> I >> > really need is a way to construct a block-diagonal matrix where each >> block >> > is given by the same matrix and all blocks are on the diagonal. 
Am I >> wrong >> > to assume that this should be possible to implement with TAIJ? >> >> That is exactly TAIJ. How large are the diagonal blocks? The >> implementation is intended to be fast only for small sizes, though I >> could make it good for large sizes as well. In the special case of >> >> (I \otimes S) X >> >> where I is the sparse identity and S is the dense diagonal block, this >> involves interpreting X as a matrix rather than a vector and using dgemm. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 1 09:57:04 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 08:57:04 -0600 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: References: <87fvlxb0nu.fsf@jedbrown.org> <87a9c5azbm.fsf@jedbrown.org> Message-ID: <871txhayin.fsf@jedbrown.org> Mathis Friesdorf writes: > Thanks Jed, this is great. I was trying to implement what was proposed in > http://lists.mcs.anl.gov/pipermail/petsc-users/2011-September/009991.html, > by you as I just realised. For this the matrices would be of varying size, > so for three local systems, I would need: > > S \otimes Id \otimes Id ; Id \otimes S \otimes Id ; Id \otimes Id \otimes S. > > Realistic systems are of the order of at least 20 local systems, which > would mean that I need the matrices of the above structure for all 20 > possible positions. This is something for which you should use CTF or TCE. When S moves out of the last position in the tensor product, the operation becomes less local and various forms of transposition is needed. This is not something worth recreating in PETSc unless there is a critical deficiency in the available libraries. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From mathisfriesdorf at gmail.com Tue Apr 1 10:00:01 2014 From: mathisfriesdorf at gmail.com (Mathis Friesdorf) Date: Tue, 1 Apr 2014 17:00:01 +0200 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: <871txhayin.fsf@jedbrown.org> References: <87fvlxb0nu.fsf@jedbrown.org> <87a9c5azbm.fsf@jedbrown.org> <871txhayin.fsf@jedbrown.org> Message-ID: Alright, so your point is that while this construction is in principal possible, it will not be efficient. Thanks for clearing this up! On Tue, Apr 1, 2014 at 4:57 PM, Jed Brown wrote: > Mathis Friesdorf writes: > > > Thanks Jed, this is great. I was trying to implement what was proposed in > > > http://lists.mcs.anl.gov/pipermail/petsc-users/2011-September/009991.html, > > by you as I just realised. For this the matrices would be of varying > size, > > so for three local systems, I would need: > > > > S \otimes Id \otimes Id ; Id \otimes S \otimes Id ; Id \otimes Id > \otimes S. > > > > Realistic systems are of the order of at least 20 local systems, which > > would mean that I need the matrices of the above structure for all 20 > > possible positions. > > This is something for which you should use CTF or TCE. When S moves out > of the last position in the tensor product, the operation becomes less > local and various forms of transposition is needed. This is not > something worth recreating in PETSc unless there is a critical > deficiency in the available libraries. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Tue Apr 1 10:07:14 2014 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 1 Apr 2014 11:07:14 -0400 Subject: [petsc-users] gamg failure with petsc-dev In-Reply-To: References: <532B19AD.50105@imperial.ac.uk> <87lhw4yy9n.fsf@jedbrown.org> <532C23D6.7000400@imperial.ac.uk> <53306FA8.4040001@imperial.ac.uk> Message-ID: Stephan, I have pushed a pull request to fix this but for now you can just use -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi. This used to be the default be we move to SOR recently. Mark On Sat, Mar 29, 2014 at 5:52 PM, Mark Adams wrote: > Sorry for getting to this late. I think you have figured it out basically > but there are a few things: > > 1) You must set the block size of A (bs=2) for the null spaces to work and > for aggregation MG to work properly. SA-AMG really does not make sense > unless you work at the vertex level, for which we need the block size. > > 2) You must be right that the zero column is because the aggregation > produced a singleton aggregate. And so the coarse grid is low rank. This > is not catastrophic, it is like a fake BC equations. The numerics just > have to work around it. Jacobi does this. I will fix SOR. > > Mark > > >> Ok, I found out a bit more. The fact that the prolongator has zero >> columns appears to arise in petsc 3.4 as well. The only reason it wasn't >> flagged before is that the default for the smoother (not the aggregation >> smoother but the standard pre and post smoothing) changed from jacobi to >> sor. I can make the example work with the additional option: >> >> $ ./ex49 -elas_pc_type gamg -mx 100 -my 100 -mat_no_inode >> -elas_mg_levels_1_pc_type jacobi >> >> Vice versa, if in petsc 3.4.4 I change ex49 to include the near nullspace >> (the /* constrain near-null space bit */) at the end, it works with jacobi >> (the default in 3.4) but it breaks with sor with the same error message as >> above. I'm not entirely sure why jacobi doesn't give an error with a zero >> on the diagonal, but the zero column also means that the related coarse dof >> doesn't actually affect the fine grid solution. >> >> I think (but I might be barking up the wrong tree here) that the zero >> columns appear because the aggregation method typically will have a few >> small aggregates that are not big enough to support the polynomials of the >> near null space (i.e. the polynomials restricted to an aggregate are not >> linearly independent). A solution would be to reduce the number of >> polynomials for these aggregates (only take the linearly independent). >> Obviously this has the down-side that the degrees of freedom per aggregate >> at the coarse level is no longer a constant making the administration more >> complicated. It would be nice to find a solution though as I've always been >> taught that jacobi is not a robust smoother for multigrid. >> >> Cheers >> Stephan >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 1 10:10:12 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 09:10:12 -0600 Subject: [petsc-users] Tensor product as matrix free method In-Reply-To: References: <87fvlxb0nu.fsf@jedbrown.org> <87a9c5azbm.fsf@jedbrown.org> <871txhayin.fsf@jedbrown.org> Message-ID: <87y4zp9jcb.fsf@jedbrown.org> Mathis Friesdorf writes: > Alright, so your point is that while this construction is in principal > possible, it will not be efficient. Thanks for clearing this up! Yes, TAIJ chooses a specific order for simplicity. 
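That specific order is the (I \otimes S) X case handled with dgemm earlier in the thread, and it rests on the standard Kronecker identity (A \otimes B) vec(X) = vec(B X A^T): taking A = Id and B = S, applying Id \otimes S to a vector x of length s*n amounts to reshaping x into an s-by-n matrix X (each column is one contiguous length-s block, the index on which S acts) and forming S X with a single dense matrix-matrix multiply.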
A general tensor library is a big problem that has already gotten a lot of attention, so we would prefer to use one. Wrapping as a MatShell is very simple, though it may be interesting to include a full interface in the library. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From torquil at gmail.com Tue Apr 1 12:58:38 2014 From: torquil at gmail.com (=?UTF-8?B?VG9ycXVpbCBNYWNkb25hbGQgU8O4cmVuc2Vu?=) Date: Tue, 01 Apr 2014 19:58:38 +0200 Subject: [petsc-users] MatView and complex matrices In-Reply-To: <877g79az9d.fsf@jedbrown.org> References: <877g79az9d.fsf@jedbrown.org> Message-ID: <533AFE4E.3020203@gmail.com> On 01/04/14 16:41, Jed Brown wrote: > Torquil Macdonald S?rensen writes: > >> Hi! >> >> Using Petsc 3.4.4, compiled to support complex numbers, I'm getting a >> completely white figure when plotting a matrix with purely imaginary >> elements, using MatView(mat, PETSC_VIEWER_DRAW_WORLD). So I cannot see the >> nonzero structure from the plot. >> >> For real matrices (still using the complex-enabled Petsc), I'm getting red >> and blue points corresponding to positive and negative elements. >> >> Is it possible to get Petsc to show the nonzero elements of a complex >> matrix, by checking the absolute values of each element? > If we use the absolute value, how would we distinguish positive and > negative values? > > The complex version could make two plots. Yes, that would be very useful. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 880 bytes Desc: OpenPGP digital signature URL: From s.kramer at imperial.ac.uk Tue Apr 1 13:10:49 2014 From: s.kramer at imperial.ac.uk (Stephan Kramer) Date: Tue, 01 Apr 2014 19:10:49 +0100 Subject: [petsc-users] gamg failure with petsc-dev In-Reply-To: References: <532B19AD.50105@imperial.ac.uk> <87lhw4yy9n.fsf@jedbrown.org> <532C23D6.7000400@imperial.ac.uk> <53306FA8.4040001@imperial.ac.uk> Message-ID: <533B0129.2030905@imperial.ac.uk> On 01/04/14 16:07, Mark Adams wrote: > Stephan, I have pushed a pull request to fix this but for now you can just > use -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi. This used to > be the default be we move to SOR recently. > Mark Ah, that's great news. Thanks a lot for the effort. You're right: the previous defaults should be fine for us; your fix should hopefully only improve things > > > On Sat, Mar 29, 2014 at 5:52 PM, Mark Adams wrote: > >> Sorry for getting to this late. I think you have figured it out basically >> but there are a few things: >> >> 1) You must set the block size of A (bs=2) for the null spaces to work and >> for aggregation MG to work properly. SA-AMG really does not make sense >> unless you work at the vertex level, for which we need the block size. Yes indeed. I've come to realize this now by looking into how smoothed aggregation with a near null space actually works. We currently have our dofs numbered the wrong way around (vertices on the inside, velocity component on the outside - which made sense for other eqns we solve with the model) so will take a bit of work, but might well be worth the effort Thanks a lot for looking into this Cheers Stephan >> >> 2) You must be right that the zero column is because the aggregation >> produced a singleton aggregate. And so the coarse grid is low rank. This >> is not catastrophic, it is like a fake BC equations. 
The numerics just >> have to work around it. Jacobi does this. I will fix SOR. >> >> Mark >> >> >>> Ok, I found out a bit more. The fact that the prolongator has zero >>> columns appears to arise in petsc 3.4 as well. The only reason it wasn't >>> flagged before is that the default for the smoother (not the aggregation >>> smoother but the standard pre and post smoothing) changed from jacobi to >>> sor. I can make the example work with the additional option: >>> >>> $ ./ex49 -elas_pc_type gamg -mx 100 -my 100 -mat_no_inode >>> -elas_mg_levels_1_pc_type jacobi >>> >>> Vice versa, if in petsc 3.4.4 I change ex49 to include the near nullspace >>> (the /* constrain near-null space bit */) at the end, it works with jacobi >>> (the default in 3.4) but it breaks with sor with the same error message as >>> above. I'm not entirely sure why jacobi doesn't give an error with a zero >>> on the diagonal, but the zero column also means that the related coarse dof >>> doesn't actually affect the fine grid solution. >>> >>> I think (but I might be barking up the wrong tree here) that the zero >>> columns appear because the aggregation method typically will have a few >>> small aggregates that are not big enough to support the polynomials of the >>> near null space (i.e. the polynomials restricted to an aggregate are not >>> linearly independent). A solution would be to reduce the number of >>> polynomials for these aggregates (only take the linearly independent). >>> Obviously this has the down-side that the degrees of freedom per aggregate >>> at the coarse level is no longer a constant making the administration more >>> complicated. It would be nice to find a solution though as I've always been >>> taught that jacobi is not a robust smoother for multigrid. >>> >>> Cheers >>> Stephan >>> >>> >>> >> > From jed at jedbrown.org Tue Apr 1 13:17:29 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 12:17:29 -0600 Subject: [petsc-users] gamg failure with petsc-dev In-Reply-To: <533B0129.2030905@imperial.ac.uk> References: <532B19AD.50105@imperial.ac.uk> <87lhw4yy9n.fsf@jedbrown.org> <532C23D6.7000400@imperial.ac.uk> <53306FA8.4040001@imperial.ac.uk> <533B0129.2030905@imperial.ac.uk> Message-ID: <87d2h0didi.fsf@jedbrown.org> Stephan Kramer writes: > Yes indeed. I've come to realize this now by looking into how smoothed > aggregation with a near null space actually works. We currently have > our dofs numbered the wrong way around (vertices on the inside, > velocity component on the outside - which made sense for other eqns we > solve with the model) so will take a bit of work, but might well be > worth the effort The memory streaming and cache reuse is much better if you interlace the degrees of freedom. This is as true now as it was at the time of the PETSc-FUN3D papers. When evaluating the "physics", it can be useful to pack the interlaced degrees of freedom into a vector-friendly ordering. The AMG solve is plenty expensive that you can pack/solve/unpack an interlaced vector at negligible cost without changing the rest of your code. Mark, should we provide some more flexible way to label "fields"? It will be more complicated than the present code and I think packing into interlaced format is faster anyway. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From jed at jedbrown.org Tue Apr 1 13:28:22 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 01 Apr 2014 12:28:22 -0600 Subject: [petsc-users] MatView and complex matrices In-Reply-To: <533AFE4E.3020203@gmail.com> References: <877g79az9d.fsf@jedbrown.org> <533AFE4E.3020203@gmail.com> Message-ID: <877g78dhvd.fsf@jedbrown.org> Torquil Macdonald S?rensen writes: > On 01/04/14 16:41, Jed Brown wrote: >> Torquil Macdonald S?rensen writes: >> >> The complex version could make two plots. > > Yes, that would be very useful. Hmm, this would be nicer if PETSc plotting had "subfig" capability. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue Apr 1 13:54:09 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 1 Apr 2014 13:54:09 -0500 Subject: [petsc-users] MatView and complex matrices In-Reply-To: <877g78dhvd.fsf@jedbrown.org> References: <877g79az9d.fsf@jedbrown.org> <533AFE4E.3020203@gmail.com> <877g78dhvd.fsf@jedbrown.org> Message-ID: <99352DEA-9E34-47B8-BCF3-C53617299BAD@mcs.anl.gov> On Apr 1, 2014, at 1:28 PM, Jed Brown wrote: > Torquil Macdonald S?rensen writes: > >> On 01/04/14 16:41, Jed Brown wrote: >>> Torquil Macdonald S?rensen writes: >>> >>> The complex version could make two plots. >> >> Yes, that would be very useful. > > Hmm, this would be nicer if PETSc plotting had "subfig" capability. It does. In a couple of different ways. 1) PetscViewerDrawGetDraw(viewer,window number,&draw) allows you to create any number of draw windows associated with a Draw viewer, so if complex just request another draw for the imaginary part 2) PetscDrawViewPortsCreate() and friends. ViewPorts are like subfig In this case since you are starting with a viewer I would use 1. I intended a long time ago to merge the two concepts so one could trivially switch between having multiple viewports in the same window and multiple windows but life got in the way. Barry From torquil at gmail.com Tue Apr 1 17:00:51 2014 From: torquil at gmail.com (=?UTF-8?B?VG9ycXVpbCBNYWNkb25hbGQgU8O4cmVuc2Vu?=) Date: Wed, 02 Apr 2014 00:00:51 +0200 Subject: [petsc-users] Problem with TS and PCMG In-Reply-To: <87r4991zxx.fsf@jedbrown.org> References: <52AC6127.2010407@gmail.com> <87eh5952uu.fsf@jedbrown.org> <52B20F99.3080104@gmail.com> <87r4991zxx.fsf@jedbrown.org> Message-ID: <533B3713.7060701@gmail.com> On 19/12/13 00:50, Jed Brown wrote: >> I'm using PCMG. My minimal testcase program has everything >> hardcoded. I'm not providing any runtime options other than >> -start_in_debugger, so everything can be seen from the code. I've >> since made a couple of small changes that might influence line number >> statements from GDB, so I'm including the newest version in this >> message. 
>> >> Here is the GDB output of "bt full" and the corresponding program code: >> >> (gdb) bt full >> #1 0x00007ffdc080d172 in DMCoarsen (dm=0x1fa7df0, comm=0x7ffdbffdd9c0 >> , dmc=0x2048f00) at dm.c:1811 >> ierr = 0 >> link = 0x7ffdc1af1640 >> __func__ = "DMCoarsen" >> #2 0x00007ffdc0c41c1e in PCSetUp_MG (pc=0x1ff7570) at mg.c:593 >> kdm = 0x1d5b370 >> p = 0xb453 >> rscale = 0x1ff7570 >> lu = PETSC_FALSE >> svd = (PETSC_TRUE | unknown: 32764) >> dB = 0x0 >> __func__ = "PCSetUp_MG" >> tvec = 0x0 >> viewer = 0x0 >> mglevels = 0x2006980 >> ierr = 0 >> preonly = PETSC_FALSE >> cholesky = (unknown: 3246112504) >> mg = 0x20060c0 >> i = 0 >> n = 2 >> redundant = (unknown: 1211234) >> use_amat = PETSC_TRUE >> dms = 0x2048f00 >> cpc = 0x200f620 >> dump = PETSC_FALSE >> opsset = PETSC_FALSE >> dA = 0x7ffdc1af1640 >> uflag = DIFFERENT_NONZERO_PATTERN > Okay, this is the relevant bit of code. > > /* Skipping this for galerkin==2 (externally managed hierarchy such as ML and GAMG). Cleaner logic here would be great. Wrap ML/GAMG as DMs? */ > if (pc->dm && mg->galerkin != 2 && !pc->setupcalled) { > /* construct the interpolation from the DMs */ > Mat p; > Vec rscale; > ierr = PetscMalloc1(n,&dms);CHKERRQ(ierr); > dms[n-1] = pc->dm; > for (i=n-2; i>-1; i--) { > DMKSP kdm; > ierr = DMCoarsen(dms[i+1],MPI_COMM_NULL,&dms[i]);CHKERRQ(ierr); > ierr = KSPSetDM(mglevels[i]->smoothd,dms[i]);CHKERRQ(ierr); > if (mg->galerkin) {ierr = KSPSetDMActive(mglevels[i]->smoothd,PETSC_FALSE);CHKERRQ(ierr);} > ierr = DMGetDMKSPWrite(dms[i],&kdm);CHKERRQ(ierr); > /* Ugly hack so that the next KSPSetUp() will use the RHS that we set. A better fix is to change dmActive to take > * a bitwise OR of computing the matrix, RHS, and initial iterate. */ > kdm->ops->computerhs = NULL; > kdm->rhsctx = NULL; > if (!mglevels[i+1]->interpolate) { > ierr = DMCreateInterpolation(dms[i],dms[i+1],&p,&rscale);CHKERRQ(ierr); > > I'm inclined to implement DMCoarsen_Shell() so that it carries through > to the levels. It won't be used for interpolation because of the final > conditional (the user has already set mglevels[i+1]->interpolate). This > is slightly annoying because it's one more function that users have to > implement. I think the patch below will also work, though I'm nervous > about DMShellSetCreateMatrix(). 
> > diff --git i/src/dm/interface/dm.c w/src/dm/interface/dm.c > index ec60763..c987200 100644 > --- i/src/dm/interface/dm.c > +++ w/src/dm/interface/dm.c > @@ -1943,7 +1943,11 @@ PetscErrorCode DMCoarsen(DM dm, MPI_Comm comm, DM *dmc) > > PetscFunctionBegin; > PetscValidHeaderSpecific(dm,DM_CLASSID,1); > - ierr = (*dm->ops->coarsen)(dm, comm, dmc);CHKERRQ(ierr); > + if (dm->ops->coarsen) { > + ierr = (*dm->ops->coarsen)(dm, comm, dmc);CHKERRQ(ierr); > + } else { > + ierr = DMShellCreate(comm,dmc);CHKERRQ(ierr); > + } > (*dmc)->ops->creatematrix = dm->ops->creatematrix; > ierr = PetscObjectCopyFortranFunctionPointers((PetscObject)dm,(PetscObject)*dmc);CHKERRQ(ierr); > (*dmc)->ctx = dm->ctx; > I tried this patch against 3.4.4, and got the following error when running my test program: tmac at lenovo:/tmp/build$ ./ts_multigrid.exe [0]PETSC ERROR: PetscCommDuplicate() line 148 in /tmp/petsc-3.4.4/src/sys/objects/tagm.c [0]PETSC ERROR: PetscHeaderCreate_Private() line 55 in /tmp/petsc-3.4.4/src/sys/objects/inherit.c [0]PETSC ERROR: DMCreate() line 82 in /tmp/petsc-3.4.4/src/dm/interface/dm.c [0]PETSC ERROR: DMShellCreate() line 589 in /tmp/petsc-3.4.4/src/dm/impls/shell/dmshell.c [0]PETSC ERROR: DMCoarsen() line 1815 in /tmp/petsc-3.4.4/src/dm/interface/dm.c [0]PETSC ERROR: PCSetUp_MG() line 593 in /tmp/petsc-3.4.4/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCSetUp() line 890 in /tmp/petsc-3.4.4/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /tmp/petsc-3.4.4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: KSPSolve() line 399 in /tmp/petsc-3.4.4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: SNESSolve_KSPONLY() line 44 in /tmp/petsc-3.4.4/src/snes/impls/ksponly/ksponly.c [0]PETSC ERROR: SNESSolve() line 3636 in /tmp/petsc-3.4.4/src/snes/interface/snes.c [0]PETSC ERROR: TSStep_Theta() line 183 in /tmp/petsc-3.4.4/src/ts/impls/implicit/theta/theta.c [0]PETSC ERROR: TSStep() line 2507 in /tmp/petsc-3.4.4/src/ts/interface/ts.c [0]PETSC ERROR: main() line 57 in "unknowndirectory/"/home/tmac/programming/petsc/source/ts_multigrid.cpp -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- The test program ts_multigrid.cpp is: #include "petscts.h" int main(int argc, char ** argv) { PetscErrorCode ierr = PetscInitialize(&argc, &argv, 0, 0); CHKERRQ(ierr); int const nb_dof_fine = 5; int const nb_dof_coarse = 3; Vec x; // A vector to hold the solution on the fine grid ierr = VecCreateSeq(PETSC_COMM_SELF, nb_dof_fine, &x); CHKERRQ(ierr); Mat Id; // Identity matrix on the vector space of the fine grid ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, nb_dof_fine, nb_dof_fine, 1, 0, &Id); CHKERRQ(ierr); for(int i = 0; i != nb_dof_fine; ++i) { MatSetValue(Id, i, i, 1.0, INSERT_VALUES); } ierr = MatAssemblyBegin(Id, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); ierr = MatAssemblyEnd(Id, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); Mat interpolation; // Multigrid interpolation matrix ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, nb_dof_fine, nb_dof_coarse, 2, 0, &interpolation); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 0, 0, 1.0, INSERT_VALUES); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 1, 0, 0.5, INSERT_VALUES); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 1, 1, 0.5, INSERT_VALUES); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 2, 1, 1.0, INSERT_VALUES); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 3, 1, 0.5, INSERT_VALUES); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 3, 2, 0.5, INSERT_VALUES); CHKERRQ(ierr); ierr = MatSetValue(interpolation, 4, 2, 1.0, INSERT_VALUES); CHKERRQ(ierr); ierr = MatAssemblyBegin(interpolation, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); ierr = MatAssemblyEnd(interpolation, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); TS ts; ierr = TSCreate(PETSC_COMM_SELF, &ts); CHKERRQ(ierr); ierr = TSSetProblemType(ts, TS_LINEAR); CHKERRQ(ierr); ierr = TSSetType(ts, TSBEULER); ierr = TSSetInitialTimeStep(ts, 0.0, 0.1); CHKERRQ(ierr); ierr = TSSetSolution(ts, x); CHKERRQ(ierr); ierr = TSSetIFunction(ts, 0, TSComputeIFunctionLinear, 0); CHKERRQ(ierr); ierr = TSSetIJacobian(ts, Id, Id, TSComputeIJacobianConstant, 0); CHKERRQ(ierr); KSP ksp; ierr = TSGetKSP(ts, &ksp); CHKERRQ(ierr); PC pc; ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr); ierr = PCSetType(pc, PCMG); CHKERRQ(ierr); ierr = PCMGSetLevels(pc, 2, 0); CHKERRQ(ierr); ierr = PCMGSetInterpolation(pc, 1, interpolation); CHKERRQ(ierr); ierr = PCMGSetGalerkin(pc, PETSC_TRUE); CHKERRQ(ierr); ierr = TSSetFromOptions(ts); CHKERRQ(ierr); ierr = TSSetUp(ts); CHKERRQ(ierr); ierr = TSStep(ts); CHKERRQ(ierr); ierr = TSDestroy(&ts); CHKERRQ(ierr); ierr = MatDestroy(&interpolation); CHKERRQ(ierr); ierr = VecDestroy(&x); CHKERRQ(ierr); ierr = MatDestroy(&Id); CHKERRQ(ierr); ierr = PetscFinalize(); CHKERRQ(ierr); return 0; } -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 880 bytes Desc: OpenPGP digital signature URL: From dharmareddy84 at gmail.com Tue Apr 1 17:17:16 2014 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 1 Apr 2014 17:17:16 -0500 Subject: [petsc-users] SNESSetConvergenceTest In-Reply-To: References: <5FF817E1-8806-4965-8F76-B0504EF1F311@mcs.anl.gov> <82D00B91-6660-47AA-AC5B-F77CD33A2F99@mcs.anl.gov> <8761o2vp52.fsf@jedbrown.org> <87a9detujh.fsf@jedbrown.org> Message-ID: Hello Jed, Can you please help me fix this issue. Thanks Reddy On Thu, Mar 20, 2014 at 3:52 PM, Dharmendar Reddy wrote: > Hello Jed, > Were you able to look into this issue ? 
> > Thanks > Reddy > > On Tue, Feb 25, 2014 at 9:36 PM, Jed Brown wrote: >> Dharmendar Reddy writes: >>> Hello Jed, >>> Sorry for the compilation issues, I do not have access >>> to a machine with petsc and gfortran > 4.7 to fix the compile issues. >>> Can you make the follwing changes to the code in precision_m module. >>> REAL64 and REAL32 should be available via iso_fortran_env module, I >>> do not know why it is complaining. >>> >>> module precision_m >>> implicit none >>> integer,parameter :: DP = kind(1.0D0) >>> integer,parameter :: SP = kind(1.0E0) >>> integer,parameter :: WP=DP >>> integer,parameter :: MSL=100 ! MAX_STR_LENGTH >>> end module precision_m >> >> The f2003 dialect is not supported by mpif.h in my build of mpich-3.1. >> I'll try spinning up a new build of MPICH with -std=f2003 (hopefully its >> configure can sort this out). Did I mention Fortran is not my favorite >> language? >> >> >> /opt/mpich/include/mpif.h:16.18: >> Included at /home/jed/petsc/include/finclude/petscsys.h:11: >> Included at /home/jed/petsc/include/finclude/petsc.h:7: >> Included at /home/jed/petsc/include/finclude/petsc.h90:5: >> Included at Solver.F90:160: >> >> CHARACTER*1 MPI_ARGVS_NULL(1,1) >> 1 >> Warning: Obsolescent feature: Old-style character length at (1) >> /opt/mpich/include/mpif.h:17.18: >> Included at /home/jed/petsc/include/finclude/petscsys.h:11: >> Included at /home/jed/petsc/include/finclude/petsc.h:7: >> Included at /home/jed/petsc/include/finclude/petsc.h90:5: >> Included at Solver.F90:160: >> >> CHARACTER*1 MPI_ARGV_NULL(1) >> 1 >> Warning: Obsolescent feature: Old-style character length at (1) >> /opt/mpich/include/mpif.h:528.16: >> Included at /home/jed/petsc/include/finclude/petscsys.h:11: >> Included at /home/jed/petsc/include/finclude/petsc.h:7: >> Included at /home/jed/petsc/include/finclude/petsc.h90:5: >> Included at Solver.F90:160: >> >> integer*8 MPI_DISPLACEMENT_CURRENT >> 1 >> Error: GNU Extension: Nonstandard type declaration INTEGER*8 at (1) >> /opt/mpich/include/mpif.h:546.13: >> Included at /home/jed/petsc/include/finclude/petscsys.h:11: >> Included at /home/jed/petsc/include/finclude/petsc.h:7: >> Included at /home/jed/petsc/include/finclude/petsc.h90:5: >> Included at Solver.F90:160: >> >> REAL*8 MPI_WTIME, MPI_WTICK >> 1 >> Error: GNU Extension: Nonstandard type declaration REAL*8 at (1) From mfadams at lbl.gov Tue Apr 1 17:21:50 2014 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 1 Apr 2014 18:21:50 -0400 Subject: [petsc-users] [Bitbucket] Pull request #161: GAMG: shift coarse grids to avoid zero diags from low rank coarse grid space (petsc/petsc) In-Reply-To: References: Message-ID: Jed: get a thesaurus! I am getting sick of 'elide' :) The fix would be to eliminate the rotation mode(s) (or any of the modes for that matter) for that vertex of P. It just happens on the first coarse grid where the number of equation per vertex goes up (2-->3 in 2D elasticity). This would be problematic for a few reasons but the most obvious is that the coarse grid would have variable block size. Note, because we have Galerkin coarse grids we can really just add anything to the diagonal. I've had good luck with doing this but it is tedious to get you implementations to deal with these fake equations. But I've found that getting fancy does not help convergence and is not perfect anyway. 
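For reference, the modes in question are the rigid-body modes the user supplies as the near-null space of the fine-grid operator. A minimal sketch of that setup for 2D elasticity follows; the names are illustrative only (A is the assembled elasticity matrix, coords is a Vec of interlaced nodal coordinates whose block size has been set to the spatial dimension):

  MatNullSpace   nearnull;
  PetscErrorCode ierr;

  ierr = MatSetBlockSize(A,2);CHKERRQ(ierr);                          /* 2 dof per vertex in 2D */
  ierr = MatNullSpaceCreateRigidBody(coords,&nearnull);CHKERRQ(ierr); /* 3 modes in 2D: 2 translations + 1 rotation */
  ierr = MatSetNearNullSpace(A,nearnull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);

GAMG builds its tentative prolongator from these modes, which is where the singleton aggregates discussed below can leave the coarse space rank deficient.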
On Tue, Apr 1, 2014 at 5:59 PM, Jed Brown wrote: > Jed Brown commented on pull request #161: GAMG: shift coarse grids > to avoid zero diags from low rank coarse grid space > [jedbrown] *Jed Brown* commented on pull request #161: GAMG: shift > coarse grids to avoid zero diags from low rank coarse grid space > > Hmm, why can't singletons be elided completely? Then you would have a zero > row of P. > View this pull requestor add a comment by replying to this email. > In reply to *Mark Adams* > > "avoid[ing] zero columns " columns is impossible. And the fix is the same > thing that people do for BCs: make an equation not in the math; it is just > going along for the ride. > > I had custom code to do this in Prometheus, it would try to add singletons > to other aggregates, in fact it would break up all small aggregates and try > to distribute them, but it is not guaranteed to succeed. > > I could add post processing step where I scan for singletons and then add > them to other aggregates but this will not always work: for instance, given > a high threshold a (good) input problem might have a singleton in the > graph. What do you do? lower the threshold, redo the graph and try again. > Really? > > This is crufty MPI code: any singletons?; send messages to you neighbors: > "can you take me?"; receive replies; pick one; send it. > > What happens if there are two singletons next to each other? Now I say if > I can take you and I am a singleton, and I have a higher processor ID I > will take you and you must give me your singleton so I will no longer be a > singleton. Fine. What if there are three singletons next each other .... > Just to avoid a BC like equation? This there a polynomial time algorithm to > do this? > Unwatch this pull requestto stop receiving email updates. [image: > Bitbucket] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brune at mcs.anl.gov Tue Apr 1 17:25:06 2014 From: brune at mcs.anl.gov (Peter Brune) Date: Tue, 1 Apr 2014 17:25:06 -0500 Subject: [petsc-users] [Bitbucket] Pull request #161: GAMG: shift coarse grids to avoid zero diags from low rank coarse grid space (petsc/petsc) In-Reply-To: References: Message-ID: On Tue, Apr 1, 2014 at 5:21 PM, Mark Adams wrote: > Jed: get a thesaurus! I am getting sick of 'elide' :) > The thesaurus is the problem, man! The thesaurus is the problem! - Peter > > > The fix would be to eliminate the rotation mode(s) (or any of the modes > for that matter) for that vertex of P. It just happens on the first coarse > grid where the number of equation per vertex goes up (2-->3 in 2D > elasticity). This would be problematic for a few reasons but the most > obvious is that the coarse grid would have variable block size. > > Note, because we have Galerkin coarse grids we can really just add > anything to the diagonal. I've had good luck with doing this but it is > tedious to get you implementations to deal with these fake equations. But > I've found that getting fancy does not help convergence and is not perfect > anyway. > > > > On Tue, Apr 1, 2014 at 5:59 PM, Jed Brown < > pullrequests-reply at bitbucket.org> wrote: > >> Jed Brown commented on pull request #161: GAMG: shift coarse >> grids to avoid zero diags from low rank coarse grid space >> [jedbrown] *Jed Brown* commented on pull request #161: GAMG: shift >> coarse grids to avoid zero diags from low rank coarse grid space >> >> Hmm, why can't singletons be elided completely? Then you would have a >> zero row of P. 
>> View this pull requestor add a comment by replying to this email. >> In reply to *Mark Adams* >> >> "avoid[ing] zero columns " columns is impossible. And the fix is the same >> thing that people do for BCs: make an equation not in the math; it is just >> going along for the ride. >> >> I had custom code to do this in Prometheus, it would try to add >> singletons to other aggregates, in fact it would break up all small >> aggregates and try to distribute them, but it is not guaranteed to succeed. >> >> I could add post processing step where I scan for singletons and then add >> them to other aggregates but this will not always work: for instance, given >> a high threshold a (good) input problem might have a singleton in the >> graph. What do you do? lower the threshold, redo the graph and try again. >> Really? >> >> This is crufty MPI code: any singletons?; send messages to you neighbors: >> "can you take me?"; receive replies; pick one; send it. >> >> What happens if there are two singletons next to each other? Now I say if >> I can take you and I am a singleton, and I have a higher processor ID I >> will take you and you must give me your singleton so I will no longer be a >> singleton. Fine. What if there are three singletons next each other .... >> Just to avoid a BC like equation? This there a polynomial time algorithm to >> do this? >> Unwatch this pull requestto stop receiving email updates. [image: >> Bitbucket] >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Apr 2 05:18:55 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 02 Apr 2014 18:18:55 +0800 Subject: [petsc-users] Difference between ex5f, ex5f90 and ex5f90t Message-ID: <533BE40F.3040200@gmail.com> Hi, May I know what's the difference between ex5f, ex5f90 and ex5f90t? I wrote my CFD code in F90, and I would like to create a Poisson solver subroutine based on the ex5 example. The solver will be placed inside a module. So which is the preferred format to use? Btw, my code is something like this: /*module PETSc_solvers*//* *//* *//*use set_matrix*//* *//* *//*use ibm*//* *//* *//*implicit none*//* *//* *//*contains*//* *//* *//*subroutine semi_momentum_simple_xyz(du,dv,dw)*//* *//* *//*!discretise momentum eqn using semi-implicit mtd*//* *//* *//*!simple except east,west*//* *//* *//*#include "finclude/petsc.h90"*//* PetscViewer viewer PetscScalar, pointer :: xx_v(:)*//* *//* *//*integer :: i,j,k,ijk,ierr,JJ,kk,*/ ... -- Thank you Yours sincerely, TAY wee-beng -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.ortiz at ciemat.es Wed Apr 2 07:19:49 2014 From: christophe.ortiz at ciemat.es (Christophe Ortiz) Date: Wed, 2 Apr 2014 14:19:49 +0200 Subject: [petsc-users] Problem with compilation C++ file. -std=c++0x option missing Message-ID: Hi all, Since I'm solving problems with many dof (thousands) I developed a programming interface in C++ to manage and automatize the coupling between the different fields and to distribute the different terms in the matrix. Now I am adding the different PETSc commands but the compiler complains with the typical error when using C++: This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options. In the case of C++ files (without PETSc) this is solved by simply adding the option -std=c++0x in the compilation line. 
I tried to add the -std=c++0x option in the makefile as follows but it does not work: diffusion: diffusion.o chkopts -${CLINKER} -std=gnu++0x -o diffusion diffusion.o ${PETSC_TS_LIB} ${RM} diffusion.o Where can I add this option in the makefile used to compile PETSc code ? Many thanks in advance. Christophe -- Q Por favor, piense en el medio ambiente antes de imprimir este mensaje. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.ortiz at ciemat.es Wed Apr 2 07:27:17 2014 From: christophe.ortiz at ciemat.es (Christophe Ortiz) Date: Wed, 2 Apr 2014 14:27:17 +0200 Subject: [petsc-users] Problem with compilation C++ file. -std=c++0x option missing In-Reply-To: References: Message-ID: Obviously in CFLAGS = -std=c++0x Sorry ! I need holidays :-). Christophe CIEMAT Laboratorio Nacional de Fusi?n por Confinamiento Magn?tico Unidad de Materiales Edificio 2 - Planta 0 - Despacho 28m Avenida Complutense 40, 28040 Madrid, Spain Tel: +34 91496 2582 Fax: +34 91346 6442 -- Q Por favor, piense en el medio ambiente antes de imprimir este mensaje. Please consider the environment before printing this email. On Wed, Apr 2, 2014 at 2:19 PM, Christophe Ortiz wrote: > Hi all, > > Since I'm solving problems with many dof (thousands) I developed a > programming interface in C++ to manage and automatize the coupling between > the different fields and to distribute the different terms in the matrix. > Now I am adding the different PETSc commands but the compiler complains > with the typical error when using C++: > > This file requires compiler and library support for the upcoming ISO C++ > standard, C++0x. This support is currently experimental, and must be > enabled with the -std=c++0x or -std=gnu++0x compiler options. > > In the case of C++ files (without PETSc) this is solved by simply adding > the option -std=c++0x in the compilation line. I tried to add the > -std=c++0x option in the makefile as follows but it does not work: > > diffusion: diffusion.o chkopts > -${CLINKER} -std=gnu++0x -o diffusion diffusion.o ${PETSC_TS_LIB} > ${RM} diffusion.o > > > Where can I add this option in the makefile used to compile PETSc code ? > > Many thanks in advance. > Christophe > > -- > Q > Por favor, piense en el medio ambiente antes de imprimir este mensaje. > Please consider the environment before printing this email. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Wed Apr 2 08:23:31 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 2 Apr 2014 09:23:31 -0400 Subject: [petsc-users] questions about snes ex3.c Message-ID: Hello everyone, I have two quick questions about src/snes/examples/tutorials/ex3.c. 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation before MatMPIAIJSetPreallocation? I found that without the seqaij call, the program crashed. However, the mat J we want to create is the mpiaij. 2) In line 390-397, it sets the values of the residue function at the boundary. Where do these values come from? In other words, I am not clear about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; Thank you. Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Apr 2 08:26:28 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 2 Apr 2014 08:26:28 -0500 Subject: [petsc-users] Difference between ex5f, ex5f90 and ex5f90t In-Reply-To: <533BE40F.3040200@gmail.com> References: <533BE40F.3040200@gmail.com> Message-ID: On Wed, Apr 2, 2014 at 5:18 AM, TAY wee-beng wrote: > Hi, > > May I know what's the difference between ex5f, ex5f90 and ex5f90t? > This describes the various options: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html Matt > I wrote my CFD code in F90, and I would like to create a Poisson solver > subroutine based on the ex5 example. The solver will be placed inside a > module. > > So which is the preferred format to use? > > Btw, my code is something like this: > > *module PETSc_solvers* > > *use set_matrix* > > *use ibm* > > *implicit none* > > *contains* > > *subroutine semi_momentum_simple_xyz(du,dv,dw)* > > *!discretise momentum eqn using semi-implicit mtd* > > *!simple except east,west* > > *#include "finclude/petsc.h90"* > > > > * PetscViewer viewer PetscScalar, pointer :: xx_v(:)* > > *integer :: i,j,k,ijk,ierr,JJ,kk,* > > ... > > -- > Thank you > > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 2 08:49:59 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 2 Apr 2014 08:49:59 -0500 Subject: [petsc-users] questions about snes ex3.c In-Reply-To: References: Message-ID: On Wed, Apr 2, 2014 at 8:23 AM, Xiangdong wrote: > Hello everyone, > > I have two quick questions about src/snes/examples/tutorials/ex3.c. > > 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation > before MatMPIAIJSetPreallocation? I found that without the seqaij call, the > program crashed. However, the mat J we want to create is the mpiaij. > One is used in serial, the other in parallel. Really, you can just call http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html Matt > 2) In line 390-397, it sets the values of the residue function at the > boundary. Where do these values come from? In other words, I am not clear > about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; > > Thank you. > > Xiangdong > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 2 08:50:40 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 2 Apr 2014 08:50:40 -0500 Subject: [petsc-users] questions about snes ex3.c In-Reply-To: References: Message-ID: On Wed, Apr 2, 2014 at 8:23 AM, Xiangdong wrote: > Hello everyone, > > I have two quick questions about src/snes/examples/tutorials/ex3.c. > > 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation > before MatMPIAIJSetPreallocation? I found that without the seqaij call, the > program crashed. However, the mat J we want to create is the mpiaij. > > 2) In line 390-397, it sets the values of the residue function at the > boundary. Where do these values come from? 
In other words, I am not clear > about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; > These are just the BC we choose. Matt > Thank you. > > Xiangdong > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Wed Apr 2 13:44:01 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 2 Apr 2014 14:44:01 -0400 Subject: [petsc-users] questions about snes ex3.c In-Reply-To: References: Message-ID: On Wed, Apr 2, 2014 at 9:49 AM, Matthew Knepley wrote: > On Wed, Apr 2, 2014 at 8:23 AM, Xiangdong wrote: > >> Hello everyone, >> >> I have two quick questions about src/snes/examples/tutorials/ex3.c. >> >> 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation >> before MatMPIAIJSetPreallocation? I found that without the seqaij call, the >> program crashed. However, the mat J we want to create is the mpiaij. >> > > One is used in serial, the other in parallel. Really, you can just call > So depending on the number of processors given in the runtime, either seqaij or mpiaij is not executed, thought both lines are complied. Is it correct? Thank you. Xiangdong > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html > > Matt > > >> 2) In line 390-397, it sets the values of the residue function at the >> boundary. Where do these values come from? In other words, I am not clear >> about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; >> >> Thank you. >> >> Xiangdong >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Wed Apr 2 13:48:49 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 2 Apr 2014 14:48:49 -0400 Subject: [petsc-users] questions about snes ex3.c In-Reply-To: References: Message-ID: On Wed, Apr 2, 2014 at 9:50 AM, Matthew Knepley wrote: > On Wed, Apr 2, 2014 at 8:23 AM, Xiangdong wrote: > >> Hello everyone, >> >> I have two quick questions about src/snes/examples/tutorials/ex3.c. >> >> 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation >> before MatMPIAIJSetPreallocation? I found that without the seqaij call, the >> program crashed. However, the mat J we want to create is the mpiaij. >> >> 2) In line 390-397, it sets the values of the residue function at the >> boundary. Where do these values come from? In other words, I am not clear >> about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; >> > > These are just the BC we choose. > The exact solution used in the code is u=x^3. The thing not clear to me is how this is translated to the boundary conditions I mentioned above. More like a formulation question, in fact. Thank you. Xiangdong > > Matt > > >> Thank you. >> >> Xiangdong >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Wed Apr 2 14:26:39 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 2 Apr 2014 14:26:39 -0500 Subject: [petsc-users] questions about snes ex3.c In-Reply-To: References: Message-ID: <31AB3F02-E232-4A63-AA8B-E252E6C0F00F@mcs.anl.gov> On Apr 2, 2014, at 1:48 PM, Xiangdong wrote: > > > > On Wed, Apr 2, 2014 at 9:50 AM, Matthew Knepley wrote: > On Wed, Apr 2, 2014 at 8:23 AM, Xiangdong wrote: > Hello everyone, > > I have two quick questions about src/snes/examples/tutorials/ex3.c. > > 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation before MatMPIAIJSetPreallocation? I found that without the seqaij call, the program crashed. However, the mat J we want to create is the mpiaij. > > 2) In line 390-397, it sets the values of the residue function at the boundary. Where do these values come from? In other words, I am not clear about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; At 0 we want to force the solution to be 0. and at 1 we want to force the solution to be 1. So this is just a way to apply Dirichlet boundary conditions. Barry > > These are just the BC we choose. > > The exact solution used in the code is u=x^3. The thing not clear to me is how this is translated to the boundary conditions I mentioned above. More like a formulation question, in fact. > > Thank you. > > Xiangdong > > > Matt > > Thank you. > > Xiangdong > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From epscodes at gmail.com Wed Apr 2 15:03:00 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 2 Apr 2014 16:03:00 -0400 Subject: [petsc-users] questions about snes ex3.c In-Reply-To: <31AB3F02-E232-4A63-AA8B-E252E6C0F00F@mcs.anl.gov> References: <31AB3F02-E232-4A63-AA8B-E252E6C0F00F@mcs.anl.gov> Message-ID: I see your point. Since the residue ff[0] and ff[xs+xm-1] are going to be minimized to zero, it forces that xx[0]=0 and xx[xs+xm-1]-1.0=0. So the BC is satisfied. Thank you. Xiangdong On Wed, Apr 2, 2014 at 3:26 PM, Barry Smith wrote: > > On Apr 2, 2014, at 1:48 PM, Xiangdong wrote: > > > > > > > > > On Wed, Apr 2, 2014 at 9:50 AM, Matthew Knepley > wrote: > > On Wed, Apr 2, 2014 at 8:23 AM, Xiangdong wrote: > > Hello everyone, > > > > I have two quick questions about src/snes/examples/tutorials/ex3.c. > > > > 1) In line 150-151, why do we need to call MatSeqAIJSetPreallocation > before MatMPIAIJSetPreallocation? I found that without the seqaij call, the > program crashed. However, the mat J we want to create is the mpiaij. > > > > 2) In line 390-397, it sets the values of the residue function at the > boundary. Where do these values come from? In other words, I am not clear > about ff[0]=xx[0] and ff[xs+xm-1] = xx[xs+xm-1] - 1.0; > > At 0 we want to force the solution to be 0. and at 1 we want to force > the solution to be 1. So this is just a way to apply Dirichlet boundary > conditions. > > Barry > > > > > These are just the BC we choose. > > > > The exact solution used in the code is u=x^3. The thing not clear to me > is how this is translated to the boundary conditions I mentioned above. > More like a formulation question, in fact. > > > > Thank you. > > > > Xiangdong > > > > > > Matt > > > > Thank you. > > > > Xiangdong > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> > -- Norbert Wiener > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmund.ervik at ntnu.no Thu Apr 3 08:43:53 2014 From: asmund.ervik at ntnu.no (=?ISO-8859-1?Q?=C5smund_Ervik?=) Date: Thu, 03 Apr 2014 15:43:53 +0200 Subject: [petsc-users] Trying to write Fortran version of ksp/ex34.c Message-ID: <533D6599.3010507@ntnu.no> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dear PETSc users, I'm trying to write a Poisson solver in Fortran using the KSPSetComputeOperators etc. framework. When debugging this, I ended up modifying ksp/ex22.f so that it matches ksp/ex34.c. The difference between these is that ex34.c has a non-constant RHS and Neumann BCs, which is closer to what I want. Now, when I run these two programs I get the following: ./ex22f_mod Residual, L2 norm 1.007312 Error, sup norm 0.020941 Error, L1 norm 10.687882 Error, L2 norm 0.340425 ./ex34 Residual, L2 norm 1.07124e-05 Error, sup norm 0.0209405 Error, L1 norm 0.00618512 Error, L2 norm 0.000197005 but when I MatView/VecView the matrix and RHS for each example (write them to file), there is no difference. I have attached ex22f_mod.F90, any suggestions on what the error is? Once this example works, you can of course include it with PETSc. Regards, ?smund -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBAgAGBQJTPWWZAAoJED+FDAHgGz19aNYIANyq9k3a5kHzTlAdybkZCYDw yKtDE5l/4iWYmNL49FH8AocHVcRivLaeJG5CGKqySFtZUXOlC9DM7rn4UmuQecni dgIQuTk0Ym+OJccHyT5xxnebpFVNrIOTpInfQaDW6dTyeL1svAMeHqslKaGepySL q/cODbgNDYgl6uumB+POMZevtlM6HPhl/1m7HofcHC9upvTRjSPqP1cg+kg+/8m2 Fdie/X7PCBfShrAys94kNXNcwtbO7taauphkQGMfyl0gUd+lFATG6zrEZdDqSFlV c44GFFbxW/SRIDBXdOeX9/cy75KW5do1Sildwb6R4H/i7t6/hCJUJuss7FHjmLc= =tV3N -----END PGP SIGNATURE----- -------------- next part -------------- A non-text attachment was scrubbed... Name: ex22f_mod.F90 Type: text/x-fortran Size: 11443 bytes Desc: not available URL: From knepley at gmail.com Thu Apr 3 08:56:10 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 3 Apr 2014 08:56:10 -0500 Subject: [petsc-users] Trying to write Fortran version of ksp/ex34.c In-Reply-To: <533D6599.3010507@ntnu.no> References: <533D6599.3010507@ntnu.no> Message-ID: On Thu, Apr 3, 2014 at 8:43 AM, ?smund Ervik wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Dear PETSc users, > > I'm trying to write a Poisson solver in Fortran using the > KSPSetComputeOperators etc. framework. When debugging this, I ended up > modifying ksp/ex22.f so that it matches ksp/ex34.c. The difference > between these is that ex34.c has a non-constant RHS and Neumann BCs, > which is closer to what I want. > If it has Neumann conditions, then it has a null space. Have you included this in your solver? That can cause a residual offset. Matt > Now, when I run these two programs I get the following: > > ./ex22f_mod > Residual, L2 norm 1.007312 > Error, sup norm 0.020941 > Error, L1 norm 10.687882 > Error, L2 norm 0.340425 > > ./ex34 > Residual, L2 norm 1.07124e-05 > Error, sup norm 0.0209405 > Error, L1 norm 0.00618512 > Error, L2 norm 0.000197005 > > but when I MatView/VecView the matrix and RHS for each example (write > them to file), there is no difference. I have attached ex22f_mod.F90, > any suggestions on what the error is? > > Once this example works, you can of course include it with PETSc. 
> > Regards, > ?smund > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2.0.22 (GNU/Linux) > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ > > iQEcBAEBAgAGBQJTPWWZAAoJED+FDAHgGz19aNYIANyq9k3a5kHzTlAdybkZCYDw > yKtDE5l/4iWYmNL49FH8AocHVcRivLaeJG5CGKqySFtZUXOlC9DM7rn4UmuQecni > dgIQuTk0Ym+OJccHyT5xxnebpFVNrIOTpInfQaDW6dTyeL1svAMeHqslKaGepySL > q/cODbgNDYgl6uumB+POMZevtlM6HPhl/1m7HofcHC9upvTRjSPqP1cg+kg+/8m2 > Fdie/X7PCBfShrAys94kNXNcwtbO7taauphkQGMfyl0gUd+lFATG6zrEZdDqSFlV > c44GFFbxW/SRIDBXdOeX9/cy75KW5do1Sildwb6R4H/i7t6/hCJUJuss7FHjmLc= > =tV3N > -----END PGP SIGNATURE----- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 3 12:38:02 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 3 Apr 2014 12:38:02 -0500 Subject: [petsc-users] Reusing search directions In-Reply-To: <17A78B9D13564547AC894B88C159674720378C21@XMBX4.uibk.ac.at> References: <17A78B9D13564547AC894B88C159674720378C21@XMBX4.uibk.ac.at> Message-ID: <183105E5-D3F6-40DF-BDB1-ABA621CC988C@mcs.anl.gov> On Mar 28, 2014, at 11:56 AM, De Groof, Vincent Frans Maria wrote: > Hi all, > > > for the reanalysis of a set of problems (think sensitivity study, reliability, ... ), I'd like to reuse (some) of the search directions of previous problems I solved. The idea is to orthogonolize the new search directions also to a set of user-defined vectors. This idea is not new, and while browsing through the mailing lists, there have been a few discussions on related topics. > > 1* Can I access/store the old search directions of an iterative solve. Maybe through KSP? GMRES keeps its most recent Krylov space basis around so you can access that. See src/ksp/ksp/impls/gmres > 2* Is there an augmented conjugated gradient implementation available? I've read some discussions on whether or not to implement this since a multigrid approach can reach the same objective. But I don't know if it was implemented or not. No. Barry > > > > thanks, > Vincent From asmund.ervik at ntnu.no Thu Apr 3 14:49:41 2014 From: asmund.ervik at ntnu.no (=?iso-8859-1?Q?=C5smund_Ervik?=) Date: Thu, 3 Apr 2014 19:49:41 +0000 Subject: [petsc-users] Trying to write Fortran version of ksp/ex34.c In-Reply-To: References: <533D6599.3010507@ntnu.no>, Message-ID: <0E576811AB298343AC632BBCAAEFC37945BFC66E@WAREHOUSE08.win.ntnu.no> Hi Matt, I'm handling the null space in the same way as ex34, that is, I remove the null space (containing just the constant functions, I think) in both ComputeRHS and ComputeMatrix. However, I just tried commenting out those lines (MatNullSpaceCreate, MatSetNullSpace or MatNullSpaceRemove, MatNullSpaceDestroy) and that has no effect. The same happens in ex34.c, commenting out the lines fixing the null space makes no difference. Does that make sense? Are these examples intended to be run with some special options? I see in the makefile (e.g. runex34) that there are a lot of multigrid options, but I don't think that should matter? Also, could you confirm that ex34.c does zero Neumann BCs? The header was a bit confusing, it says u=0 but I assume it should be du/dn = 0 (zero normal derivative)? This is with 3.4.4, by the way. Thanks, ?smund ________________________________ Fra: Matthew Knepley [knepley at gmail.com] Sendt: 3. 
april 2014 15:56 Til: ?smund Ervik Kopi: petsc-users at mcs.anl.gov Emne: Re: [petsc-users] Trying to write Fortran version of ksp/ex34.c On Thu, Apr 3, 2014 at 8:43 AM, ?smund Ervik > wrote: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dear PETSc users, I'm trying to write a Poisson solver in Fortran using the KSPSetComputeOperators etc. framework. When debugging this, I ended up modifying ksp/ex22.f so that it matches ksp/ex34.c. The difference between these is that ex34.c has a non-constant RHS and Neumann BCs, which is closer to what I want. If it has Neumann conditions, then it has a null space. Have you included this in your solver? That can cause a residual offset. Matt Now, when I run these two programs I get the following: ./ex22f_mod Residual, L2 norm 1.007312 Error, sup norm 0.020941 Error, L1 norm 10.687882 Error, L2 norm 0.340425 ./ex34 Residual, L2 norm 1.07124e-05 Error, sup norm 0.0209405 Error, L1 norm 0.00618512 Error, L2 norm 0.000197005 but when I MatView/VecView the matrix and RHS for each example (write them to file), there is no difference. I have attached ex22f_mod.F90, any suggestions on what the error is? Once this example works, you can of course include it with PETSc. Regards, ?smund -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBAgAGBQJTPWWZAAoJED+FDAHgGz19aNYIANyq9k3a5kHzTlAdybkZCYDw yKtDE5l/4iWYmNL49FH8AocHVcRivLaeJG5CGKqySFtZUXOlC9DM7rn4UmuQecni dgIQuTk0Ym+OJccHyT5xxnebpFVNrIOTpInfQaDW6dTyeL1svAMeHqslKaGepySL q/cODbgNDYgl6uumB+POMZevtlM6HPhl/1m7HofcHC9upvTRjSPqP1cg+kg+/8m2 Fdie/X7PCBfShrAys94kNXNcwtbO7taauphkQGMfyl0gUd+lFATG6zrEZdDqSFlV c44GFFbxW/SRIDBXdOeX9/cy75KW5do1Sildwb6R4H/i7t6/hCJUJuss7FHjmLc= =tV3N -----END PGP SIGNATURE----- -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 3 14:54:34 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 3 Apr 2014 14:54:34 -0500 Subject: [petsc-users] Trying to write Fortran version of ksp/ex34.c In-Reply-To: <0E576811AB298343AC632BBCAAEFC37945BFC66E@WAREHOUSE08.win.ntnu.no> References: <533D6599.3010507@ntnu.no> <0E576811AB298343AC632BBCAAEFC37945BFC66E@WAREHOUSE08.win.ntnu.no> Message-ID: On Thu, Apr 3, 2014 at 2:49 PM, ?smund Ervik wrote: > Hi Matt, > > I'm handling the null space in the same way as ex34, that is, I remove the > null space (containing just the constant functions, I think) in both > ComputeRHS and ComputeMatrix. > That should be fine. > However, I just tried commenting out those lines (MatNullSpaceCreate, > MatSetNullSpace or MatNullSpaceRemove, MatNullSpaceDestroy) and that has no > effect. The same happens in ex34.c, commenting out the lines fixing the > null space makes no difference. Does that make sense? > It appears that the RHS in ex34 already has zero mean. Its possible for some Krylov iteration to preserve this property, but I think if you played around with solvers you could find one that screws it up without null space removal. > Are these examples intended to be run with some special options? I see in > the makefile (e.g. runex34) that there are a lot of multigrid options, but > I don't think that should matter? > > Also, could you confirm that ex34.c does zero Neumann BCs? 
The header was > a bit confusing, it says u=0 but I assume it should be du/dn = 0 (zero > normal derivative)? > Yes, its pure Neumann. Note that this is an easy analytical solution, so you can check your example. Matt > This is with 3.4.4, by the way. > > Thanks, > ?smund > > ------------------------------ > *Fra:* Matthew Knepley [knepley at gmail.com] > *Sendt:* 3. april 2014 15:56 > *Til:* ?smund Ervik > *Kopi:* petsc-users at mcs.anl.gov > *Emne:* Re: [petsc-users] Trying to write Fortran version of ksp/ex34.c > > On Thu, Apr 3, 2014 at 8:43 AM, ?smund Ervik wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> Dear PETSc users, >> >> I'm trying to write a Poisson solver in Fortran using the >> KSPSetComputeOperators etc. framework. When debugging this, I ended up >> modifying ksp/ex22.f so that it matches ksp/ex34.c. The difference >> between these is that ex34.c has a non-constant RHS and Neumann BCs, >> which is closer to what I want. >> > > If it has Neumann conditions, then it has a null space. Have you > included this in > your solver? That can cause a residual offset. > > Matt > > >> Now, when I run these two programs I get the following: >> >> ./ex22f_mod >> Residual, L2 norm 1.007312 >> Error, sup norm 0.020941 >> Error, L1 norm 10.687882 >> Error, L2 norm 0.340425 >> >> ./ex34 >> Residual, L2 norm 1.07124e-05 >> Error, sup norm 0.0209405 >> Error, L1 norm 0.00618512 >> Error, L2 norm 0.000197005 >> >> but when I MatView/VecView the matrix and RHS for each example (write >> them to file), there is no difference. I have attached ex22f_mod.F90, >> any suggestions on what the error is? >> >> Once this example works, you can of course include it with PETSc. >> >> Regards, >> ?smund >> >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v2.0.22 (GNU/Linux) >> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ >> >> iQEcBAEBAgAGBQJTPWWZAAoJED+FDAHgGz19aNYIANyq9k3a5kHzTlAdybkZCYDw >> yKtDE5l/4iWYmNL49FH8AocHVcRivLaeJG5CGKqySFtZUXOlC9DM7rn4UmuQecni >> dgIQuTk0Ym+OJccHyT5xxnebpFVNrIOTpInfQaDW6dTyeL1svAMeHqslKaGepySL >> q/cODbgNDYgl6uumB+POMZevtlM6HPhl/1m7HofcHC9upvTRjSPqP1cg+kg+/8m2 >> Fdie/X7PCBfShrAys94kNXNcwtbO7taauphkQGMfyl0gUd+lFATG6zrEZdDqSFlV >> c44GFFbxW/SRIDBXdOeX9/cy75KW5do1Sildwb6R4H/i7t6/hCJUJuss7FHjmLc= >> =tV3N >> -----END PGP SIGNATURE----- >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cedric.doucet at inria.fr Fri Apr 4 06:09:50 2014 From: cedric.doucet at inria.fr (Cedric Doucet) Date: Fri, 4 Apr 2014 13:09:50 +0200 (CEST) Subject: [petsc-users] Partitioner in DMPlexDistribute In-Reply-To: <1414725859.2537896.1396609649315.JavaMail.zimbra@inria.fr> Message-ID: <121915984.2538748.1396609790203.JavaMail.zimbra@inria.fr> Hello, I have just downloaded Petsc 3.4.4 and I have looked at plex.c file. In DMPlexDistribute, I cannot see where "partitioner" input argument is used. Is METIS used by default? Best regards, C?dric Doucet INRIA Paris-Rocquencourt France -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Apr 4 07:28:53 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Apr 2014 07:28:53 -0500 Subject: Re: [petsc-users] Partitioner in DMPlexDistribute In-Reply-To: <121915984.2538748.1396609790203.JavaMail.zimbra@inria.fr> References: <1414725859.2537896.1396609649315.JavaMail.zimbra@inria.fr> <121915984.2538748.1396609790203.JavaMail.zimbra@inria.fr> Message-ID: On Fri, Apr 4, 2014 at 6:09 AM, Cedric Doucet wrote: > Hello, > > I have just downloaded Petsc 3.4.4 and I have looked at plex.c file. > In DMPlexDistribute, I cannot see where "partitioner" input argument is > used. > > Is METIS used by default? > In 3.4.4, the name is just a placeholder, and it uses Chaco by default, and then ParMetis. In master, it does use the name to select the partitioner. If you use Plex, I recommend using master. Thanks, Matt > Best regards, > > Cédric Doucet > INRIA Paris-Rocquencourt > France > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cedric.doucet at inria.fr Fri Apr 4 07:33:47 2014 From: cedric.doucet at inria.fr (Cedric Doucet) Date: Fri, 4 Apr 2014 14:33:47 +0200 (CEST) Subject: Re: [petsc-users] Partitioner in DMPlexDistribute In-Reply-To: References: <1414725859.2537896.1396609649315.JavaMail.zimbra@inria.fr> <121915984.2538748.1396609790203.JavaMail.zimbra@inria.fr> Message-ID: <1624872835.2574894.1396614827192.JavaMail.zimbra@inria.fr> Hello, thank you very much for your answer. When do you plan to put this functionality in a stable release? Best, Cédric ----- Mail original ----- > De: "Matthew Knepley" > À: "Cedric Doucet" > Cc: petsc-users at mcs.anl.gov > Envoyé: Vendredi 4 Avril 2014 14:28:53 > Objet: Re: [petsc-users] Partitioner in DMPlexDistribute > On Fri, Apr 4, 2014 at 6:09 AM, Cedric Doucet < cedric.doucet at inria.fr > > wrote: > > Hello, > > > I have just downloaded Petsc 3.4.4 and I have looked at plex.c file. > > > In DMPlexDistribute, I cannot see where "partitioner" input argument is > > used. > > > Is METIS used by default? > > In 3.4.4, the name is just a placeholder, and it uses Chaco by default, and > then ParMetis. > In master, it does use the name to select the partitioner. If you use > Plex, I recommend > using master. > Thanks, > Matt > > Best regards, > > > Cédric Doucet > > > INRIA Paris-Rocquencourt > > > France > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 4 07:52:44 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Apr 2014 07:52:44 -0500 Subject: Re: [petsc-users] Partitioner in DMPlexDistribute In-Reply-To: <1624872835.2574894.1396614827192.JavaMail.zimbra@inria.fr> References: <1414725859.2537896.1396609649315.JavaMail.zimbra@inria.fr> <121915984.2538748.1396609790203.JavaMail.zimbra@inria.fr> <1624872835.2574894.1396614827192.JavaMail.zimbra@inria.fr> Message-ID: On Fri, Apr 4, 2014 at 7:33 AM, Cedric Doucet wrote: > > Hello, > thank you very much for your answer. > When do you plan to put this functionality in a stable release? > 'master' is stable, 'next' is the unstable branch. We plan to release this month. 
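For reference, a rough sketch of the call being discussed. It assumes the 3.4-style calling sequence in which DMPlexDistribute takes a partitioner-name string and an overlap; the choice of "parmetis", the variable names, and the surrounding error handling are illustrative only and untested:

   /* Sketch: redistribute a DMPlex, naming the partitioning package.
      In 3.4.4 the string is effectively a placeholder (Chaco, then ParMetis, is used);
      in master it selects the package. */
   DM dmDist = NULL;

   ierr = DMPlexDistribute(dm, "parmetis", 0, &dmDist);CHKERRQ(ierr);  /* overlap = 0 */
   if (dmDist) {                 /* only created when running on more than one process */
     ierr = DMDestroy(&dm);CHKERRQ(ierr);
     dm   = dmDist;
   }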
Matt > Best, > C?dric > > > ------------------------------ > > *De: *"Matthew Knepley" > *?: *"Cedric Doucet" > *Cc: *petsc-users at mcs.anl.gov > *Envoy?: *Vendredi 4 Avril 2014 14:28:53 > *Objet: *Re: [petsc-users] Partitioner in DMPlexDistribute > > On Fri, Apr 4, 2014 at 6:09 AM, Cedric Doucet wrote: > >> Hello, >> >> I have just downloaded Petsc 3.4.4 and I have looked at plex.c file. >> In DMPlexDistribute, I cannot see where "partitioner" input argument is >> used. >> >> Is METIS used by default? >> > > In 3.4.4, the name is just a placeholder, and it users Chaco by default, > and then ParMetis. > In master, it does use the name to select the preconditioner. If you use > Plex, I recommend > using master. > > Thanks, > > Matt > > >> Best regards, >> >> C?dric Doucet >> INRIA Paris-Rocquencourt >> France >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 4 10:37:47 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Apr 2014 10:37:47 -0500 Subject: [petsc-users] Problems running ex12.c In-Reply-To: References: <8448FFCE4362914496BCEAF8BE810C13EF1D34@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1D5E@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1D7C@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1D8F@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1DA3@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1DBF@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1E22@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1E50@DCPWPEXMBX02.mdanderson.edu> Message-ID: On Mon, Mar 31, 2014 at 7:08 PM, Matthew Knepley wrote: > On Mon, Mar 31, 2014 at 6:37 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Thanks for your response. Now I am trying to modify this example to >> include Dirichlet and Neumann conditions at the same time. >> >> I can see that inside of DMPlexCreateSquareBoundary there is an option >> ("-dm_plex_separate_marker") to just mark the top boundary with 1. I >> understand that only this side would have Dirichlet conditions that are >> described by the function bcFuncs in user.fem (the exact function in this >> example). However, when we run the Neumann condition, we fix all the >> boundary as Neumann condition with the function DMPlexAddBoundary, is this >> right? >> > > Right about the shortcoming, but wrong about the source. > DMPlexAddBoundary() takes an argument that is the marker value for the > given label, so you can select boundaries. > However, DMPlexComputeResidualFEM() currently hardcodes the boundary name > ("boundary") > and the marker value (1). I wrote this when we had no boundary > representation in PETSc. Now that we have DMAddBoundary(), we can > loop over the Neumann boundaries. I have put this on my todo list. If you > are motivated, you can do it first and I will help. > I pushed this change today, so now you can have multiple Neumann boundaries with whatever labels you want. I have not written a test for multiple boundaries, but the old single boundary tests pass. 
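As a rough illustration only, registering two Neumann boundaries could then look something like the sketch below. It mirrors the DMPlexAddBoundary call shape shown in the quoted SetupSection below; the label name "boundary", the marker values, and the user->neumannFuncs field are made up, and the exact signature may have shifted with this change, so treat it as an untested sketch:

   /* Sketch: two Neumann (non-essential) boundaries distinguished by marker value. */
   const PetscInt inflowId  = 1;
   const PetscInt outflowId = 2;

   ierr = DMPlexAddBoundary(dm, PETSC_FALSE, "boundary", 0, user->neumannFuncs[0], 1, &inflowId,  user);CHKERRQ(ierr);
   ierr = DMPlexAddBoundary(dm, PETSC_FALSE, "boundary", 0, user->neumannFuncs[1], 1, &outflowId, user);CHKERRQ(ierr);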
Thanks, Matt > Thanks, > > Matt > > >> Could there be a way to just fix a certain boundary with the Neumann >> condition in this example? Would it be easier with an external library as >> Exodus II? >> >> >> On Sun, Mar 30, 2014 at 7:51 PM, Matthew Knepley wrote: >> >>> On Sun, Mar 30, 2014 at 7:07 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Thanks for your response. Your help is really useful to me. >>>> >>>> The difference between the analytic and the field options are that for >>>> the field options the function is projected onto the function space defined >>>> for feAux right? What is the advantage of doing this? >>>> >>> >>> If it is not purely a function of the coordinates, or you do not know >>> that function, there is no option left. >>> >>> >>>> Also, for this field case I see that the function always has to be a >>>> vector. What if we wanted to implement a heterogeneous material in linear >>>> elasticity? Would we implement the constitutive tensor as a vector? It >>>> would not be very difficult I think, I just want to make sure it would be >>>> this way. >>>> >>> >>> Its not a vector, which indicates a particular behavior under coordinate >>> transformations, but an array >>> which can hold any data you want. >>> >>> Matt >>> >>> >>>> Thanks in advance >>>> Miguel >>>> >>>> >>>> On Sun, Mar 30, 2014 at 2:01 PM, Matthew Knepley wrote: >>>> >>>>> On Sun, Mar 30, 2014 at 1:57 PM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Hello everybody >>>>>> >>>>>> I had a question about this example. In the petsc-dev next version, >>>>>> why don't we create a PetscSection in the function SetupSection, but we do >>>>>> it in the function SetupMaterialSection and in the function SetupSection of >>>>>> the petsc-current version. >>>>>> >>>>> >>>>> 1) I wanted to try and make things more automatic for the user >>>>> >>>>> 2) I needed a way to automatically layout data for coarser/finer grids >>>>> in unstructured MG >>>>> >>>>> Thus, now when you set for PetscFE into the DM using DMSetField(), it >>>>> will automatically create >>>>> the section on the first call to DMGetDefaultSection(). >>>>> >>>>> I do not have a similar provision now for materials, so you create >>>>> your own section. I think this is >>>>> alright until we have some idea of a nicer interface. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> petsc-dev: >>>>>> >>>>>> #undef __FUNCT__ >>>>>> #define __FUNCT__ "SetupSection" >>>>>> PetscErrorCode SetupSection(DM dm, AppCtx *user) >>>>>> { >>>>>> DM cdm = dm; >>>>>> const PetscInt id = 1; >>>>>> PetscErrorCode ierr; >>>>>> >>>>>> PetscFunctionBeginUser; >>>>>> ierr = PetscObjectSetName((PetscObject) user->fe[0], >>>>>> "potential");CHKERRQ(ierr); >>>>>> while (cdm) { >>>>>> ierr = DMSetNumFields(cdm, 1);CHKERRQ(ierr); >>>>>> ierr = DMSetField(cdm, 0, (PetscObject) >>>>>> user->fe[0]);CHKERRQ(ierr); >>>>>> ierr = DMPlexAddBoundary(cdm, user->bcType == DIRICHLET, >>>>>> user->bcType == NEUMANN ? "boundary" : "marker", 0, user->exactFuncs[0], 1, >>>>>> &id, user);CHKERRQ(ierr); >>>>>> ierr = DMPlexGetCoarseDM(cdm, &cdm);CHKERRQ(ierr); >>>>>> } >>>>>> PetscFunctionReturn(0); >>>>>> } >>>>>> >>>>>> >>>>>> It seems that it adds the number of fields directly to the DM, and >>>>>> takes the number of components that were specified in SetupElementCommon, >>>>>> but what about the number of degrees of freedom? Why we added it for the >>>>>> MaterialSection but not for the regular Section. 
>>>>>> >>>>>> Thanks in advance >>>>>> Miguel >>>>>> >>>>>> >>>>>> On Sat, Mar 15, 2014 at 4:16 PM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Thanks a lot. >>>>>>> >>>>>>> >>>>>>> On Sat, Mar 15, 2014 at 3:36 PM, Matthew Knepley wrote: >>>>>>> >>>>>>>> On Sat, Mar 15, 2014 at 3:31 PM, Miguel Angel Salazar de Troya < >>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hello everybody >>>>>>>>> >>>>>>>>> I keep trying to understand this example. I don't have any >>>>>>>>> problems with this example when I run it like this: >>>>>>>>> >>>>>>>>> [salaza11 at maya PETSC]$ ./ex12 -bc_type dirichlet -interpolate >>>>>>>>> -petscspace_order 1 -variable_coefficient nonlinear -dim 2 -run_type full >>>>>>>>> -show_solution >>>>>>>>> Number of SNES iterations = 5 >>>>>>>>> L_2 Error: 0.107289 >>>>>>>>> Solution >>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>> type: seq >>>>>>>>> 0.484618 >>>>>>>>> >>>>>>>>> However, when I change the boundary conditions to Neumann, I get >>>>>>>>> this error. >>>>>>>>> >>>>>>>>> [salaza11 at maya PETSC]$ ./ex12 -bc_type neumann -interpolate 1 >>>>>>>>> -petscspace_order 2 -variable_coefficient nonlinear -dim 2 -run_type full >>>>>>>>> -show_solution >>>>>>>>> >>>>>>>> >>>>>>>> Here you set the order of the element used in bulk, but not on the >>>>>>>> boundary where you condition is, so it defaults to 0. In >>>>>>>> order to become more familiar, take a look at the tests that I run >>>>>>>> here: >>>>>>>> >>>>>>>> >>>>>>>> https://bitbucket.org/petsc/petsc/src/64715f0f033346c10c77b73cf58216d111db8789/config/builder.py?at=master#cl-216 >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>> -------------------------------------------------------------- >>>>>>>>> [0]PETSC ERROR: Petsc has generated inconsistent data >>>>>>>>> [0]PETSC ERROR: Number of dual basis vectors 0 not equal to >>>>>>>>> dimension 1 >>>>>>>>> [0]PETSC ERROR: See http:// >>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >>>>>>>>> shooting. >>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>> v3.4.3-4776-gb18359b GIT Date: 2014-03-04 10:53:30 -0600 >>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named maya by >>>>>>>>> salaza11 Sat Mar 15 14:28:05 2014 >>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>> [0]PETSC ERROR: #1 PetscDualSpaceSetUp_Lagrange() line 1763 in >>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>> [0]PETSC ERROR: #2 PetscDualSpaceSetUp() line 1277 in >>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>> [0]PETSC ERROR: #3 SetupElementCommon() line 474 in >>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>> [0]PETSC ERROR: #4 SetupBdElement() line 559 in >>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>> [0]PETSC ERROR: #5 main() line 755 in >>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>>>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 77) - process 0 >>>>>>>>> [unset]: aborting job: >>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 77) - process 0 >>>>>>>>> >>>>>>>>> I honestly do not know much about using dual spaces in a finite >>>>>>>>> element context. 
I have been trying to find some material that could help >>>>>>>>> me without much success. I tried to modify the dual space order with the >>>>>>>>> option -petscdualspace_order but I kept getting errors. In particular, I >>>>>>>>> got this when I set it to 1. >>>>>>>>> >>>>>>>>> [salaza11 at maya PETSC]$ ./ex12 -bc_type neumann -interpolate 1 >>>>>>>>> -petscspace_order 2 -variable_coefficient nonlinear -dim 2 -run_type full >>>>>>>>> -show_solution -petscdualspace_order 1 >>>>>>>>> [0]PETSC ERROR: PetscTrFreeDefault() called from >>>>>>>>> PetscFESetUp_Basic() line 2492 in >>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>> [0]PETSC ERROR: Block [id=0(32)] at address 0x1cc32f0 is corrupted >>>>>>>>> (probably write past end of array) >>>>>>>>> [0]PETSC ERROR: Block allocated in PetscFESetUp_Basic() line 2483 >>>>>>>>> in /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>> -------------------------------------------------------------- >>>>>>>>> [0]PETSC ERROR: Memory corruption: >>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind >>>>>>>>> [0]PETSC ERROR: Corrupted memory >>>>>>>>> [0]PETSC ERROR: See http:// >>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >>>>>>>>> shooting. >>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>> v3.4.3-4776-gb18359b GIT Date: 2014-03-04 10:53:30 -0600 >>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named maya by >>>>>>>>> salaza11 Sat Mar 15 14:37:34 2014 >>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>> [0]PETSC ERROR: #1 PetscTrFreeDefault() line 289 in >>>>>>>>> /home/salaza11/petsc/src/sys/memory/mtr.c >>>>>>>>> [0]PETSC ERROR: #2 PetscFESetUp_Basic() line 2492 in >>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>> [0]PETSC ERROR: #3 PetscFESetUp() line 2126 in >>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>> [0]PETSC ERROR: #4 SetupElementCommon() line 482 in >>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>> [0]PETSC ERROR: #5 SetupElement() line 506 in >>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>> [0]PETSC ERROR: #6 main() line 754 in >>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>>>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 >>>>>>>>> [unset]: aborting job: >>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 >>>>>>>>> [salaza11 at maya PETSC]$ >>>>>>>>> >>>>>>>>> >>>>>>>>> Then again, I do not know much what I am doing given my ignorance >>>>>>>>> with respect to the dual spaces in FE. I apologize for that. My questions >>>>>>>>> are: >>>>>>>>> >>>>>>>>> - Where could I find more resources in order to understand the >>>>>>>>> PETSc implementation of dual spaces for FE? >>>>>>>>> - Why does it run with Dirichlet but not with Neumann? >>>>>>>>> >>>>>>>>> Thanks in advance. >>>>>>>>> Miguel. 
>>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Mar 4, 2014 at 11:28 PM, Matthew Knepley < >>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> On Tue, Mar 4, 2014 at 12:01 PM, Matthew Knepley < >>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> On Tue, Mar 4, 2014 at 11:51 AM, Miguel Angel Salazar de Troya < >>>>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> I can run it now, thanks. Although if I run it with valgrind >>>>>>>>>>>> 3.5.0 (should I update to the last version?) I get some memory leaks >>>>>>>>>>>> related with the function DMPlexCreateBoxMesh. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I will check it out. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> This is now fixed. >>>>>>>>>> >>>>>>>>>> Thanks for finding it >>>>>>>>>> >>>>>>>>>> Matt >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> [salaza11 at maya tutorials]$ valgrind --leak-check=full ./ex12 >>>>>>>>>>>> -run_type test -refinement_limit 0.0 -bc_type dirichlet -interpolate 0 >>>>>>>>>>>> -petscspace_order 1 -show_initial -dm_plex_print_fem 1 >>>>>>>>>>>> ==9625== Memcheck, a memory error detector >>>>>>>>>>>> ==9625== Copyright (C) 2002-2009, and GNU GPL'd, by Julian >>>>>>>>>>>> Seward et al. >>>>>>>>>>>> ==9625== Using Valgrind-3.5.0 and LibVEX; rerun with -h for >>>>>>>>>>>> copyright info >>>>>>>>>>>> ==9625== Command: ./ex12 -run_type test -refinement_limit 0.0 >>>>>>>>>>>> -bc_type dirichlet -interpolate 0 -petscspace_order 1 -show_initial >>>>>>>>>>>> -dm_plex_print_fem 1 >>>>>>>>>>>> ==9625== >>>>>>>>>>>> Local function: >>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>> type: seq >>>>>>>>>>>> 0 >>>>>>>>>>>> 0.25 >>>>>>>>>>>> 1 >>>>>>>>>>>> 0.25 >>>>>>>>>>>> 0.5 >>>>>>>>>>>> 1.25 >>>>>>>>>>>> 1 >>>>>>>>>>>> 1.25 >>>>>>>>>>>> 2 >>>>>>>>>>>> Initial guess >>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>> type: seq >>>>>>>>>>>> 0.5 >>>>>>>>>>>> L_2 Error: 0.111111 >>>>>>>>>>>> Residual: >>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>> type: seq >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> Initial Residual >>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>> type: seq >>>>>>>>>>>> 0 >>>>>>>>>>>> L_2 Residual: 0 >>>>>>>>>>>> Jacobian: >>>>>>>>>>>> Mat Object: 1 MPI processes >>>>>>>>>>>> type: seqaij >>>>>>>>>>>> row 0: (0, 4) >>>>>>>>>>>> Residual: >>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>> type: seq >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> -2 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> 0 >>>>>>>>>>>> Au - b = Au + F(0) >>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>> type: seq >>>>>>>>>>>> 0 >>>>>>>>>>>> Linear L_2 Residual: 0 >>>>>>>>>>>> ==9625== >>>>>>>>>>>> ==9625== HEAP SUMMARY: >>>>>>>>>>>> ==9625== in use at exit: 288 bytes in 3 blocks >>>>>>>>>>>> ==9625== total heap usage: 2,484 allocs, 2,481 frees, >>>>>>>>>>>> 1,009,287 bytes allocated >>>>>>>>>>>> ==9625== >>>>>>>>>>>> ==9625== 48 bytes in 1 blocks are definitely lost in loss >>>>>>>>>>>> record 1 of 3 >>>>>>>>>>>> ==9625== at 0x4A05E46: malloc (vg_replace_malloc.c:195) >>>>>>>>>>>> ==9625== by 0x5D8D4E1: writepoly (triangle.c:12012) >>>>>>>>>>>> ==9625== by 0x5D8FAAC: triangulate (triangle.c:13167) >>>>>>>>>>>> ==9625== by 0x56B0884: DMPlexGenerate_Triangle (plex.c:3749) >>>>>>>>>>>> ==9625== by 0x56B5EE4: DMPlexGenerate (plex.c:4503) >>>>>>>>>>>> ==9625== by 0x567F414: DMPlexCreateBoxMesh 
(plexcreate.c:668) >>>>>>>>>>>> ==9625== by 0x4051FA: CreateMesh (ex12.c:341) >>>>>>>>>>>> ==9625== by 0x408D3D: main (ex12.c:651) >>>>>>>>>>>> ==9625== >>>>>>>>>>>> ==9625== 96 bytes in 1 blocks are definitely lost in loss >>>>>>>>>>>> record 2 of 3 >>>>>>>>>>>> ==9625== at 0x4A05E46: malloc (vg_replace_malloc.c:195) >>>>>>>>>>>> ==9625== by 0x5D8D485: writepoly (triangle.c:12004) >>>>>>>>>>>> ==9625== by 0x5D8FAAC: triangulate (triangle.c:13167) >>>>>>>>>>>> ==9625== by 0x56B0884: DMPlexGenerate_Triangle (plex.c:3749) >>>>>>>>>>>> ==9625== by 0x56B5EE4: DMPlexGenerate (plex.c:4503) >>>>>>>>>>>> ==9625== by 0x567F414: DMPlexCreateBoxMesh (plexcreate.c:668) >>>>>>>>>>>> ==9625== by 0x4051FA: CreateMesh (ex12.c:341) >>>>>>>>>>>> ==9625== by 0x408D3D: main (ex12.c:651) >>>>>>>>>>>> ==9625== >>>>>>>>>>>> ==9625== 144 bytes in 1 blocks are definitely lost in loss >>>>>>>>>>>> record 3 of 3 >>>>>>>>>>>> ==9625== at 0x4A05E46: malloc (vg_replace_malloc.c:195) >>>>>>>>>>>> ==9625== by 0x5D8CD20: writenodes (triangle.c:11718) >>>>>>>>>>>> ==9625== by 0x5D8F9DE: triangulate (triangle.c:13132) >>>>>>>>>>>> ==9625== by 0x56B0884: DMPlexGenerate_Triangle (plex.c:3749) >>>>>>>>>>>> ==9625== by 0x56B5EE4: DMPlexGenerate (plex.c:4503) >>>>>>>>>>>> ==9625== by 0x567F414: DMPlexCreateBoxMesh (plexcreate.c:668) >>>>>>>>>>>> ==9625== by 0x4051FA: CreateMesh (ex12.c:341) >>>>>>>>>>>> ==9625== by 0x408D3D: main (ex12.c:651) >>>>>>>>>>>> ==9625== >>>>>>>>>>>> ==9625== LEAK SUMMARY: >>>>>>>>>>>> ==9625== definitely lost: 288 bytes in 3 blocks >>>>>>>>>>>> ==9625== indirectly lost: 0 bytes in 0 blocks >>>>>>>>>>>> ==9625== possibly lost: 0 bytes in 0 blocks >>>>>>>>>>>> ==9625== still reachable: 0 bytes in 0 blocks >>>>>>>>>>>> ==9625== suppressed: 0 bytes in 0 blocks >>>>>>>>>>>> ==9625== >>>>>>>>>>>> ==9625== For counts of detected and suppressed errors, rerun >>>>>>>>>>>> with: -v >>>>>>>>>>>> ==9625== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 6 >>>>>>>>>>>> from 6) >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Mon, Mar 3, 2014 at 7:05 PM, Matthew Knepley < >>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> On Mon, Mar 3, 2014 at 4:59 PM, Miguel Angel Salazar de Troya >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> You are welcome, thanks for your help. >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Okay, I have rebuilt completely clean, and ex12 runs for me. >>>>>>>>>>>>> Can you try again after pulling? >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> >>>>>>>>>>>>> Matt >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 4:13 PM, Matthew Knepley < >>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 1:44 PM, Miguel Angel Salazar de >>>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks. This is what I get. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Okay, this was broken by a new push to master/next in the >>>>>>>>>>>>>>> last few days. I have pushed a fix, >>>>>>>>>>>>>>> however next is currently broken due to a failure to check >>>>>>>>>>>>>>> in a file. This should be fixed shortly, >>>>>>>>>>>>>>> and then ex12 will work. I will mail you when its ready. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks for finding this, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> (gdb) cont >>>>>>>>>>>>>>>> Continuing. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Program received signal SIGSEGV, Segmentation fault. 
>>>>>>>>>>>>>>>> 0x00007fd6811bea7b in DMPlexComputeJacobianFEM >>>>>>>>>>>>>>>> (dm=0x159a180, X=0x168b5b0, >>>>>>>>>>>>>>>> Jac=0x7fffae6e8a88, JacP=0x7fffae6e8a88, >>>>>>>>>>>>>>>> str=0x7fffae6e7970, >>>>>>>>>>>>>>>> user=0x7fd6811be509) >>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/dm/impls/plex/plexfem.c:882 >>>>>>>>>>>>>>>> 882 ierr = PetscFEGetDimension(fe[f], >>>>>>>>>>>>>>>> &Nb);CHKERRQ(ierr); >>>>>>>>>>>>>>>> (gdb) where >>>>>>>>>>>>>>>> #0 0x00007fd6811bea7b in DMPlexComputeJacobianFEM >>>>>>>>>>>>>>>> (dm=0x159a180, X=0x168b5b0, >>>>>>>>>>>>>>>> Jac=0x7fffae6e8a88, JacP=0x7fffae6e8a88, >>>>>>>>>>>>>>>> str=0x7fffae6e7970, >>>>>>>>>>>>>>>> user=0x7fd6811be509) >>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/dm/impls/plex/plexfem.c:882 >>>>>>>>>>>>>>>> #1 0x00007fd6814a5bf6 in SNESComputeJacobian_DMLocal >>>>>>>>>>>>>>>> (snes=0x14e9450, >>>>>>>>>>>>>>>> X=0x1622ad0, A=0x7fffae6e8a88, B=0x7fffae6e8a88, >>>>>>>>>>>>>>>> ctx=0x1652300) >>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/snes/utils/dmlocalsnes.c:102 >>>>>>>>>>>>>>>> #2 0x00007fd6814cc609 in SNESComputeJacobian >>>>>>>>>>>>>>>> (snes=0x14e9450, X=0x1622ad0, >>>>>>>>>>>>>>>> A=0x7fffae6e8a88, B=0x7fffae6e8a88) >>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/snes/interface/snes.c:2245 >>>>>>>>>>>>>>>> #3 0x000000000040af72 in main (argc=15, >>>>>>>>>>>>>>>> argv=0x7fffae6e8bc8) >>>>>>>>>>>>>>>> at >>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/examples/tutorials/ex12.c:784 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 1:40 PM, Matthew Knepley < >>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 1:39 PM, Miguel Angel Salazar de >>>>>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> This is what I get at gdb when I type 'where'. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> You have to type 'cont', and then when it fails you type >>>>>>>>>>>>>>>>> 'where'. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> #0 0x000000310e0aa860 in __nanosleep_nocancel () from >>>>>>>>>>>>>>>>>> /lib64/libc.so.6 >>>>>>>>>>>>>>>>>> #1 0x000000310e0aa70f in sleep () from /lib64/libc.so.6 >>>>>>>>>>>>>>>>>> #2 0x00007fd83a00a8be in PetscSleep (s=10) >>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/utils/psleep.c:52 >>>>>>>>>>>>>>>>>> #3 0x00007fd83a06f331 in PetscAttachDebugger () >>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/error/adebug.c:397 >>>>>>>>>>>>>>>>>> #4 0x00007fd83a0af1d2 in >>>>>>>>>>>>>>>>>> PetscOptionsCheckInitial_Private () >>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/objects/init.c:444 >>>>>>>>>>>>>>>>>> #5 0x00007fd83a0b6448 in PetscInitialize >>>>>>>>>>>>>>>>>> (argc=0x7fff5cd8df2c, >>>>>>>>>>>>>>>>>> args=0x7fff5cd8df20, file=0x0, >>>>>>>>>>>>>>>>>> help=0x60ce40 "Poisson Problem in 2d and 3d with >>>>>>>>>>>>>>>>>> simplicial finite elements.\nWe solve the Poisson problem in a >>>>>>>>>>>>>>>>>> rectangular\ndomain, using a parallel unstructured mesh (DMPLEX) to >>>>>>>>>>>>>>>>>> discretize it.\n\n\n") >>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/objects/pinit.c:876 >>>>>>>>>>>>>>>>>> #6 0x0000000000408f2c in main (argc=15, >>>>>>>>>>>>>>>>>> argv=0x7fff5cd8f1f8) >>>>>>>>>>>>>>>>>> at >>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/examples/tutorials/ex12.c:663 >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> The rest of the gdb output is attached. I am a bit >>>>>>>>>>>>>>>>>> ignorant with gdb, I apologize for that. 
>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 12:48 PM, Matthew Knepley < >>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 12:39 PM, Miguel Angel Salazar de >>>>>>>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Thanks for your response. Sorry I did not have the >>>>>>>>>>>>>>>>>>>> "next" version, but the "master" version. I still have an error though. I >>>>>>>>>>>>>>>>>>>> followed the steps given here ( >>>>>>>>>>>>>>>>>>>> https://bitbucket.org/petsc/petsc/wiki/Home) to obtain >>>>>>>>>>>>>>>>>>>> the next version, I configured petsc as above and ran ex12 as above as >>>>>>>>>>>>>>>>>>>> well, getting this error: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> [salaza11 at maya tutorials]$ ./ex12 -run_type test >>>>>>>>>>>>>>>>>>>> -refinement_limit 0.0 -bc_type dirichlet -interpolate 0 >>>>>>>>>>>>>>>>>>>> -petscspace_order 1 -show_initial -dm_plex_print_fem 1 >>>>>>>>>>>>>>>>>>>> Local function: >>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0.25 >>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>> 0.25 >>>>>>>>>>>>>>>>>>>> 0.5 >>>>>>>>>>>>>>>>>>>> 1.25 >>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>> 1.25 >>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>> Initial guess >>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>> 0.5 >>>>>>>>>>>>>>>>>>>> L_2 Error: 0.111111 >>>>>>>>>>>>>>>>>>>> Residual: >>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> Initial Residual >>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>> L_2 Residual: 0 >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Okay, now run with -start_in_debugger, and give me a >>>>>>>>>>>>>>>>>>> stack trace using 'where'. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: >>>>>>>>>>>>>>>>>>>> Segmentation Violation, probably memory access out of range >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: or see >>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>>>>>>>>>>>>>>>>> http://valgrind.org on GNU/linux and Apple Mac OS X to >>>>>>>>>>>>>>>>>>>> find memory corruption errors >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: likely location of problem given in >>>>>>>>>>>>>>>>>>>> stack below >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Stack Frames >>>>>>>>>>>>>>>>>>>> ------------------------------------ >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Note: The EXACT line numbers in the >>>>>>>>>>>>>>>>>>>> stack are not available, >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: INSTEAD the line number of the >>>>>>>>>>>>>>>>>>>> start of the function >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: is given. 
>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] DMPlexComputeJacobianFEM line 871 >>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/dm/impls/plex/plexfem.c >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian_DMLocal line >>>>>>>>>>>>>>>>>>>> 94 /home/salaza11/petsc/src/snes/utils/dmlocalsnes.c >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNES user Jacobian function line >>>>>>>>>>>>>>>>>>>> 2244 /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian line 2203 >>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>>>>>>>>>>>> -------------------------------------------------------------- >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Signal received >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See http:// >>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.htmlfor trouble shooting. >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>>>>>>>>>>>> v3.4.3-4705-gfb6b3bc GIT Date: 2014-03-03 08:23:43 -0600 >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named >>>>>>>>>>>>>>>>>>>> maya by salaza11 Mon Mar 3 11:49:15 2014 >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>>>>>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>>>>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: #1 User provided function() line 0 in >>>>>>>>>>>>>>>>>>>> unknown file >>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>> [unset]: aborting job: >>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> On Sun, Mar 2, 2014 at 7:11 PM, Matthew Knepley < >>>>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Sun, Mar 2, 2014 at 6:54 PM, Miguel Angel Salazar >>>>>>>>>>>>>>>>>>>>> de Troya wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Hi everybody >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> I am trying to run example ex12.c without much >>>>>>>>>>>>>>>>>>>>>> success. I specifically run it with the command options: >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> We need to start narrowing down differences, because >>>>>>>>>>>>>>>>>>>>> it runs for me and our nightly tests. So, first can >>>>>>>>>>>>>>>>>>>>> you confirm that you are using the latest 'next' >>>>>>>>>>>>>>>>>>>>> branch? 
>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -refinement_limit 0.0 >>>>>>>>>>>>>>>>>>>>>> -bc_type dirichlet -interpolate 0 -petscspace_order 1 -show_initial >>>>>>>>>>>>>>>>>>>>>> -dm_plex_print_fem 1 >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> And I get this output >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Local function: >>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>>> 3 >>>>>>>>>>>>>>>>>>>>>> Initial guess >>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>> L_2 Error: 0.625 >>>>>>>>>>>>>>>>>>>>>> Residual: >>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>> Initial Residual >>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>> L_2 Residual: 0 >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: >>>>>>>>>>>>>>>>>>>>>> Segmentation Violation, probably memory access out of range >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: or see >>>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>>>>>>>>>>>>>>>>>>> http://valgrind.org on GNU/linux and Apple Mac OS X >>>>>>>>>>>>>>>>>>>>>> to find memory corruption errors >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: likely location of problem given in >>>>>>>>>>>>>>>>>>>>>> stack below >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Stack Frames >>>>>>>>>>>>>>>>>>>>>> ------------------------------------ >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Note: The EXACT line numbers in the >>>>>>>>>>>>>>>>>>>>>> stack are not available, >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: INSTEAD the line number of the >>>>>>>>>>>>>>>>>>>>>> start of the function >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: is given. >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] DMPlexComputeJacobianFEM line 867 >>>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/dm/impls/plex/plexfem.c >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian_DMLocal line >>>>>>>>>>>>>>>>>>>>>> 94 /home/salaza11/petsc/src/snes/utils/dmlocalsnes.c >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNES user Jacobian function line >>>>>>>>>>>>>>>>>>>>>> 2244 /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian line 2203 >>>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>>>>>>>>>>>>>> ------------------------------------ >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Signal received! 
>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>>>>>>>>>>>>>> v3.4.3-3453-g0a94005 GIT Date: 2014-03-02 13:12:04 -0600 >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html for >>>>>>>>>>>>>>>>>>>>>> recent updates. >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints about >>>>>>>>>>>>>>>>>>>>>> trouble shooting. >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for manual pages. >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named >>>>>>>>>>>>>>>>>>>>>> maya by salaza11 Sun Mar 2 17:00:09 2014 >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from >>>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/linux-gnu-c-debug/lib >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Sun Mar 2 16:46:51 >>>>>>>>>>>>>>>>>>>>>> 2014 >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>>>>>>>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>>>>>>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: User provided function() line 0 in >>>>>>>>>>>>>>>>>>>>>> unknown file >>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>>>> [unset]: aborting job: >>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Probably my problems could be on my configuration. I >>>>>>>>>>>>>>>>>>>>>> attach the configure.log. I ran ./configure like this >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> ./configure --download-mpich >>>>>>>>>>>>>>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>>>>>>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Thanks a lot in advance. >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> On Tue, Jan 28, 2014 at 10:37 AM, Matthew Knepley < >>>>>>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> On Tue, Jan 28, 2014 at 10:31 AM, Yaakoub El Khamra >>>>>>>>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> If >>>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -dim 3 -refinement_limit >>>>>>>>>>>>>>>>>>>>>>>> 0.0125 -variable_coefficient field -interpolate 1 -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>> -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> is for serial, any chance we can get the options to >>>>>>>>>>>>>>>>>>>>>>>> run in parallel? 
>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Just use mpiexec -n >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>>>>>>>> Yaakoub El Khamra >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> On Fri, Jan 17, 2014 at 11:29 AM, Matthew Knepley < >>>>>>>>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Jan 17, 2014 at 11:06 AM, Jones,Martin >>>>>>>>>>>>>>>>>>>>>>>>> Alexander wrote: >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Friday, January 17, 2014 11:04 AM >>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Jan 17, 2014 at 11:00 AM, >>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander wrote: >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> These examples all seem to run excepting the >>>>>>>>>>>>>>>>>>>>>>>>>>> following command, >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -run_type test -dim 3 -refinement_limit >>>>>>>>>>>>>>>>>>>>>>>>>>> 0.0125 -variable_coefficient field -interpolate 1 -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>>>> -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> I get the following ouput: >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -dim 3 -refinement_limit >>>>>>>>>>>>>>>>>>>>>>>>>>> 0.0125 -variable_coefficient field -interpolate 1 -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>>>> -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>>>> Local function: >>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12: symbol lookup error: >>>>>>>>>>>>>>>>>>>>>>>>>>> /opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so: undefined >>>>>>>>>>>>>>>>>>>>>>>>>>> symbol: omp_get_num_procs >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> This is a build problem, but it should affect >>>>>>>>>>>>>>>>>>>>>>>>>> all the runs. Is this reproducible? Can you send configure.log? MKL is the >>>>>>>>>>>>>>>>>>>>>>>>>> worst. If this >>>>>>>>>>>>>>>>>>>>>>>>>> persists, I would just switch to >>>>>>>>>>>>>>>>>>>>>>>>>> --download-f-blas-lapack. >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Thanks. 
I have some advice on options >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> --with-precision=single # I would not use this >>>>>>>>>>>>>>>>>>>>>>>>> unless you are doing something special, like CUDA >>>>>>>>>>>>>>>>>>>>>>>>> --with-clanguage=C++ # I would recommend >>>>>>>>>>>>>>>>>>>>>>>>> switching to C, the build is much faster >>>>>>>>>>>>>>>>>>>>>>>>> --with-mpi-dir=/usr --with-mpi4py=0 >>>>>>>>>>>>>>>>>>>>>>>>> --with-shared-libraries --CFLAGS=-O0 >>>>>>>>>>>>>>>>>>>>>>>>> --CXXFLAGS=-O0 --with-fc=0 >>>>>>>>>>>>>>>>>>>>>>>>> --with-etags=1 # This is >>>>>>>>>>>>>>>>>>>>>>>>> unnecessary >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> --with-blas-lapack-lib="[/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_rt.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_core.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libiomp5.so]" >>>>>>>>>>>>>>>>>>>>>>>>> # Here is the problem, see below >>>>>>>>>>>>>>>>>>>>>>>>> --download-metis >>>>>>>>>>>>>>>>>>>>>>>>> --download-fiat=yes --download-generator >>>>>>>>>>>>>>>>>>>>>>>>> --download-scientificpython # Get rid of these, they are obsolete >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Your MKL needs another library for the OpenMP >>>>>>>>>>>>>>>>>>>>>>>>> symbols. I would recommend switching to --download-f2cblaslapack, >>>>>>>>>>>>>>>>>>>>>>>>> or you can try and find that library. >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 6:35 PM >>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 5:43 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, This is the next error message after >>>>>>>>>>>>>>>>>>>>>>>>>>>> configuring and building with the triangle package when trying to run ex12 >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> This is my fault for bad defaults. I will fix. >>>>>>>>>>>>>>>>>>>>>>>>>>> Try running >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -refinement_limit 0.0 >>>>>>>>>>>>>>>>>>>>>>>>>>> -bc_type dirichlet -interpolate 0 -petscspace_order 1 -show_initial >>>>>>>>>>>>>>>>>>>>>>>>>>> -dm_plex_print_fem 1 >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> for a representative run. 
Then you could try 3D >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -run_type test -dim 3 >>>>>>>>>>>>>>>>>>>>>>>>>>> -refinement_limit 0.0125 -variable_coefficient field -interpolate 1 >>>>>>>>>>>>>>>>>>>>>>>>>>> -petscspace_order 2 -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> or a full run >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -refinement_limit 0.01 -bc_type >>>>>>>>>>>>>>>>>>>>>>>>>>> dirichlet -interpolate -petscspace_order 1 >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -refinement_limit 0.01 -bc_type >>>>>>>>>>>>>>>>>>>>>>>>>>> dirichlet -interpolate -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Let me know if those work. >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Caught signal number 8 FPE: >>>>>>>>>>>>>>>>>>>>>>>>>>>> Floating Point Exception,probably divide by zero >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger >>>>>>>>>>>>>>>>>>>>>>>>>>>> or -on_error_attach_debugger >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: or see >>>>>>>>>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>>>>>>>>>>>>>>>>>>>>>>>>> http://valgrind.org on GNU/linux and Apple Mac >>>>>>>>>>>>>>>>>>>>>>>>>>>> OS X to find memory corruption errors >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: likely location of problem >>>>>>>>>>>>>>>>>>>>>>>>>>>> given in stack below >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Stack >>>>>>>>>>>>>>>>>>>>>>>>>>>> Frames ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Note: The EXACT line numbers in >>>>>>>>>>>>>>>>>>>>>>>>>>>> the stack are not available, >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: INSTEAD the line number >>>>>>>>>>>>>>>>>>>>>>>>>>>> of the start of the function >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: is given. >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] DMPlexComputeResidualFEM >>>>>>>>>>>>>>>>>>>>>>>>>>>> line 531 /home/mjonesa/PETSc/petsc/src/dm/impls/plex/plexfem.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeFunction_DMLocal >>>>>>>>>>>>>>>>>>>>>>>>>>>> line 63 /home/mjonesa/PETSc/petsc/src/snes/utils/dmlocalsnes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNES user function line >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2088 /home/mjonesa/PETSc/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeFunction line >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2076 /home/mjonesa/PETSc/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line 144 >>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/impls/ls/ls.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESSolve line 3765 >>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error >>>>>>>>>>>>>>>>>>>>>>>>>>>> Message ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Signal received! 
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>>>>>>>>>>>>>>>>>>>> v3.4.3-2317-gcd0e7f7 GIT Date: 2014-01-15 20:33:42 -0600 >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html for >>>>>>>>>>>>>>>>>>>>>>>>>>>> recent updates. >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints >>>>>>>>>>>>>>>>>>>>>>>>>>>> about trouble shooting. >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for manual >>>>>>>>>>>>>>>>>>>>>>>>>>>> pages. >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a >>>>>>>>>>>>>>>>>>>>>>>>>>>> arch-linux2-cxx-debug named maeda by mjonesa Thu Jan 16 17:41:23 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from >>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/local/lib >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Thu Jan 16 >>>>>>>>>>>>>>>>>>>>>>>>>>>> 17:38:33 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options >>>>>>>>>>>>>>>>>>>>>>>>>>>> --prefix=/home/mjonesa/local >>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-blas-lapack-lib="[/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_rt.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_core.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libiomp5.so]" >>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-c2html=0 --with-clanguage=c++ PETSC_ARCH=arch-linux2-cxx-debug >>>>>>>>>>>>>>>>>>>>>>>>>>>> --download-triangle >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: User provided function() line 0 >>>>>>>>>>>>>>>>>>>>>>>>>>>> in unknown file >>>>>>>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, >>>>>>>>>>>>>>>>>>>>>>>>>>>> 59) - process 0 >>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 4:37 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 4:33 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>> > wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I have downloaded and built the dev >>>>>>>>>>>>>>>>>>>>>>>>>>>>> version you suggested. I think I need the triangle package to run this >>>>>>>>>>>>>>>>>>>>>>>>>>>>> particular case. Is there any thing else that appears wrong in what I have >>>>>>>>>>>>>>>>>>>>>>>>>>>>> done from the error messages below: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Great! Its running. 
You can reconfigure like >>>>>>>>>>>>>>>>>>>>>>>>>>>> this: >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> $PETSC_DIR/$PETSC_ARCH/conf/reconfigure-$PETSC_ARCH.py --download-triangle >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> and then rebuild >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> make >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> and then rerun. You can load meshes, but its >>>>>>>>>>>>>>>>>>>>>>>>>>>> much easier to have triangle create them. >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks for being patient, >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Message ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: No support for this operation >>>>>>>>>>>>>>>>>>>>>>>>>>>>> for this object type! >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Mesh generation needs external >>>>>>>>>>>>>>>>>>>>>>>>>>>>> package support. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Please reconfigure with --download-triangle.! >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT >>>>>>>>>>>>>>>>>>>>>>>>>>>>> revision: v3.4.3-2317-gcd0e7f7 GIT Date: 2014-01-15 20:33:42 -0600 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html >>>>>>>>>>>>>>>>>>>>>>>>>>>>> for recent updates. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints >>>>>>>>>>>>>>>>>>>>>>>>>>>>> about trouble shooting. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for manual >>>>>>>>>>>>>>>>>>>>>>>>>>>>> pages. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a >>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch-linux2-cxx-debug named maeda by mjonesa Thu Jan 16 16:28:20 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/local/lib >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Thu Jan 16 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 16:25:53 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --prefix=/home/mjonesa/local --with-clanguage=c++ --with-c2html=0 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-blas-lapack-lib="[/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_rt.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_core.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libiomp5.so]" >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: DMPlexGenerate() line 4332 in >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/dm/impls/plex/plex.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: DMPlexCreateBoxMesh() line 600 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> in /home/mjonesa/PETSc/petsc/src/dm/impls/plex/plexcreate.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: CreateMesh() line 295 in >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/examples/tutorials/ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: main() line 659 in >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/examples/tutorials/ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 56) - process 0 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 4:06 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 4:05 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi. I changed the ENV variable to the >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> correct entry. 
when I type make ex12 I get this: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mjonesa at maeda:~/PETSc/petsc-3.4.3/src/snes/examples/tutorials$ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make ex12 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> g++ -o ex12.o -c -Wall -Wwrite-strings >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wno-strict-aliasing -Wno-unknown-pragmas -g -fPIC >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -I/home/mjonesa/PETSc/petsc-3.4.3/include >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -I/home/mjonesa/PETSc/petsc-3.4.3/arch-linux2-cxx-debug/include >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -I/home/mjonesa/PETSc/petsc-3.4.3/include/mpiuni >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -D__INSDIR__=src/snes/examples/tutorials/ ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c:14:18: fatal error: ex12.h: No such >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file or directory >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> compilation terminated. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make: *** [ex12.o] Error 1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any help of yours is very much appreciated. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, this relates to my 3). This is not >>>>>>>>>>>>>>>>>>>>>>>>>>>>> going to work for you with the release. Please see the link I sent. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 3:58 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 3:55 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks! >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You built with >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PETSC_ARCH=arch-linux2-cxx-debug >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 3:31 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 3:11 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I went to the directory where ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sits and just did a 'make ex12.c' with the following error if this helps? 
: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mjonesa at maeda:~/PETSc/petsc-3.4.3/src/snes/examples/tutorials$ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/conf/variables:108: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/linux-gnu-cxx-debug/conf/petscvariables: No >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> such file or directory >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/conf/rules:962: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/linux-gnu-cxx-debug/conf/petscrules: No >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> such file or directory >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make: *** No rule to make target >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> `/home/mjonesa/PETSc/petsc-3.4.3/linux-gnu-cxx-debug/conf/petscrules'. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stop. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1) You would type 'make ex12' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2) Either you PETSC_DIR ( >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3) or >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PETSC_ARCH (linux-gnu-cxx-debug) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> environment variables >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> do not match what you built. Please send >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> configure.log and make.log >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 3) Since it was only recently added, if >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you want to use the FEM functionality, you must use the development version: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/developers/index.html >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [mailto: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 2:48 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 2:35 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hello To Whom it Concerns, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am trying to run the tutorial ex12.c by >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running 'bin/pythonscripts/PetscGenerateFEMQuadrature.py >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dim order dim 1 laplacian dim order dim 1 boundary >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> src/snes/examples/tutorials/ex12.h' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> but getting the following error: 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> $ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dim order dim 1 boundary src/snes/examples/tutorials/ex12.h >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Traceback (most recent call last): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> File >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "bin/pythonscripts/PetscGenerateFEMQuadrature.py", line 15, in >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from FIAT.reference_element import >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> default_simplex >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ImportError: No module named >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> FIAT.reference_element >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have removed the requirement of >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> generating the header file (its now all handled in C). I thought >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I changed the documentation everywhere >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (including the latest tutorial slides). Can you try running >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with 'master' (or 'next'), and point me >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> toward the old docs? >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. 
>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>> experiments lead. >>>>>>>>>> -- Norbert Wiener >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>> Graduate Research Assistant >>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>> (217) 550-2360 >>>>>>>>> salaza11 at illinois.edu >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcfrai at gmail.com Fri Apr 4 12:27:21 2014 From: gcfrai at gmail.com (Amit Itagi) Date: Fri, 4 Apr 2014 12:27:21 -0500 Subject: [petsc-users] Mat.createNest in petsc4py Message-ID: I need to use the createNest for some application. I am trying it out on a toy program. I created four 6x6 AIJ matrices on two processes. 
I am trying to create a nest using A=PETSc.Mat() A.createNest([mA11,mA12,mA21,mA22]) I get the following error: A.createNest([mA11,mA12,mA21,mA22]) File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest (src/petsc4py.PETSc.c:85554) A.createNest([mA11,mA12,mA21,mA22]) File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest (src/petsc4py.PETSc.c:85554) File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ (src/petsc4py.PETSc.c:81516) File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ (src/petsc4py.PETSc.c:81516) File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem (src/petsc4py.PETSc.c:24830) File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem (src/petsc4py.PETSc.c:24830) TypeError: 'int' object is not iterable TypeError: 'int' object is not iterable Any ideas ? Also, is the following the correct way of setting the nest to a 2x2 block ? ix=PETSc.IS() ix.CreateGeneral([0,1]) A.createNest([mA11,mA12,mA21,mA22],isrows=ix,iscols=ix) Thanks Amit -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 4 12:29:04 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Apr 2014 12:29:04 -0500 Subject: [petsc-users] Mat.createNest in petsc4py In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 12:27 PM, Amit Itagi wrote: > I need to use the createNest for some application. I am trying it out on a > toy program. I created four 6x6 AIJ matrices on two processes. I am trying > to create a nest using > > A=PETSc.Mat() > A.createNest([mA11,mA12,mA21,mA22]) > I have never run this using petsc4py, but if it were me I would make it take a 2D array. Matt > I get the following error: > > A.createNest([mA11,mA12,mA21,mA22]) > File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest > (src/petsc4py.PETSc.c:85554) > A.createNest([mA11,mA12,mA21,mA22]) > File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest > (src/petsc4py.PETSc.c:85554) > File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ > (src/petsc4py.PETSc.c:81516) > File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ > (src/petsc4py.PETSc.c:81516) > File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem > (src/petsc4py.PETSc.c:24830) > File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem > (src/petsc4py.PETSc.c:24830) > TypeError: 'int' object is not iterable > TypeError: 'int' object is not iterable > > Any ideas ? > > Also, is the following the correct way of setting the nest to a 2x2 block ? > > ix=PETSc.IS() > ix.CreateGeneral([0,1]) > A.createNest([mA11,mA12,mA21,mA22],isrows=ix,iscols=ix) > > Thanks > > Amit > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcfrai at gmail.com Fri Apr 4 12:40:16 2014 From: gcfrai at gmail.com (Amit Itagi) Date: Fri, 4 Apr 2014 12:40:16 -0500 Subject: [petsc-users] Mat.createNest in petsc4py In-Reply-To: References: Message-ID: I tried the equivalent version with petsc in C++: Mat subA[4]; Mat A; ... MatCreateNest(comm,2,NULL,2,NULL,subA,&A); I did not face any issues. I am trying to map it to petsc4py. Amit On Fri, Apr 4, 2014 at 12:29 PM, Matthew Knepley wrote: > On Fri, Apr 4, 2014 at 12:27 PM, Amit Itagi wrote: > >> I need to use the createNest for some application. I am trying it out on >> a toy program. 
I created four 6x6 AIJ matrices on two processes. I am >> trying to create a nest using >> >> A=PETSc.Mat() >> A.createNest([mA11,mA12,mA21,mA22]) >> > > I have never run this using petsc4py, but if it were me I would make it > take a 2D array. > > Matt > > >> I get the following error: >> >> A.createNest([mA11,mA12,mA21,mA22]) >> File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest >> (src/petsc4py.PETSc.c:85554) >> A.createNest([mA11,mA12,mA21,mA22]) >> File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest >> (src/petsc4py.PETSc.c:85554) >> File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ >> (src/petsc4py.PETSc.c:81516) >> File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ >> (src/petsc4py.PETSc.c:81516) >> File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem >> (src/petsc4py.PETSc.c:24830) >> File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem >> (src/petsc4py.PETSc.c:24830) >> TypeError: 'int' object is not iterable >> TypeError: 'int' object is not iterable >> >> Any ideas ? >> >> Also, is the following the correct way of setting the nest to a 2x2 block >> ? >> >> ix=PETSc.IS() >> ix.CreateGeneral([0,1]) >> A.createNest([mA11,mA12,mA21,mA22],isrows=ix,iscols=ix) >> >> Thanks >> >> Amit >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 4 12:45:37 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Apr 2014 12:45:37 -0500 Subject: [petsc-users] Mat.createNest in petsc4py In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 12:40 PM, Amit Itagi wrote: > I tried the equivalent version with petsc in C++: > > Mat subA[4]; > Mat A; > ... > MatCreateNest(comm,2,NULL,2,NULL,subA,&A); > > I did not face any issues. I am trying to map it to petsc4py. > Notice that petsc4py does not have the two 2 arguments. It must get these from the array... Matt > Amit > > > > On Fri, Apr 4, 2014 at 12:29 PM, Matthew Knepley wrote: > >> On Fri, Apr 4, 2014 at 12:27 PM, Amit Itagi wrote: >> >>> I need to use the createNest for some application. I am trying it out on >>> a toy program. I created four 6x6 AIJ matrices on two processes. I am >>> trying to create a nest using >>> >>> A=PETSc.Mat() >>> A.createNest([mA11,mA12,mA21,mA22]) >>> >> >> I have never run this using petsc4py, but if it were me I would make it >> take a 2D array. >> >> Matt >> >> >>> I get the following error: >>> >>> A.createNest([mA11,mA12,mA21,mA22]) >>> File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest >>> (src/petsc4py.PETSc.c:85554) >>> A.createNest([mA11,mA12,mA21,mA22]) >>> File "Mat.pyx", line 409, in petsc4py.PETSc.Mat.createNest >>> (src/petsc4py.PETSc.c:85554) >>> File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ >>> (src/petsc4py.PETSc.c:81516) >>> File "Mat.pyx", line 197, in petsc4py.PETSc.Mat.__getitem__ >>> (src/petsc4py.PETSc.c:81516) >>> File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem >>> (src/petsc4py.PETSc.c:24830) >>> File "petscmat.pxi", line 870, in petsc4py.PETSc.mat_getitem >>> (src/petsc4py.PETSc.c:24830) >>> TypeError: 'int' object is not iterable >>> TypeError: 'int' object is not iterable >>> >>> Any ideas ? >>> >>> Also, is the following the correct way of setting the nest to a 2x2 >>> block ? 
>>> >>> ix=PETSc.IS() >>> ix.CreateGeneral([0,1]) >>> A.createNest([mA11,mA12,mA21,mA22],isrows=ix,iscols=ix) >>> >>> Thanks >>> >>> Amit >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Apr 4 12:48:55 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 04 Apr 2014 11:48:55 -0600 Subject: [petsc-users] Mat.createNest in petsc4py In-Reply-To: References: Message-ID: <87y4zlc7eg.fsf@jedbrown.org> Amit Itagi writes: > I tried the equivalent version with petsc in C++: > > Mat subA[4]; > Mat A; > ... > MatCreateNest(comm,2,NULL,2,NULL,subA,&A); > > I did not face any issues. I am trying to map it to petsc4py. You should take Matt's suggestion. Sometimes it is best to look at the source. Lisandro his limited time to work on petsc4py, and he spends it ensuring that everything works correctly rather than carefully documenting the interface quirks. Unfortunately, this means it can be tricky to guess how an interface should be used. It will probably stay this way until either cross-language documentation becomes more automated or someone finds enough time to spend on petsc4py documentation. def createNest(self, mats, isrows=None, iscols=None, comm=None): mats = [list(mat) for mat in mats] if isrows: isrows = list(isrows) assert len(isrows) == len(mats) else: isrows = None if iscols: iscols = list(iscols) assert len(iscols) == len(mats[0]) else: iscols = None cdef MPI_Comm ccomm = def_Comm(comm, PETSC_COMM_DEFAULT) cdef Py_ssize_t i, mr = len(mats) cdef Py_ssize_t j, mc = len(mats[0]) cdef PetscInt nr = mr cdef PetscInt nc = mc cdef PetscMat *cmats = NULL cdef PetscIS *cisrows = NULL cdef PetscIS *ciscols = NULL cdef object tmp1, tmp2, tmp3 tmp1 = oarray_p(empty_p(nr*nc), NULL, &cmats) for i from 0 <= i < mr: for j from 0 <= j < mc: cmats[i*mc+j] = (mats[i][j]).mat if isrows is not None: tmp2 = oarray_p(empty_p(nr), NULL, &cisrows) for i from 0 <= i < mr: cisrows[i] = (isrows[i]).iset if iscols is not None: tmp3 = oarray_p(empty_p(nc), NULL, &ciscols) for j from 0 <= j < mc: ciscols[j] = (iscols[j]).iset cdef PetscMat newmat = NULL CHKERR( MatCreateNest(ccomm, nr, cisrows, nc, ciscols, cmats, &newmat) ) PetscCLEAR(self.obj); self.mat = newmat return self -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From gcfrai at gmail.com Fri Apr 4 12:52:47 2014 From: gcfrai at gmail.com (Amit Itagi) Date: Fri, 4 Apr 2014 12:52:47 -0500 Subject: [petsc-users] Mat.createNest in petsc4py In-Reply-To: <87y4zlc7eg.fsf@jedbrown.org> References: <87y4zlc7eg.fsf@jedbrown.org> Message-ID: I did not completely understand the suggestion the first time. My bad. A.createNest([[mA11,mA12,mA21,mA22]]) works. For ,my specific situation, A.createNest([[mA11,mA12],[mA21,mA22]]) works. Thanks Amit On Fri, Apr 4, 2014 at 12:48 PM, Jed Brown wrote: > Amit Itagi writes: > > > I tried the equivalent version with petsc in C++: > > > > Mat subA[4]; > > Mat A; > > ... > > MatCreateNest(comm,2,NULL,2,NULL,subA,&A); > > > > I did not face any issues. 
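For reference, a fuller stand-alone C sketch of that 2x2 MatCreateNest call might look like the following. The 6x6 block sizes, the diagonal values and the MakeBlock() helper are placeholders rather than anything from the original program; passing NULL for the index sets lets PETSc derive the row and column layouts from the blocks themselves, which are supplied in row-major order A00, A01, A10, A11.

#include <petscmat.h>

/* Hypothetical helper: build one m-by-m parallel AIJ block with v on its diagonal. */
static PetscErrorCode MakeBlock(MPI_Comm comm, PetscInt m, PetscScalar v, Mat *B)
{
  PetscErrorCode ierr;
  PetscInt       i, rstart, rend;

  PetscFunctionBeginUser;
  ierr = MatCreateAIJ(comm, PETSC_DECIDE, PETSC_DECIDE, m, m, 1, NULL, 0, NULL, B);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(*B, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(*B, i, i, v, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(*B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Mat            subA[4], A;   /* sub-blocks in row-major order: A00 A01 A10 A11 */
  PetscErrorCode ierr;
  PetscInt       i;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  for (i = 0; i < 4; i++) {
    ierr = MakeBlock(PETSC_COMM_WORLD, 6, (PetscScalar)(i + 1), &subA[i]);CHKERRQ(ierr);
  }
  /* 2x2 nest; NULL index sets mean the row/column layouts are taken from the blocks. */
  ierr = MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, subA, &A);CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  for (i = 0; i < 4; i++) {ierr = MatDestroy(&subA[i]);CHKERRQ(ierr);}
  ierr = PetscFinalize();
  return ierr;
}

Built against a 3.4-era PETSc this should assemble and view the nest on any number of processes; it is only meant to illustrate the ordering of the sub-matrix array.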
I am trying to map it to petsc4py. > > You should take Matt's suggestion. > > Sometimes it is best to look at the source. Lisandro his limited time > to work on petsc4py, and he spends it ensuring that everything works > correctly rather than carefully documenting the interface quirks. > Unfortunately, this means it can be tricky to guess how an interface > should be used. It will probably stay this way until either > cross-language documentation becomes more automated or someone finds > enough time to spend on petsc4py documentation. > > def createNest(self, mats, isrows=None, iscols=None, comm=None): > mats = [list(mat) for mat in mats] > if isrows: > isrows = list(isrows) > assert len(isrows) == len(mats) > else: > isrows = None > if iscols: > iscols = list(iscols) > assert len(iscols) == len(mats[0]) > else: > iscols = None > cdef MPI_Comm ccomm = def_Comm(comm, PETSC_COMM_DEFAULT) > cdef Py_ssize_t i, mr = len(mats) > cdef Py_ssize_t j, mc = len(mats[0]) > cdef PetscInt nr = mr > cdef PetscInt nc = mc > cdef PetscMat *cmats = NULL > cdef PetscIS *cisrows = NULL > cdef PetscIS *ciscols = NULL > cdef object tmp1, tmp2, tmp3 > tmp1 = oarray_p(empty_p(nr*nc), NULL, &cmats) > for i from 0 <= i < mr: > for j from 0 <= j < mc: > cmats[i*mc+j] = (mats[i][j]).mat > if isrows is not None: > tmp2 = oarray_p(empty_p(nr), NULL, &cisrows) > for i from 0 <= i < mr: cisrows[i] = (isrows[i]).iset > if iscols is not None: > tmp3 = oarray_p(empty_p(nc), NULL, &ciscols) > for j from 0 <= j < mc: ciscols[j] = (iscols[j]).iset > cdef PetscMat newmat = NULL > CHKERR( MatCreateNest(ccomm, nr, cisrows, nc, ciscols, cmats, > &newmat) ) > PetscCLEAR(self.obj); self.mat = newmat > return self > -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Fri Apr 4 13:53:33 2014 From: epscodes at gmail.com (Xiangdong) Date: Fri, 4 Apr 2014 14:53:33 -0400 Subject: [petsc-users] questions about snes/ex14 Message-ID: Hello everyone, I have some questions about running ex14 with different options -snes_fd -fdcoloring and -snes_mf. I run ex14 with -snes_max_it 1 to see how many function evaluations and linear solver iterations take. In this example, there are 64 unknowns. 1) -fdcoloring. For -fdcoloring, -snes_view says "total number of function evaluations=2" while the real number of calls of FormFunction (by putting a counter inside the function) is 30. Only 2 function calls reported by snes_view seems to be small. How does snes_view count the number of function evaluations? (For snes_fd, the number of function evaluations is 67 (64+3), which is normal.) If I use -snes_fd_color, it still says "total number of function evaluations=2" while the counter inside the FormFunction tells 12 now. 2) -snes_mf. For -snes_mf, "the total number of linear solver iterations=1". Given that no preconditioner (pcnone) is used, why does the gmres converges in just one iteration? Thank you. Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Fri Apr 4 15:57:41 2014 From: epscodes at gmail.com (Xiangdong) Date: Fri, 4 Apr 2014 16:57:41 -0400 Subject: [petsc-users] preconditioner for -snes_fd or -snes_fd_color Message-ID: Hello everyone, If we use -snes_fd or -snes_fd_color, can we still pass a preconditioner to snes/ksp? In snes/ex1.c, when I use -snes_fd, the program never calls the FormJacobian1 to get the preconditioner. Any suggestions about providing preconditioner to snes_fd or snes_fd_color? Thanks. 
Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... URL: From brune at mcs.anl.gov Fri Apr 4 16:02:08 2014 From: brune at mcs.anl.gov (Peter Brune) Date: Fri, 4 Apr 2014 16:02:08 -0500 Subject: [petsc-users] preconditioner for -snes_fd or -snes_fd_color In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 3:57 PM, Xiangdong wrote: > Hello everyone, > > If we use -snes_fd or -snes_fd_color, can we still pass a preconditioner > to snes/ksp? > > In snes/ex1.c, when I use -snes_fd, the program never calls the > FormJacobian1 to get the preconditioner. > FormJacobian1 forms... wait for it... the Jacobian! Not the preconditioner. -snes_fd and -snes_fd_color replace the Jacobian-forming routine. > > Any suggestions about providing preconditioner to snes_fd or snes_fd_color? > > Both of these explicitly create the approximate Jacobian matrix by using finite differences, so they may be used with any preconditioner. -snes_mf is a different beast entirely and merely approximates the action of the matrix without forming it, which is where the preconditioner issues come in. - Peter > Thanks. > > Xiangdong > -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Fri Apr 4 16:15:42 2014 From: epscodes at gmail.com (Xiangdong) Date: Fri, 4 Apr 2014 17:15:42 -0400 Subject: [petsc-users] preconditioner for -snes_fd or -snes_fd_color In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 5:02 PM, Peter Brune wrote: > > > > On Fri, Apr 4, 2014 at 3:57 PM, Xiangdong wrote: > >> Hello everyone, >> >> If we use -snes_fd or -snes_fd_color, can we still pass a preconditioner >> to snes/ksp? >> >> In snes/ex1.c, when I use -snes_fd, the program never calls the >> FormJacobian1 to get the preconditioner. >> > > FormJacobian1 forms... wait for it... the Jacobian! Not the > preconditioner. -snes_fd and -snes_fd_color replace the Jacobian-forming > routine. > It looks like the FormJacobian1 forms both Jacobian J and preconditioner B. My observations are 1) Without any options, FormJacobian1 can provide both Jacobian J and preconditioner B. 2) With -snes_fd, the program does not call the FormJacobian1 function at all, even if we have SNESSetJacobian in the codes. If it does not call the FormJacobian1 and SNESSetJacobian is ignored with -snes_fd, how can I provide the preconditioner to snes? Thank you. Xiangdong > > >> >> Any suggestions about providing preconditioner to snes_fd or >> snes_fd_color? >> >> > Both of these explicitly create the approximate Jacobian matrix by using > finite differences, so they may be used with any preconditioner. -snes_mf > is a different beast entirely and merely approximates the action of the > matrix without forming it, which is where the preconditioner issues come in. > > - Peter > > >> Thanks. >> >> Xiangdong >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Fri Apr 4 16:22:53 2014 From: epscodes at gmail.com (Xiangdong) Date: Fri, 4 Apr 2014 17:22:53 -0400 Subject: [petsc-users] preconditioner for -snes_fd or -snes_fd_color In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 5:15 PM, Xiangdong wrote: > > > > On Fri, Apr 4, 2014 at 5:02 PM, Peter Brune wrote: > >> >> >> >> On Fri, Apr 4, 2014 at 3:57 PM, Xiangdong wrote: >> >>> Hello everyone, >>> >>> If we use -snes_fd or -snes_fd_color, can we still pass a preconditioner >>> to snes/ksp? 
>>> >>> In snes/ex1.c, when I use -snes_fd, the program never calls the >>> FormJacobian1 to get the preconditioner. >>> >> >> FormJacobian1 forms... wait for it... the Jacobian! Not the >> preconditioner. -snes_fd and -snes_fd_color replace the Jacobian-forming >> routine. >> > > It looks like the FormJacobian1 forms both Jacobian J and preconditioner > B. My observations are > > 1) Without any options, FormJacobian1 can provide both Jacobian J and > preconditioner B. > 2) With -snes_fd, the program does not call the FormJacobian1 function at > all, even if we have SNESSetJacobian in the codes. > > If it does not call the FormJacobian1 and SNESSetJacobian is ignored with > -snes_fd, how can I provide the preconditioner to snes? > > Thank you. > > Xiangdong > > > >> >> >>> >>> Any suggestions about providing preconditioner to snes_fd or >>> snes_fd_color? >>> >>> >> Both of these explicitly create the approximate Jacobian matrix by using >> finite differences, so they may be used with any preconditioner. -snes_mf >> is a different beast entirely and merely approximates the action of the >> matrix without forming it, which is where the preconditioner issues come in. >> > Given that -snes_fd is explicitly formed, so -pc_type gamg or ilu will work. What happens if I want to provide my own preconditioner? Thank you. Xiangdong > >> - Peter >> >> >>> Thanks. >>> >>> Xiangdong >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prbrune at gmail.com Fri Apr 4 16:29:47 2014 From: prbrune at gmail.com (Peter Brune) Date: Fri, 4 Apr 2014 16:29:47 -0500 Subject: [petsc-users] preconditioner for -snes_fd or -snes_fd_color In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 4:22 PM, Xiangdong wrote: > > > > On Fri, Apr 4, 2014 at 5:15 PM, Xiangdong wrote: > >> >> >> >> On Fri, Apr 4, 2014 at 5:02 PM, Peter Brune wrote: >> >>> >>> >>> >>> On Fri, Apr 4, 2014 at 3:57 PM, Xiangdong wrote: >>> >>>> Hello everyone, >>>> >>>> If we use -snes_fd or -snes_fd_color, can we still pass a >>>> preconditioner to snes/ksp? >>>> >>>> In snes/ex1.c, when I use -snes_fd, the program never calls the >>>> FormJacobian1 to get the preconditioner. >>>> >>> >>> FormJacobian1 forms... wait for it... the Jacobian! Not the >>> preconditioner. -snes_fd and -snes_fd_color replace the Jacobian-forming >>> routine. >>> >> >> It looks like the FormJacobian1 forms both Jacobian J and preconditioner >> B. My observations are >> >> 1) Without any options, FormJacobian1 can provide both Jacobian J and >> preconditioner B. >> > From http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESJacobianFunction.html#SNESJacobianFunction: "Pmat - the matrix to be used in constructing the preconditioner, usually the same as Amat" The preconditioner is formed later using Pmat, whether or not it's just the same as the Jacobian. > 2) With -snes_fd, the program does not call the FormJacobian1 function at >> all, even if we have SNESSetJacobian in the codes. >> >> If it does not call the FormJacobian1 and SNESSetJacobian is ignored with >> -snes_fd, how can I provide the preconditioner to snes? >> >> Thank you. >> >> Xiangdong >> >> >> >>> >>> >>>> >>>> Any suggestions about providing preconditioner to snes_fd or >>>> snes_fd_color? >>>> >>>> >>> Both of these explicitly create the approximate Jacobian matrix by using >>> finite differences, so they may be used with any preconditioner. 
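One concrete way to pair a finite-difference Jacobian with a user-supplied preconditioner is a shell PC attached to the KSP inside the SNES, which Peter expands on just below. The sketch here is illustrative only: the two-unknown test problem and the fixed scaling in MyPCApply are stand-ins for a real residual and a real preconditioner apply, not anything from the original codes.

#include <petscsnes.h>

/* Toy residual: F0 = x0^2 + x0*x1 - 3, F1 = x0*x1 + x1^2 - 6 (solution x = (1,2)). */
static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  const PetscScalar *xx;
  PetscScalar       *ff;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetArrayRead(x, &xx);CHKERRQ(ierr);
  ierr = VecGetArray(f, &ff);CHKERRQ(ierr);
  ff[0] = xx[0]*xx[0] + xx[0]*xx[1] - 3.0;
  ff[1] = xx[0]*xx[1] + xx[1]*xx[1] - 6.0;
  ierr = VecRestoreArrayRead(x, &xx);CHKERRQ(ierr);
  ierr = VecRestoreArray(f, &ff);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Hypothetical custom preconditioner: here just a fixed scaling of the residual. */
static PetscErrorCode MyPCApply(PC pc, Vec r, Vec z)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCopy(r, z);CHKERRQ(ierr);
  ierr = VecScale(z, 0.5);CHKERRQ(ierr);   /* stand-in for a real preconditioner solve */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  SNES           snes;
  KSP            ksp;
  PC             pc;
  Vec            x, f;
  Mat            J;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateSeq(PETSC_COMM_SELF, 2, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &f);CHKERRQ(ierr);
  ierr = MatCreateSeqDense(PETSC_COMM_SELF, 2, 2, NULL, &J);CHKERRQ(ierr);

  ierr = SNESCreate(PETSC_COMM_SELF, &snes);CHKERRQ(ierr);
  ierr = SNESSetFunction(snes, f, FormFunction, NULL);CHKERRQ(ierr);
  /* No analytic Jacobian: let PETSc fill J by finite differences (same effect as -snes_fd). */
  ierr = SNESSetJacobian(snes, J, J, SNESComputeJacobianDefault, NULL);CHKERRQ(ierr);

  /* Attach the user-defined shell preconditioner to the KSP inside the SNES. */
  ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCSHELL);CHKERRQ(ierr);
  ierr = PCShellSetApply(pc, MyPCApply);CHKERRQ(ierr);

  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = SNESSolve(snes, NULL, x);CHKERRQ(ierr);
  ierr = VecView(x, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);

  ierr = SNESDestroy(&snes);CHKERRQ(ierr);
  ierr = MatDestroy(&J);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&f);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Running it with -snes_fd should behave the same as the explicit SNESComputeJacobianDefault call above, and the shell preconditioner is used either way because the finite-difference Jacobian is an ordinary assembled matrix.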
-snes_mf >>> is a different beast entirely and merely approximates the action of the >>> matrix without forming it, which is where the preconditioner issues come in. >>> >> > Given that -snes_fd is explicitly formed, so -pc_type gamg or ilu will > work. What happens if I want to provide my own preconditioner? > > If you have a custom preconditioner in mind, you could implement it in a PCSHELL http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCSHELL.htmlor register a custom preconditioner http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCRegister.html. If your preconditioner is merely a matrix you may use PCMAT http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMAT.html > Thank you. > > Xiangdong > > > >> >>> - Peter >>> >>> >>>> Thanks. >>>> >>>> Xiangdong >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Fri Apr 4 18:41:03 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 05 Apr 2014 07:41:03 +0800 Subject: [petsc-users] Use of DM to update ghost values Message-ID: <533F430F.10600@gmail.com> Hi, Supposed, I use DM to construct my grid. I would like to know if DM can be used to update the ghost values easily? If I can use staggered grid, does it mean I have to create 3 DM grids (x,y,z)? Also, how can I solve the momentum eqn involving u,v,w using KSP? What examples can I follow for the above requirements? Thank you. -- Yours sincerely, TAY wee-beng From knepley at gmail.com Fri Apr 4 19:03:15 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Apr 2014 19:03:15 -0500 Subject: [petsc-users] Use of DM to update ghost values In-Reply-To: <533F430F.10600@gmail.com> References: <533F430F.10600@gmail.com> Message-ID: On Fri, Apr 4, 2014 at 6:41 PM, TAY wee-beng wrote: > Hi, > > Supposed, I use DM to construct my grid. I would like to know if DM can be > used to update the ghost values easily? > DMGlobalToLocalBegin/End(). > If I can use staggered grid, does it mean I have to create 3 DM grids > (x,y,z)? > You can also just interpret components in different ways. See SNES ex30. > Also, how can I solve the momentum eqn involving u,v,w using KSP? > -pc_type gamg. > What examples can I follow for the above requirements? > SNES ex43, ex62, KSP ex48. Matt > Thank you. > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From venkateshgk.j at gmail.com Sat Apr 5 03:12:38 2014 From: venkateshgk.j at gmail.com (venkatesh g) Date: Sat, 5 Apr 2014 13:42:38 +0530 Subject: [petsc-users] reg: ex7.c takes large memory and gets killed Message-ID: Hi I am running "mpirun -np 64 -hostfile /cluster/hostnames ./ex7 -f1 A1 -f2 B1 -st_type sinvert -evecs outvec" It works for small matrices, it uses noticeable memory for small matrices (complex). However when I run with 39200x39200 complex matrices, it uses all the 480 GB ram and swap and gets killed. Am I doing something wrong ? Is there a solution. Pls let me know regards, Venkatesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sat Apr 5 05:06:35 2014 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Sat, 5 Apr 2014 12:06:35 +0200 Subject: [petsc-users] reg: ex7.c takes large memory and gets killed In-Reply-To: References: Message-ID: <401D5C70-FD90-4503-A21A-6FD0B1B443D7@dsic.upv.es> El 05/04/2014, a las 10:12, venkatesh g escribi?: > Hi > > I am running "mpirun -np 64 -hostfile /cluster/hostnames ./ex7 -f1 A1 -f2 B1 -st_type sinvert -evecs outvec" > > It works for small matrices, it uses noticeable memory for small matrices (complex). > > However when I run with 39200x39200 complex matrices, it uses all the 480 GB ram and swap and gets killed. > > Am I doing something wrong ? Is there a solution. > > Pls let me know > > regards, > Venkatesh You need to use a parallel direct linear solver, or alternatively an iterative linear solver. See section 3.4.1 of SLEPc users guide. Jose From zonexo at gmail.com Sat Apr 5 07:43:11 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 05 Apr 2014 20:43:11 +0800 Subject: [petsc-users] Use of DM to update ghost values In-Reply-To: References: <533F430F.10600@gmail.com> Message-ID: <533FFA5F.5090909@gmail.com> On 5/4/2014 8:03 AM, Matthew Knepley wrote: > On Fri, Apr 4, 2014 at 6:41 PM, TAY wee-beng > wrote: > > Hi, > > Supposed, I use DM to construct my grid. I would like to know if > DM can be used to update the ghost values easily? > > > DMGlobalToLocalBegin/End(). > > If I can use staggered grid, does it mean I have to create 3 DM > grids (x,y,z)? > > > You can also just interpret components in different ways. See SNES ex30. > > Also, how can I solve the momentum eqn involving u,v,w using KSP? > > > -pc_type gamg. > > What examples can I follow for the above requirements? > > > SNES ex43, ex62, KSP ex48. Hi, I just checked. There's no ksp ex48. Are you referring to ex46? In ex46, it mentions that it is similar to ex2 but with DM. What is the advantage of DM? Also, Can I use non uniform grids with DM? Thanks > > Matt > > Thank you. > > -- > Yours sincerely, > > TAY wee-beng > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From popov at uni-mainz.de Sat Apr 5 08:19:35 2014 From: popov at uni-mainz.de (Anton Popov) Date: Sat, 5 Apr 2014 15:19:35 +0200 Subject: [petsc-users] Use of DM to update ghost values In-Reply-To: <533F430F.10600@gmail.com> References: <533F430F.10600@gmail.com> Message-ID: <534002E7.9060605@uni-mainz.de> On 4/5/14 1:41 AM, TAY wee-beng wrote: > Hi, > > Supposed, I use DM to construct my grid. I would like to know if DM > can be used to update the ghost values easily? > > If I can use staggered grid, does it mean I have to create 3 DM grids > (x,y,z)? Yes this is how I do it currently. But this is not optimal, because you have to scatter to ghost points four times (vx, vy, vz, p separately). Just do it this way to start with, because by far it's the simplest implementation. Alternatively you can create 2 distributed vectors, the one that contains non-overlapping partitioning of the grid points (to setup SNES, KSP, PC), and another overlapping one that contains all ghost points you need to compute your residual/Jacobian. Depending on the problem you solve this can be either star or box stencil (for each solution component). Then setup scatter context. 
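For concreteness, a minimal sketch of this pattern in C (the names xGlobal, xGhosted, isFrom and isTo are illustrative; the two index sets come from your own ghost-point layout):

   Vec            xGlobal, xGhosted;   /* non-overlapping and overlapping (ghosted) vectors */
   VecScatter     ctx;
   PetscErrorCode ierr;

   /* map entries of xGlobal selected by isFrom onto entries of xGhosted selected by isTo */
   ierr = VecScatterCreate(xGlobal,isFrom,xGhosted,isTo,&ctx);CHKERRQ(ierr);

   /* before computing the residual: pull off-processor values into the ghosted vector */
   ierr = VecScatterBegin(ctx,xGlobal,xGhosted,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
   ierr = VecScatterEnd(ctx,xGlobal,xGhosted,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);

   /* during assembly: accumulate local contributions back into the global vector */
   ierr = VecScatterBegin(ctx,xGhosted,xGlobal,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);
   ierr = VecScatterEnd(ctx,xGhosted,xGlobal,ADD_VALUES,SCATTER_REVERSE);CHKERRQ(ierr);

   ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);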
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecScatterCreate.html You can reuse the same context for both accessing off-processor values and residual assembly, just need to change scatter mode SCATTER_FORWARD / SCATTER_REVERSE and insert mode ADD_VALUES / INSERT_VALUES http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecScatterBegin.html Anton > > Also, how can I solve the momentum eqn involving u,v,w using KSP? > > What examples can I follow for the above requirements? > > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhyde at stanford.edu Sat Apr 5 12:25:37 2014 From: mrhyde at stanford.edu (D H) Date: Sat, 5 Apr 2014 10:25:37 -0700 (PDT) Subject: [petsc-users] KSP in OpenMP Parallel For Loop In-Reply-To: <784705799.13314379.1396718120834.JavaMail.zimbra@stanford.edu> Message-ID: <510006714.13321195.1396718737252.JavaMail.zimbra@stanford.edu> Hi, I have a C++ program where I would like to call some of PETSc's KSP methods (KSPCreate, KSPSolve, etc.) from inside a for loop that has a "#pragma omp parallel for" in front of it. Without this OpenMP pragma, my code runs fine. But when I add in this parallelism, my program segfaults with PETSc reporting some memory corruption errors. I've read online in a few places that PETSc is not thread-safe, but before I give up hope, I thought I would ask to see if anyone has had success working with KSP routines when they are being called simultaneously from multiple threads (or whether such a feat is definitely not possible with PETSc). Thanks very much for your advice! Best, David From bsmith at mcs.anl.gov Sat Apr 5 13:34:37 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 5 Apr 2014 13:34:37 -0500 Subject: [petsc-users] KSP in OpenMP Parallel For Loop In-Reply-To: <510006714.13321195.1396718737252.JavaMail.zimbra@stanford.edu> References: <510006714.13321195.1396718737252.JavaMail.zimbra@stanford.edu> Message-ID: There is a branch in the petsc bitbucket repository http://www.mcs.anl.gov/petsc/developers/index.html called barry/make-petscoptionsobject-nonglobal where one can call the PETSc operations in threads without any conflicts. Otherwise it just won?t work. Barry On Apr 5, 2014, at 12:25 PM, D H wrote: > Hi, > > I have a C++ program where I would like to call some of PETSc's KSP methods (KSPCreate, KSPSolve, etc.) from inside a for loop that has a "#pragma omp parallel for" in front of it. Without this OpenMP pragma, my code runs fine. But when I add in this parallelism, my program segfaults with PETSc reporting some memory corruption errors. > > I've read online in a few places that PETSc is not thread-safe, but before I give up hope, I thought I would ask to see if anyone has had success working with KSP routines when they are being called simultaneously from multiple threads (or whether such a feat is definitely not possible with PETSc). Thanks very much for your advice! > > Best, > > David From zonexo at gmail.com Sun Apr 6 09:47:06 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 06 Apr 2014 22:47:06 +0800 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End Message-ID: <534168EA.8010902@gmail.com> Hi, I have questions abt DM and DMDALocalToLocalBegin / End. I have 3 dof for u,v,w using DMDACreate3d. Currently I'm using the topology of DM to help update the ghost values on each processor. I found that to update the ghost values, I only need to use DMDALocalToLocalBegin / End. Is this so? 
Initially, I was using : DMLocalToGlobalBegin / End and then DMGlobalToLocalBegin / End. Another thing is updating ghost values of only a single variable e.g. u. Is that possible if my DMDACreate3d has 3 dof u,v,w? Or must I create DMDACreate3d with only 1 dof u? -- Thank you. Yours sincerely, TAY wee-beng From jed at jedbrown.org Sun Apr 6 09:54:54 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 06 Apr 2014 08:54:54 -0600 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <534168EA.8010902@gmail.com> References: <534168EA.8010902@gmail.com> Message-ID: <87y4zi8q4h.fsf@jedbrown.org> TAY wee-beng writes: > Hi, > > I have questions abt DM and DMDALocalToLocalBegin / End. > > I have 3 dof for u,v,w using DMDACreate3d. > > Currently I'm using the topology of DM to help update the ghost values > on each processor. > > I found that to update the ghost values, I only need to use > DMDALocalToLocalBegin / End. It is common for explicit methods to be implemented using only local vectors. I recommend using global vectors if you might want to use implicit methods. > Another thing is updating ghost values of only a single variable e.g. u. > > Is that possible if my DMDACreate3d has 3 dof u,v,w? > > Or must I create DMDACreate3d with only 1 dof u? Just update all variables. It is usually not very wasteful. If profiling and performance analysis shows that your method is limited by network bandwidth (as opposed to latency or local compute and bandwidth) then you can profile the same operations having extracted the single variable. It is usually not worth it and you're better off structuring your code to work with all variables. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From zonexo at gmail.com Sun Apr 6 10:00:00 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 06 Apr 2014 23:00:00 +0800 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <87y4zi8q4h.fsf@jedbrown.org> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> Message-ID: <53416BF0.1030909@gmail.com> On 6/4/2014 10:54 PM, Jed Brown wrote: > TAY wee-beng writes: > >> Hi, >> >> I have questions abt DM and DMDALocalToLocalBegin / End. >> >> I have 3 dof for u,v,w using DMDACreate3d. >> >> Currently I'm using the topology of DM to help update the ghost values >> on each processor. >> >> I found that to update the ghost values, I only need to use >> DMDALocalToLocalBegin / End. > It is common for explicit methods to be implemented using only local > vectors. I recommend using global vectors if you might want to use > implicit methods. Hi Jed, May I know what you mean by explicit or implicit mtds? Thanks. > >> Another thing is updating ghost values of only a single variable e.g. u. >> >> Is that possible if my DMDACreate3d has 3 dof u,v,w? >> >> Or must I create DMDACreate3d with only 1 dof u? > Just update all variables. It is usually not very wasteful. If > profiling and performance analysis shows that your method is limited by > network bandwidth (as opposed to latency or local compute and bandwidth) > then you can profile the same operations having extracted the single > variable. It is usually not worth it and you're better off structuring > your code to work with all variables. 
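For reference, the ghost update with global and local vectors recommended above looks roughly like this in C (a minimal sketch; it assumes a DMDA called da created with dof=3 for u,v,w, and the Fortran calls are analogous):

   Vec            Xglobal, Xlocal;
   PetscErrorCode ierr;

   ierr = DMCreateGlobalVector(da,&Xglobal);CHKERRQ(ierr);  /* no ghost points; what KSP/SNES see */
   ierr = DMCreateLocalVector(da,&Xlocal);CHKERRQ(ierr);    /* includes the ghost region          */

   /* ... fill or solve for Xglobal ... */

   /* fill the ghost region of Xlocal with values owned by neighboring processes */
   ierr = DMGlobalToLocalBegin(da,Xglobal,INSERT_VALUES,Xlocal);CHKERRQ(ierr);
   ierr = DMGlobalToLocalEnd(da,Xglobal,INSERT_VALUES,Xlocal);CHKERRQ(ierr);

   /* all three components, including ghosts, can now be read through
      DMDAVecGetArrayDOF(da,Xlocal,&array) / DMDAVecRestoreArrayDOF(da,Xlocal,&array) */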
From jed at jedbrown.org Sun Apr 6 10:04:54 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 06 Apr 2014 09:04:54 -0600 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <53416BF0.1030909@gmail.com> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> <53416BF0.1030909@gmail.com> Message-ID: <87vbum8pnt.fsf@jedbrown.org> TAY wee-beng writes: > On 6/4/2014 10:54 PM, Jed Brown wrote: >> It is common for explicit methods to be implemented using only local >> vectors. I recommend using global vectors if you might want to use >> implicit methods. > Hi Jed, > > May I know what you mean by explicit or implicit mtds? Explicit versus implicit time integration; like forward versus backward Euler. There are some abstraction corners that can be cut more easily with explicit methods. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From zonexo at gmail.com Sun Apr 6 21:31:19 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 07 Apr 2014 10:31:19 +0800 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <87y4zi8q4h.fsf@jedbrown.org> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> Message-ID: <53420DF7.7080304@gmail.com> Hi, I understand that I need to use call DMDAVecGetArrayF90(da,u_local,u_array,ierr) if I want to access the u_local values. And then DMDAVecRestoreArrayF90 once it's done. I need to access the u_local values many times for each time step. So must I restore the array after each access? Or do I only have to do it when I need to call some routines like DMLocalToGlobalBegin, DMDALocalToLocalBegin etc? Thank you Yours sincerely, TAY wee-beng On 6/4/2014 10:54 PM, Jed Brown wrote: > TAY wee-beng writes: > >> Hi, >> >> I have questions abt DM and DMDALocalToLocalBegin / End. >> >> I have 3 dof for u,v,w using DMDACreate3d. >> >> Currently I'm using the topology of DM to help update the ghost values >> on each processor. >> >> I found that to update the ghost values, I only need to use >> DMDALocalToLocalBegin / End. > It is common for explicit methods to be implemented using only local > vectors. I recommend using global vectors if you might want to use > implicit methods. > >> Another thing is updating ghost values of only a single variable e.g. u. >> >> Is that possible if my DMDACreate3d has 3 dof u,v,w? >> >> Or must I create DMDACreate3d with only 1 dof u? > Just update all variables. It is usually not very wasteful. If > profiling and performance analysis shows that your method is limited by > network bandwidth (as opposed to latency or local compute and bandwidth) > then you can profile the same operations having extracted the single > variable. It is usually not worth it and you're better off structuring > your code to work with all variables. From jed at jedbrown.org Sun Apr 6 21:40:02 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 06 Apr 2014 20:40:02 -0600 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <53420DF7.7080304@gmail.com> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> <53420DF7.7080304@gmail.com> Message-ID: <87sipp6ewt.fsf@jedbrown.org> TAY wee-beng writes: > Hi, > > I understand that I need to use > > call DMDAVecGetArrayF90(da,u_local,u_array,ierr) > > if I want to access the u_local values. > > And then DMDAVecRestoreArrayF90 once it's done. 
> > I need to access the u_local values many times for each time step. So > must I restore the array after each access? You would normally get access once, then do one or more loops over your domain, then restore the array. Accessing and restoring the array is very cheap compared to looping over the domain, but might not be negligible if called once for each degree of freedom. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From zonexo at gmail.com Sun Apr 6 21:50:37 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 07 Apr 2014 10:50:37 +0800 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <87sipp6ewt.fsf@jedbrown.org> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> <53420DF7.7080304@gmail.com> <87sipp6ewt.fsf@jedbrown.org> Message-ID: <5342127D.5020302@gmail.com> On 7/4/2014 10:40 AM, Jed Brown wrote: > but might not be > negligible if called once for each degree of freedom. Sorry I don't understand this sentence. So you mean if I have 3 dof u,v,w, it will not be cheap if I access and restore it many times. Is that so? From jed at jedbrown.org Sun Apr 6 21:54:48 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 06 Apr 2014 20:54:48 -0600 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <5342127D.5020302@gmail.com> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> <53420DF7.7080304@gmail.com> <87sipp6ewt.fsf@jedbrown.org> <5342127D.5020302@gmail.com> Message-ID: <87mwfx6e87.fsf@jedbrown.org> TAY wee-beng writes: > On 7/4/2014 10:40 AM, Jed Brown wrote: >> but might not be >> negligible if called once for each degree of freedom. > Sorry I don't understand this sentence. > > So you mean if I have 3 dof u,v,w, it will not be cheap if I access and > restore it many times. Is that so? It means that you should not do this: do k=1,zm do j=1,ym do i=1,zm call DMDAVecGetArrayF90(... x,ierr) x(i,j,k) = something call DMDAVecRestoreArrayF90(... x,ierr) end end end Lift the call out of the loop. call DMDAVecGetArrayF90(... x,ierr) do k=1,zm do j=1,ym do i=1,zm x(i,j,k) = something end end end call DMDAVecRestoreArrayF90(... x,ierr) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From zonexo at gmail.com Sun Apr 6 21:59:21 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 07 Apr 2014 10:59:21 +0800 Subject: [petsc-users] Questions abt DM and DMDALocalToLocalBegin / End In-Reply-To: <87mwfx6e87.fsf@jedbrown.org> References: <534168EA.8010902@gmail.com> <87y4zi8q4h.fsf@jedbrown.org> <53420DF7.7080304@gmail.com> <87sipp6ewt.fsf@jedbrown.org> <5342127D.5020302@gmail.com> <87mwfx6e87.fsf@jedbrown.org> Message-ID: <53421489.2070502@gmail.com> Hi Jed, Thanks for the clarifications. Yours sincerely, TAY wee-beng On 7/4/2014 10:54 AM, Jed Brown wrote: > TAY wee-beng writes: > >> On 7/4/2014 10:40 AM, Jed Brown wrote: >>> but might not be >>> negligible if called once for each degree of freedom. >> Sorry I don't understand this sentence. >> >> So you mean if I have 3 dof u,v,w, it will not be cheap if I access and >> restore it many times. Is that so? > It means that you should not do this: > > do k=1,zm > do j=1,ym > do i=1,zm > call DMDAVecGetArrayF90(... x,ierr) > x(i,j,k) = something > call DMDAVecRestoreArrayF90(... 
x,ierr) > end > end > end > > Lift the call out of the loop. > > call DMDAVecGetArrayF90(... x,ierr) > do k=1,zm > do j=1,ym > do i=1,zm > x(i,j,k) = something > end > end > end > call DMDAVecRestoreArrayF90(... x,ierr) From zonexo at gmail.com Mon Apr 7 05:16:50 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 07 Apr 2014 18:16:50 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised Message-ID: <53427B12.7030307@gmail.com> Hi, I encountered the error below when compiling my code using intel fortran: /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler error: segmentation violation signal raised** Please report this error along with the circumstances in which it occurred in a Software Problem Report. Note: File and line given may not be explicit cause of this error. In the end, I realised that it is due to using *petsc.h90*: module PETSc_solvers use set_matrix ... implicit none contains subroutine semi_momentum_simple_xyz(du,dv,dw) *#include "finclude/petsc.h90"* integer :: i,j,k,ijk,ierr,II !,ro... If I use : *#include "finclude/petsc.h" or * *#include "finclude/petscdmda.h90"* *#include "finclude/petscksp.h90"* Then there is no problem. May I know why this is happening? Although I can now compile and build successfully, is this the right way to go? -- Thank you Yours sincerely, TAY wee-beng -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Mon Apr 7 05:19:25 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 07 Apr 2014 18:19:25 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <53427B12.7030307@gmail.com> References: <53427B12.7030307@gmail.com> Message-ID: <53427BAD.3090201@gmail.com> Sorry I realised that *#include "finclude/petscdmda.h90"* *#include "finclude/petscksp.h90"* also gave errors: *//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): error #5082: Syntax error, found ',' when expecting one of: ( % [ : . = =>/**/ /**/ PetscInt, pointer :: array(:)/**/ /**/------------------^/**/ /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): error #5082: Syntax error, found END-OF-STATEMENT when expecting one of: ) ,/**/ /**/ PetscInt, pointer :: array(:)/**/ /**/---------------------------------------^/**/ /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): error #5082: Syntax error, found IDENTIFIER 'N' when expecting one of: ( % [ : . = =>/**/ /**/ PetscInt n/**/ /**/--------------------^/**/ /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): error #5082: Syntax error, found IDENTIFIER 'IERR' when expecting one of: ( % [ : . = =>/**/ /**/ PetscErrorCode ierr/**/ /**/-------------------------^/**/ /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): error #5082: Syntax error, found IDENTIFIER 'V' when expecting one of: ( % [ : . = =>/* Thank you Yours sincerely, TAY wee-beng On 7/4/2014 6:16 PM, TAY wee-beng wrote: > Hi, > > I encountered the error below when compiling my code using intel fortran: > > /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler error: > segmentation violation signal raised** Please report this error along > with the circumstances in which it occurred in a Software Problem > Report. Note: File and line given may not be explicit cause of this > error. 
> > In the end, I realised that it is due to using *petsc.h90*: > > module PETSc_solvers > > use set_matrix > > ... > > implicit none > > contains > > subroutine semi_momentum_simple_xyz(du,dv,dw) > > *#include "finclude/petsc.h90"* > > integer :: i,j,k,ijk,ierr,II !,ro... > > If I use : > > *#include "finclude/petsc.h" > > or > * > *#include "finclude/petscdmda.h90"* > *#include "finclude/petscksp.h90"* > > Then there is no problem. > > May I know why this is happening? > > Although I can now compile and build successfully, is this the right > way to go? > > -- > Thank you > > Yours sincerely, > > TAY wee-beng -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 7 11:40:12 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Apr 2014 11:40:12 -0500 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <53427BAD.3090201@gmail.com> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> Message-ID: How about just including petsc.h90 so you get everything. Thanks, Matt On Mon, Apr 7, 2014 at 5:19 AM, TAY wee-beng wrote: > Sorry I realised that > > *#include "finclude/petscdmda.h90"* > *#include "finclude/petscksp.h90"* > > also gave errors: > > */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): > error #5082: Syntax error, found ',' when expecting one of: ( % [ : . = =>* > * PetscInt, pointer :: array(:)* > *------------------^* > */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): > error #5082: Syntax error, found END-OF-STATEMENT when expecting one of: ) > ,* > * PetscInt, pointer :: array(:)* > *---------------------------------------^* > */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): > error #5082: Syntax error, found IDENTIFIER 'N' when expecting one of: ( % > [ : . = =>* > * PetscInt n* > *--------------------^* > */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): > error #5082: Syntax error, found IDENTIFIER 'IERR' when expecting one of: ( > % [ : . = =>* > * PetscErrorCode ierr* > *-------------------------^* > */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): > error #5082: Syntax error, found IDENTIFIER 'V' when expecting one of: ( % > [ : . = =>* > > Thank you > > Yours sincerely, > > TAY wee-beng > > On 7/4/2014 6:16 PM, TAY wee-beng wrote: > > Hi, > > I encountered the error below when compiling my code using intel fortran: > > /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler error: > segmentation violation signal raised** Please report this error along with > the circumstances in which it occurred in a Software Problem Report. Note: > File and line given may not be explicit cause of this error. > > In the end, I realised that it is due to using *petsc.h90*: > > module PETSc_solvers > > use set_matrix > > ... > > implicit none > > contains > > subroutine semi_momentum_simple_xyz(du,dv,dw) > > *#include "finclude/petsc.h90"* > > integer :: i,j,k,ijk,ierr,II !,ro... > > If I use : > > > > > *#include "finclude/petsc.h" or * > *#include "finclude/petscdmda.h90"* > *#include "finclude/petscksp.h90"* > > Then there is no problem. > > May I know why this is happening? > > Although I can now compile and build successfully, is this the right way > to go? 
> > -- > Thank you > > Yours sincerely, > > TAY wee-beng > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cvpoulton at gmail.com Mon Apr 7 12:48:46 2014 From: cvpoulton at gmail.com (Christopher Poulton) Date: Mon, 7 Apr 2014 11:48:46 -0600 Subject: [petsc-users] Question on maximum size of SLEPc matrices Message-ID: Hey SLEPc team, I want to create a 3D Maxwell's Equation eigenmode solver and your library seems to be a good option because it supports complex numbers. How large can your matrices be and still solve in a decent amount of time? Memory is not a problem on my computer (144GB). Thanks, Chris Poulton - CU Boulder -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Mon Apr 7 12:58:23 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 7 Apr 2014 19:58:23 +0200 Subject: [petsc-users] Question on maximum size of SLEPc matrices In-Reply-To: References: Message-ID: On 07/04/2014, Christopher Poulton wrote: > Hey SLEPc team, > > I want to create a 3D Maxwell's Equation eigenmode solver and your library seems to be a good option because it supports complex numbers. How large can your matrices be and still solve in a decent amount of time? Memory is not a problem on my computer (144GB). > > Thanks, > Chris Poulton - CU Boulder If matrices are sparse, you can probably solve problems with millions of degrees of freedom, provided that you do not request too many eigenpairs. Jose From zonexo at gmail.com Mon Apr 7 18:46:56 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 08 Apr 2014 07:46:56 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> Message-ID: <534338F0.8060108@gmail.com> On 8/4/2014 12:40 AM, Matthew Knepley wrote: > How about just including petsc.h90 so you get everything. > > Thanks, > > Matt > Sorry this is my 2nd email. See below for the 1st email. Using petsc.h90 gave the error "... segmentation violation signal raised ...." > > On Mon, Apr 7, 2014 at 5:19 AM, TAY wee-beng > wrote: > > Sorry I realised that > > *#include "finclude/petscdmda.h90"* > *#include "finclude/petscksp.h90"* > > also gave errors: > > *//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): > error #5082: Syntax error, found ',' when expecting one of: ( % [ > : . = =>/**/ > /**/ PetscInt, pointer :: array(:)/**/ > /**/------------------^/**/ > /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): > error #5082: Syntax error, found END-OF-STATEMENT when expecting > one of: ) ,/**/ > /**/ PetscInt, pointer :: array(:)/**/ > /**/---------------------------------------^/**/ > /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): > error #5082: Syntax error, found IDENTIFIER 'N' when expecting one > of: ( % [ : . = =>/**/ > /**/ PetscInt n/**/ > /**/--------------------^/**/ > /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): > error #5082: Syntax error, found IDENTIFIER 'IERR' when expecting > one of: ( % [ : . 
= =>/**/ > /**/ PetscErrorCode ierr/**/ > /**/-------------------------^/**/ > /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): > error #5082: Syntax error, found IDENTIFIER 'V' when expecting one > of: ( % [ : . = =>/* > > Thank you > > Yours sincerely, > > TAY wee-beng > > On 7/4/2014 6:16 PM, TAY wee-beng wrote: > Here's the 1st email: > >> Hi, >> >> I encountered the error below when compiling my code using intel >> fortran: >> >> /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler >> error: segmentation violation signal raised** Please report this >> error along with the circumstances in which it occurred in a >> Software Problem Report. Note: File and line given may not be >> explicit cause of this error. >> >> In the end, I realised that it is due to using *petsc.h90*: >> >> module PETSc_solvers >> >> use set_matrix >> >> ... >> >> implicit none >> >> contains >> >> subroutine semi_momentum_simple_xyz(du,dv,dw) >> >> *#include "finclude/petsc.h90"* >> >> integer :: i,j,k,ijk,ierr,II !,ro... >> >> If I use : >> >> *#include "finclude/petsc.h" >> >> or >> * >> *#include "finclude/petscdmda.h90"* >> *#include "finclude/petscksp.h90"* >> >> Then there is no problem. >> >> May I know why this is happening? >> >> Although I can now compile and build successfully, is this the >> right way to go? >> >> -- >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 7 18:58:01 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Apr 2014 18:58:01 -0500 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <534338F0.8060108@gmail.com> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> Message-ID: On Mon, Apr 7, 2014 at 6:46 PM, TAY wee-beng wrote: > On 8/4/2014 12:40 AM, Matthew Knepley wrote: > > How about just including petsc.h90 so you get everything. > > Thanks, > > Matt > > Sorry this is my 2nd email. See below for the 1st email. Using petsc.h90 > gave the error "... segmentation violation signal raised .... > Right, the solution there is not to use a buggy compiler. I suggest gcc. It is also faster for a lot of code than Intel. Matt > On Mon, Apr 7, 2014 at 5:19 AM, TAY wee-beng wrote: > >> Sorry I realised that >> >> *#include "finclude/petscdmda.h90"* >> *#include "finclude/petscksp.h90"* >> >> also gave errors: >> >> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): >> error #5082: Syntax error, found ',' when expecting one of: ( % [ : . = =>* >> * PetscInt, pointer :: array(:)* >> *------------------^* >> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): >> error #5082: Syntax error, found END-OF-STATEMENT when expecting one of: ) >> ,* >> * PetscInt, pointer :: array(:)* >> *---------------------------------------^* >> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): >> error #5082: Syntax error, found IDENTIFIER 'N' when expecting one of: ( % >> [ : . 
= =>* >> * PetscInt n* >> *--------------------^* >> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): >> error #5082: Syntax error, found IDENTIFIER 'IERR' when expecting one of: ( >> % [ : . = =>* >> * PetscErrorCode ierr* >> *-------------------------^* >> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): >> error #5082: Syntax error, found IDENTIFIER 'V' when expecting one of: ( % >> [ : . = =>* >> >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 7/4/2014 6:16 PM, TAY wee-beng wrote: >> > > Here's the 1st email: > > Hi, >> >> I encountered the error below when compiling my code using intel fortran: >> >> /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler error: >> segmentation violation signal raised** Please report this error along with >> the circumstances in which it occurred in a Software Problem Report. Note: >> File and line given may not be explicit cause of this error. >> >> In the end, I realised that it is due to using *petsc.h90*: >> >> module PETSc_solvers >> >> use set_matrix >> >> ... >> >> implicit none >> >> contains >> >> subroutine semi_momentum_simple_xyz(du,dv,dw) >> >> *#include "finclude/petsc.h90"* >> >> integer :: i,j,k,ijk,ierr,II !,ro... >> >> If I use : >> >> >> >> >> *#include "finclude/petsc.h" or * >> *#include "finclude/petscdmda.h90"* >> *#include "finclude/petscksp.h90"* >> >> Then there is no problem. >> >> May I know why this is happening? >> >> Although I can now compile and build successfully, is this the right way >> to go? >> >> -- >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 7 20:08:15 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Apr 2014 20:08:15 -0500 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> Message-ID: <0BF6261D-BC18-4006-8FFA-595A451FF729@mcs.anl.gov> Did you submit an Intel "Software Problem Report? as they requested? Perhaps they already have updates that fix the problem. Barry On Apr 7, 2014, at 6:58 PM, Matthew Knepley wrote: > On Mon, Apr 7, 2014 at 6:46 PM, TAY wee-beng wrote: > On 8/4/2014 12:40 AM, Matthew Knepley wrote: >> How about just including petsc.h90 so you get everything. >> >> Thanks, >> >> Matt >> > Sorry this is my 2nd email. See below for the 1st email. Using petsc.h90 gave the error "... segmentation violation signal raised .... > > Right, the solution there is not to use a buggy compiler. I suggest gcc. It is also faster for a lot of code than Intel. > > Matt > >> On Mon, Apr 7, 2014 at 5:19 AM, TAY wee-beng wrote: >> Sorry I realised that >> >> #include "finclude/petscdmda.h90" >> #include "finclude/petscksp.h90" >> >> also gave errors: >> >> /home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): error #5082: Syntax error, found ',' when expecting one of: ( % [ : . 
= => >> PetscInt, pointer :: array(:) >> ------------------^ >> /home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): error #5082: Syntax error, found END-OF-STATEMENT when expecting one of: ) , >> PetscInt, pointer :: array(:) >> ---------------------------------------^ >> /home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): error #5082: Syntax error, found IDENTIFIER 'N' when expecting one of: ( % [ : . = => >> PetscInt n >> --------------------^ >> /home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): error #5082: Syntax error, found IDENTIFIER 'IERR' when expecting one of: ( % [ : . = => >> PetscErrorCode ierr >> -------------------------^ >> /home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): error #5082: Syntax error, found IDENTIFIER 'V' when expecting one of: ( % [ : . = => >> >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 7/4/2014 6:16 PM, TAY wee-beng wrote: > > Here's the 1st email: >>> Hi, >>> >>> I encountered the error below when compiling my code using intel fortran: >>> >>> /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler error: segmentation violation signal raised** Please report this error along with the circumstances in which it occurred in a Software Problem Report. Note: File and line given may not be explicit cause of this error. >>> >>> In the end, I realised that it is due to using petsc.h90: >>> >>> module PETSc_solvers >>> >>> use set_matrix >>> >>> ... >>> >>> implicit none >>> >>> contains >>> >>> subroutine semi_momentum_simple_xyz(du,dv,dw) >>> >>> #include "finclude/petsc.h90" >>> >>> integer :: i,j,k,ijk,ierr,II !,ro... >>> >>> If I use : >>> >>> #include "finclude/petsc.h" >>> >>> or >>> >>> #include "finclude/petscdmda.h90" >>> #include "finclude/petscksp.h90" >>> >>> Then there is no problem. >>> >>> May I know why this is happening? >>> >>> Although I can now compile and build successfully, is this the right way to go? >>> >>> -- >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From zonexo at gmail.com Tue Apr 8 06:26:46 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 08 Apr 2014 19:26:46 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> Message-ID: <5343DCF6.9070100@gmail.com> On 8/4/2014 7:58 AM, Matthew Knepley wrote: > On Mon, Apr 7, 2014 at 6:46 PM, TAY wee-beng > wrote: > > On 8/4/2014 12:40 AM, Matthew Knepley wrote: >> How about just including petsc.h90 so you get everything. >> >> Thanks, >> >> Matt >> > Sorry this is my 2nd email. See below for the 1st email. Using > petsc.h90 gave the error "... segmentation violation signal raised > .... > > > Right, the solution there is not to use a buggy compiler. I suggest > gcc. It is also faster for a lot of code than Intel. My impression was intel is mostly faster. However, does it apply to gfortran too? Is it also faster for a lot of code than Intel fortran? 
I'll give it a go if it's so. However, I remember changing a no. of options to build and in the end, it was slower. that's a few yrs ago though. > > Matt > >> On Mon, Apr 7, 2014 at 5:19 AM, TAY wee-beng > > wrote: >> >> Sorry I realised that >> >> *#include "finclude/petscdmda.h90"* >> *#include "finclude/petscksp.h90"* >> >> also gave errors: >> >> *//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): >> error #5082: Syntax error, found ',' when expecting one of: ( >> % [ : . = =>/**/ >> /**/ PetscInt, pointer :: array(:)/**/ >> /**/------------------^/**/ >> /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): >> error #5082: Syntax error, found END-OF-STATEMENT when >> expecting one of: ) ,/**/ >> /**/ PetscInt, pointer :: array(:)/**/ >> /**/---------------------------------------^/**/ >> /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): >> error #5082: Syntax error, found IDENTIFIER 'N' when >> expecting one of: ( % [ : . = =>/**/ >> /**/ PetscInt n/**/ >> /**/--------------------^/**/ >> /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): >> error #5082: Syntax error, found IDENTIFIER 'IERR' when >> expecting one of: ( % [ : . = =>/**/ >> /**/ PetscErrorCode ierr/**/ >> /**/-------------------------^/**/ >> /**//home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): >> error #5082: Syntax error, found IDENTIFIER 'V' when >> expecting one of: ( % [ : . = =>/* >> >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 7/4/2014 6:16 PM, TAY wee-beng wrote: >> > > Here's the 1st email: >> >>> Hi, >>> >>> I encountered the error below when compiling my code using >>> intel fortran: >>> >>> /tmp/ifortlPEDlK.i90: catastrophic error: **Internal >>> compiler error: segmentation violation signal raised** >>> Please report this error along with the circumstances in >>> which it occurred in a Software Problem Report. Note: File >>> and line given may not be explicit cause of this error. >>> >>> In the end, I realised that it is due to using *petsc.h90*: >>> >>> module PETSc_solvers >>> >>> use set_matrix >>> >>> ... >>> >>> implicit none >>> >>> contains >>> >>> subroutine semi_momentum_simple_xyz(du,dv,dw) >>> >>> *#include "finclude/petsc.h90"* >>> >>> integer :: i,j,k,ijk,ierr,II !,ro... >>> >>> If I use : >>> >>> *#include "finclude/petsc.h" >>> >>> or >>> * >>> *#include "finclude/petscdmda.h90"* >>> *#include "finclude/petscksp.h90"* >>> >>> Then there is no problem. >>> >>> May I know why this is happening? >>> >>> Although I can now compile and build successfully, is this >>> the right way to go? >>> >>> -- >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Apr 8 06:32:14 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Apr 2014 06:32:14 -0500 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <5343DCF6.9070100@gmail.com> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> Message-ID: On Tue, Apr 8, 2014 at 6:26 AM, TAY wee-beng wrote: > On 8/4/2014 7:58 AM, Matthew Knepley wrote: > > On Mon, Apr 7, 2014 at 6:46 PM, TAY wee-beng wrote: > >> On 8/4/2014 12:40 AM, Matthew Knepley wrote: >> >> How about just including petsc.h90 so you get everything. >> >> Thanks, >> >> Matt >> >> Sorry this is my 2nd email. See below for the 1st email. Using >> petsc.h90 gave the error "... segmentation violation signal raised .... >> > > Right, the solution there is not to use a buggy compiler. I suggest gcc. > It is also faster for a lot of code than Intel. > > > My impression was intel is mostly faster. However, does it apply to > gfortran too? Is it also faster for a lot of code than Intel fortran? I'll > give it a go if it's so. However, I remember changing a no. of options to > build and in the end, it was slower. that's a few yrs ago though. > Anything is faster than a seg-faulting compiler. However, we have just run a number of tests in which gcc was faster. Matt > Matt > > >> On Mon, Apr 7, 2014 at 5:19 AM, TAY wee-beng wrote: >> >>> Sorry I realised that >>> >>> *#include "finclude/petscdmda.h90"* >>> *#include "finclude/petscksp.h90"* >>> >>> also gave errors: >>> >>> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): >>> error #5082: Syntax error, found ',' when expecting one of: ( % [ : . = =>* >>> * PetscInt, pointer :: array(:)* >>> *------------------^* >>> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(10): >>> error #5082: Syntax error, found END-OF-STATEMENT when expecting one of: ) >>> ,* >>> * PetscInt, pointer :: array(:)* >>> *---------------------------------------^* >>> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(11): >>> error #5082: Syntax error, found IDENTIFIER 'N' when expecting one of: ( % >>> [ : . = =>* >>> * PetscInt n* >>> *--------------------^* >>> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(12): >>> error #5082: Syntax error, found IDENTIFIER 'IERR' when expecting one of: ( >>> % [ : . = =>* >>> * PetscErrorCode ierr* >>> *-------------------------^* >>> */home/wtay/Lib/petsc-3.4.4_shared_rel/include/finclude/ftn-custom/petscdmda.h90(13): >>> error #5082: Syntax error, found IDENTIFIER 'V' when expecting one of: ( % >>> [ : . = =>* >>> >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 7/4/2014 6:16 PM, TAY wee-beng wrote: >>> >> >> Here's the 1st email: >> >> Hi, >>> >>> I encountered the error below when compiling my code using intel fortran: >>> >>> /tmp/ifortlPEDlK.i90: catastrophic error: **Internal compiler error: >>> segmentation violation signal raised** Please report this error along with >>> the circumstances in which it occurred in a Software Problem Report. Note: >>> File and line given may not be explicit cause of this error. >>> >>> In the end, I realised that it is due to using *petsc.h90*: >>> >>> module PETSc_solvers >>> >>> use set_matrix >>> >>> ... 
>>> >>> implicit none >>> >>> contains >>> >>> subroutine semi_momentum_simple_xyz(du,dv,dw) >>> >>> *#include "finclude/petsc.h90"* >>> >>> integer :: i,j,k,ijk,ierr,II !,ro... >>> >>> If I use : >>> >>> >>> >>> >>> *#include "finclude/petsc.h" or * >>> *#include "finclude/petscdmda.h90"* >>> *#include "finclude/petscksp.h90"* >>> >>> Then there is no problem. >>> >>> May I know why this is happening? >>> >>> Although I can now compile and build successfully, is this the right way >>> to go? >>> >>> -- >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 8 06:39:01 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 08 Apr 2014 05:39:01 -0600 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <5343DCF6.9070100@gmail.com> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> Message-ID: <87lhvg125m.fsf@jedbrown.org> TAY wee-beng writes: > My impression was intel is mostly faster. However, does it apply to > gfortran too? Is it also faster for a lot of code than Intel fortran? > I'll give it a go if it's so. However, I remember changing a no. of > options to build and in the end, it was slower. that's a few yrs ago though. It varies, but usually not by a large factor and a broken compiler takes infinitely long to produce correct answers. Report the bug to Intel, use gfortran (latest version), and get on with your business. If/when Intel fixes the bug, if you still have a license, try ifort again. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From zonexo at gmail.com Tue Apr 8 06:55:49 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 08 Apr 2014 19:55:49 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <87lhvg125m.fsf@jedbrown.org> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> Message-ID: <5343E3C5.9080207@gmail.com> On 8/4/2014 7:39 PM, Jed Brown wrote: > TAY wee-beng writes: >> My impression was intel is mostly faster. However, does it apply to >> gfortran too? Is it also faster for a lot of code than Intel fortran? >> I'll give it a go if it's so. However, I remember changing a no. of >> options to build and in the end, it was slower. that's a few yrs ago though. > It varies, but usually not by a large factor and a broken compiler takes > infinitely long to produce correct answers. Report the bug to Intel, > use gfortran (latest version), and get on with your business. If/when > Intel fixes the bug, if you still have a license, try ifort again. 
Actually in my original email, I mentioned that by using petsc.h instead of petsc.h90, I was able to compile and build successfully. Running it seems to be fine. I will try gfortran when I have the time and report my results. Thanks! From bsmith at mcs.anl.gov Tue Apr 8 07:04:05 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 8 Apr 2014 07:04:05 -0500 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <87lhvg125m.fsf@jedbrown.org> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> Message-ID: <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> You should never get yourself in a position where you ?have to? use a particular compiler. Strive to have portable makefiles that don?t depend on the compiler (with PETSc makefiles this is easy) and to have portable code that doesn?t depend on the compiler. Then switching between compilers takes literally a couple minutes. It is extremely counter productive to do nothing for days because the compiler you are using doesn?t work under a particular circumstance. It really isn?t hard to set things up so changing compilers is easy. Barry On Apr 8, 2014, at 6:39 AM, Jed Brown wrote: > TAY wee-beng writes: >> My impression was intel is mostly faster. However, does it apply to >> gfortran too? Is it also faster for a lot of code than Intel fortran? >> I'll give it a go if it's so. However, I remember changing a no. of >> options to build and in the end, it was slower. that's a few yrs ago though. > > It varies, but usually not by a large factor and a broken compiler takes > infinitely long to produce correct answers. Report the bug to Intel, > use gfortran (latest version), and get on with your business. If/when > Intel fixes the bug, if you still have a license, try ifort again. From zonexo at gmail.com Tue Apr 8 07:15:13 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 08 Apr 2014 20:15:13 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> Message-ID: <5343E851.9040707@gmail.com> Hi Barry, Thanks for the advice. It took me a while to compile and build successfully with gfortran due to the stricter rules and entirely different options. But the most problematic thing was that most clusters I work with use old versions of gcc/gfortran. With the gcc/gfortran tied to the MPI, it seems that I can't just update myself to use the new version, or can I? Also, can anyone recommend options to get optimized results in gfortran? I'm using : -fno-signed-zeros -fno-trapping-math -ffast-math -march=native -funroll-loops -ffree-line-length-none -O3 Thank you. Yours sincerely, TAY wee-beng On 8/4/2014 8:04 PM, Barry Smith wrote: > You should never get yourself in a position where you ?have to? use a particular compiler. Strive to have portable makefiles that don?t depend on the compiler (with PETSc makefiles this is easy) and to have portable code that doesn?t depend on the compiler. Then switching between compilers takes literally a couple minutes. It is extremely counter productive to do nothing for days because the compiler you are using doesn?t work under a particular circumstance. 
It really isn?t hard to set things up so changing compilers is easy. > > Barry > > > On Apr 8, 2014, at 6:39 AM, Jed Brown wrote: > >> TAY wee-beng writes: >>> My impression was intel is mostly faster. However, does it apply to >>> gfortran too? Is it also faster for a lot of code than Intel fortran? >>> I'll give it a go if it's so. However, I remember changing a no. of >>> options to build and in the end, it was slower. that's a few yrs ago though. >> It varies, but usually not by a large factor and a broken compiler takes >> infinitely long to produce correct answers. Report the bug to Intel, >> use gfortran (latest version), and get on with your business. If/when >> Intel fixes the bug, if you still have a license, try ifort again. From jed at jedbrown.org Tue Apr 8 07:33:09 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 08 Apr 2014 06:33:09 -0600 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <5343E851.9040707@gmail.com> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> <5343E851.9040707@gmail.com> Message-ID: <87d2gs0zne.fsf@jedbrown.org> TAY wee-beng writes: > Thanks for the advice. It took me a while to compile and build > successfully with gfortran due to the stricter rules and entirely > different options. Writing portable standards-compliant code pays off. > But the most problematic thing was that most clusters I work with use > old versions of gcc/gfortran. With the gcc/gfortran tied to the MPI, it > seems that I can't just update myself to use the new version, or can I? They often have modules to access more recent versions. You can also request it. Paying to upgrade commercial compilers while leaving open source compilers at an old version is not good machine maintenance. > Also, can anyone recommend options to get optimized results in gfortran? > > I'm using : > > -fno-signed-zeros -fno-trapping-math -ffast-math -march=native > -funroll-loops -ffree-line-length-none -O3 I usually use this, with or without -ffast-math depending on requirements: -O3 -march=native -ftree-vectorize -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From dharmareddy84 at gmail.com Tue Apr 8 11:31:37 2014 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 8 Apr 2014 11:31:37 -0500 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <5343E851.9040707@gmail.com> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> <5343E851.9040707@gmail.com> Message-ID: Hello wee-beng, What is the version of intel compiler you are using ? I use 13.1.0.146 Build 20130121, i have no issues using petsc.h90 in my fortran code. Reddy On Tue, Apr 8, 2014 at 7:15 AM, TAY wee-beng wrote: > Hi Barry, > > Thanks for the advice. It took me a while to compile and build successfully > with gfortran due to the stricter rules and entirely different options. > > But the most problematic thing was that most clusters I work with use old > versions of gcc/gfortran. With the gcc/gfortran tied to the MPI, it seems > that I can't just update myself to use the new version, or can I? 
> > Also, can anyone recommend options to get optimized results in gfortran? > > I'm using : > > -fno-signed-zeros -fno-trapping-math -ffast-math -march=native > -funroll-loops -ffree-line-length-none -O3 > > Thank you. > > Yours sincerely, > > TAY wee-beng > > > On 8/4/2014 8:04 PM, Barry Smith wrote: >> >> You should never get yourself in a position where you "have to" use a >> particular compiler. Strive to have portable makefiles that don't depend on >> the compiler (with PETSc makefiles this is easy) and to have portable code >> that doesn't depend on the compiler. Then switching between compilers takes >> literally a couple minutes. It is extremely counter productive to do nothing >> for days because the compiler you are using doesn't work under a particular >> circumstance. It really isn't hard to set things up so changing compilers is >> easy. >> >> Barry >> >> >> On Apr 8, 2014, at 6:39 AM, Jed Brown wrote: >> >>> TAY wee-beng writes: >>>> >>>> My impression was intel is mostly faster. However, does it apply to >>>> gfortran too? Is it also faster for a lot of code than Intel fortran? >>>> I'll give it a go if it's so. However, I remember changing a no. of >>>> options to build and in the end, it was slower. that's a few yrs ago >>>> though. >>> >>> It varies, but usually not by a large factor and a broken compiler takes >>> infinitely long to produce correct answers. Report the bug to Intel, >>> use gfortran (latest version), and get on with your business. If/when >>> Intel fixes the bug, if you still have a license, try ifort again. > > From epscodes at gmail.com Tue Apr 8 15:18:33 2014 From: epscodes at gmail.com (Xiangdong) Date: Tue, 8 Apr 2014 16:18:33 -0400 Subject: [petsc-users] SNESGetJacobian issue when added to snes/ex3.c Message-ID: Hello everyone, I have a question about the usage of SNESGetJacobian(). When I added the following three lines to the snes Monitor function in ex3 for viewing the Jacobian matrix: Mat Jmat; ierr = SNESGetJacobian(snes,&Jmat,NULL,NULL,NULL); CHKERRQ(ierr); ierr = MatView(Jamt, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); I got error message complaining: Operation done in wrong order! Must call MatAssemblyBegin/End() before viewing matrix! MatView() line 812 in matrix.c Is the Jmat returned by SNESGetJacobian supposed to be assembled? Even if I add MatAssemblyBegin/End(), it still crashes by reporting argument out of range in MatSetValues_SeqAIJ() or MatSetValues_MPIAIJ(). Any suggestions? Thank you. Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 8 15:44:51 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Apr 2014 15:44:51 -0500 Subject: [petsc-users] SNESGetJacobian issue when added to snes/ex3.c In-Reply-To: References: Message-ID: On Tue, Apr 8, 2014 at 3:18 PM, Xiangdong wrote: > Hello everyone, > > I have a question about the usage of SNESGetJacobian(). > > When I added the following three lines to the snes Monitor function in ex3 > for viewing the Jacobian matrix: > > Mat Jmat; > ierr = SNESGetJacobian(snes,&Jmat,NULL,NULL,NULL); CHKERRQ(ierr); > ierr = MatView(Jamt, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > > I got error message complaining: > Operation done in wrong order! > Must call MatAssemblyBegin/End() before viewing matrix! > MatView() line 812 in matrix.c > > > Is the Jmat returned by SNESGetJacobian supposed to be assembled? > We do not control assembly. You have put values in the Jacobian without assembling. 
> Even if I add MatAssemblyBegin/End(), it still crashes by reporting > argument out of range in MatSetValues_SeqAIJ() or MatSetValues_MPIAIJ(). > Always always always send the full error log. Matt > Any suggestions? > > Thank you. > > Xiangdong > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 8 15:45:58 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 8 Apr 2014 15:45:58 -0500 Subject: [petsc-users] SNESGetJacobian issue when added to snes/ex3.c In-Reply-To: References: Message-ID: <3122A913-7D48-48EB-A6EF-EEAC1473D0ED@mcs.anl.gov> The first monitor is called BEFORE the Jacobian is ever created. So skip mucking with the jacobian in the 0th iteration of SNES, use the iteration argument in your SNES monitor routine. BTW: there is a lot of built in viewing of PETSc matrices you can do without writing your own monitor. For example in the master development of PETSc you can use -mat_view viewer type:filename:format for example -mat_view prints the values to the screen in ASCII (use only for tiny matrices). -mat_view draw draws the nonzero pattern -mat_view binary:filename saves the matrix in binary format to the file filename. Barry On Apr 8, 2014, at 3:18 PM, Xiangdong wrote: > Hello everyone, > > I have a question about the usage of SNESGetJacobian(). > > When I added the following three lines to the snes Monitor function in ex3 for viewing the Jacobian matrix: > > Mat Jmat; > ierr = SNESGetJacobian(snes,&Jmat,NULL,NULL,NULL); CHKERRQ(ierr); > ierr = MatView(Jamt, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > > I got error message complaining: > Operation done in wrong order! > Must call MatAssemblyBegin/End() before viewing matrix! > MatView() line 812 in matrix.c > > > Is the Jmat returned by SNESGetJacobian supposed to be assembled? > > Even if I add MatAssemblyBegin/End(), it still crashes by reporting argument out of range in MatSetValues_SeqAIJ() or MatSetValues_MPIAIJ(). > > Any suggestions? > > Thank you. > > Xiangdong > > From epscodes at gmail.com Tue Apr 8 19:55:11 2014 From: epscodes at gmail.com (Xiangdong) Date: Tue, 8 Apr 2014 20:55:11 -0400 Subject: [petsc-users] SNESGetJacobian issue when added to snes/ex3.c In-Reply-To: <3122A913-7D48-48EB-A6EF-EEAC1473D0ED@mcs.anl.gov> References: <3122A913-7D48-48EB-A6EF-EEAC1473D0ED@mcs.anl.gov> Message-ID: On Tue, Apr 8, 2014 at 4:45 PM, Barry Smith wrote: > > The first monitor is called BEFORE the Jacobian is ever created. So > skip mucking with the jacobian in the 0th iteration of SNES, use the > iteration argument in your SNES monitor routine. > Yes, by adding the condition iter>0 fixed the matrix unassemble problem. Thanks a lot for mentioning the -mat_view options. Xiangdong > > BTW: there is a lot of built in viewing of PETSc matrices you can do > without writing your own monitor. For example in the master development of > PETSc you can use > > -mat_view viewer type:filename:format > > for example -mat_view prints the values to the screen in ASCII (use only > for tiny matrices). -mat_view draw draws the nonzero pattern -mat_view > binary:filename saves the matrix in binary format to the file filename. > > Barry > > On Apr 8, 2014, at 3:18 PM, Xiangdong wrote: > > > Hello everyone, > > > > I have a question about the usage of SNESGetJacobian(). 
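To make Barry's suggestion concrete, here is a minimal sketch of such a monitor. It is not code from this thread: MyJacobianMonitor is a hypothetical name, and it assumes the monitor has been registered with SNESMonitorSet() so that the iteration number arrives as the second argument.

#include <petscsnes.h>

/* Sketch: view the Jacobian only once it exists, i.e. skip the 0th
   iteration, where the matrix has not yet been computed/assembled. */
PetscErrorCode MyJacobianMonitor(SNES snes, PetscInt its, PetscReal fnorm, void *ctx)
{
  Mat            J;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  if (its > 0) {
    ierr = SNESGetJacobian(snes,&J,NULL,NULL,NULL);CHKERRQ(ierr);
    ierr = MatView(J,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

/* registration, e.g. after SNESSetFromOptions():
   ierr = SNESMonitorSet(snes,MyJacobianMonitor,NULL,NULL);CHKERRQ(ierr); */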
> > > > When I added the following three lines to the snes Monitor function in > ex3 for viewing the Jacobian matrix: > > > > Mat Jmat; > > ierr = SNESGetJacobian(snes,&Jmat,NULL,NULL,NULL); CHKERRQ(ierr); > > ierr = MatView(Jamt, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > > > > I got error message complaining: > > Operation done in wrong order! > > Must call MatAssemblyBegin/End() before viewing matrix! > > MatView() line 812 in matrix.c > > > > > > Is the Jmat returned by SNESGetJacobian supposed to be assembled? > > > > Even if I add MatAssemblyBegin/End(), it still crashes by reporting > argument out of range in MatSetValues_SeqAIJ() or MatSetValues_MPIAIJ(). > > > > Any suggestions? > > > > Thank you. > > > > Xiangdong > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Apr 8 20:18:56 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 09 Apr 2014 09:18:56 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> <5343E851.9040707@gmail.com> Message-ID: <5344A000.1030908@gmail.com> On 9/4/2014 12:31 AM, Dharmendar Reddy wrote: > Hello wee-beng, > What is the version of intel compiler you > are using ? I use 13.1.0.146 Build 20130121, i have no issues using > petsc.h90 in my fortran code. > > Reddy Hi Reddy, Thanks for the help. I was also using petsc.h90 w/o problem until I added some DM functions into my code. I'm using ifort 11 and 12. Maybe it's fixed in 13 but my cluster doesn't have it. Anyway, I can circumvent the error by using petsc.h instead. Regards. > > On Tue, Apr 8, 2014 at 7:15 AM, TAY wee-beng wrote: >> Hi Barry, >> >> Thanks for the advice. It took me a while to compile and build successfully >> with gfortran due to the stricter rules and entirely different options. >> >> But the most problematic thing was that most clusters I work with use old >> versions of gcc/gfortran. With the gcc/gfortran tied to the MPI, it seems >> that I can't just update myself to use the new version, or can I? >> >> Also, can anyone recommend options to get optimized results in gfortran? >> >> I'm using : >> >> -fno-signed-zeros -fno-trapping-math -ffast-math -march=native >> -funroll-loops -ffree-line-length-none -O3 >> >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> >> On 8/4/2014 8:04 PM, Barry Smith wrote: >>> You should never get yourself in a position where you "have to" use a >>> particular compiler. Strive to have portable makefiles that don't depend on >>> the compiler (with PETSc makefiles this is easy) and to have portable code >>> that doesn't depend on the compiler. Then switching between compilers takes >>> literally a couple minutes. It is extremely counter productive to do nothing >>> for days because the compiler you are using doesn't work under a particular >>> circumstance. It really isn't hard to set things up so changing compilers is >>> easy. >>> >>> Barry >>> >>> >>> On Apr 8, 2014, at 6:39 AM, Jed Brown wrote: >>> >>>> TAY wee-beng writes: >>>>> My impression was intel is mostly faster. However, does it apply to >>>>> gfortran too? Is it also faster for a lot of code than Intel fortran? >>>>> I'll give it a go if it's so. However, I remember changing a no. of >>>>> options to build and in the end, it was slower. 
that's a few yrs ago >>>>> though. >>>> It varies, but usually not by a large factor and a broken compiler takes >>>> infinitely long to produce correct answers. Report the bug to Intel, >>>> use gfortran (latest version), and get on with your business. If/when >>>> Intel fixes the bug, if you still have a license, try ifort again. >> From asmund.ervik at ntnu.no Wed Apr 9 02:17:34 2014 From: asmund.ervik at ntnu.no (=?ISO-8859-1?Q?=C5smund_Ervik?=) Date: Wed, 09 Apr 2014 09:17:34 +0200 Subject: [petsc-users] Intel Internal compiler error: segmentation violation, signal raised In-Reply-To: References: Message-ID: <5344F40E.80504@ntnu.no> > From: TAY wee-beng To: Barry Smith > , Jed Brown Cc: PETSc list > Subject: Re: [petsc-users] Intel Internal > compiler error: segmentation violation signal raised > > Hi Barry, ... > > Also, can anyone recommend options to get optimized results in > gfortran? Hi all, Just my 2 cents: we're using -O3 -march=native -fno-protect-parens -fstack-arrays This is supposed to mimick the behaviour of ifort with "-O3 -xHost". Be aware that using "-ffast-math" can be a little dangerous in some situations, so unless this option gives your code a large speedup I would personally remove it. For gfortran version 4.8.1 you also have to add -fno-predictive-commoning which is due to a bug that is fixed in 4.8.2 I would also add that actively using more than one compiler (we currently use pgf90, ifort, gfortran) together with nightly regression testing (using Jenkins) has helped us in discovering bugs much faster than if we were just using one compiler. We've seen code that will segfault when compiled with ifort and run just fine with gfortran, and vice versa. Same goes for debug vs. optim builds, we do a full set of both for our nightly testing. Regards, ?smund > > I'm using : > > -fno-signed-zeros -fno-trapping-math -ffast-math -march=native > -funroll-loops -ffree-line-length-none -O3 > > Thank you. > > Yours sincerely, > > TAY wee-beng > >> >> > > > ------------------------------ > > _______________________________________________ petsc-users mailing > list petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 64, Issue 21 > ******************************************* > From epscodes at gmail.com Wed Apr 9 10:38:29 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 9 Apr 2014 11:38:29 -0400 Subject: [petsc-users] SNESFunction undefined reference Message-ID: Hello everyone, When add this line to src/ex3.c SNESFunction(snes,x,fval,&ctx); The compiler complains that undefined reference to `SNESFunction'. However, all other snes function calls in ex3 work well. Why is it special for SNESFunction? Any suggestions to fix this problem? Thank you. Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 9 10:40:09 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Apr 2014 10:40:09 -0500 Subject: [petsc-users] SNESFunction undefined reference In-Reply-To: References: Message-ID: On Wed, Apr 9, 2014 at 10:38 AM, Xiangdong wrote: > Hello everyone, > > When add this line to src/ex3.c > > SNESFunction(snes,x,fval,&ctx); > > The compiler complains that undefined reference to `SNESFunction'. > > However, all other snes function calls in ex3 work well. Why is it special > for SNESFunction? > > Any suggestions to fix this problem? > Do you want SNESComputeFunction() ? Matt > Thank you. 
> > Xiangdong > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 9 10:57:05 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Apr 2014 10:57:05 -0500 Subject: [petsc-users] ex12 with Neumann BC In-Reply-To: <525804C2.3000101@avignon.inra.fr> References: <524430F2.2020507@avignon.inra.fr> <524D3D5D.5090301@avignon.inra.fr> <525804C2.3000101@avignon.inra.fr> Message-ID: On Fri, Oct 11, 2013 at 9:01 AM, Olivier Bonnefon < olivier.bonnefon at avignon.inra.fr> wrote: > Hello, > > I now able to simulate a two fields non-linear system with fem, and I get > a good speed up with Dirichlet BC. Thanks for your help. > > About Neumann BC, I need to add "-ksp_type fgmres", else the line search > failed (even with -snes_mf). > More over, the Neumann version works only in sequential. Indeed, in > parallel, the boundary conditions are applied inside the domain. I guess it > is because the label are automatically add after the mesh partitioning. > This problem doesn't appear in ex12 because there is a test in the plugin > function, but it is not possible to do the same thing with a complex shape. > I don't know what it must be done. > I am finally fixing this stuff. First, you can either have the boundary automatically marked in serial, and then distribute. Or you can mark the boundary yourself, either in serial or parallel. > In Neumann case, the 1D-integral apears only in the residual part (ie the > right side hand of the system). Is it done in the > FEMIntegrateBdResidualBatch function ? > I put in all the hooks for this computation, but I think it would only contribute for Robin conditions. Are I wrong here? Thanks, Matt > Thanks > Olivier B > > > > > On 10/10/2013 02:37 AM, Matthew Knepley wrote: > > On Thu, Oct 3, 2013 at 4:48 AM, Olivier Bonnefon < > olivier.bonnefon at avignon.inra.fr> wrote: > >> Hello, >> >> Thank you for your answer, I'm now able to run the ex12.c with Neumann BC >> (options -bc_type neumann -interpolate 1). >> >> I have adapted the ex12.c for the 2D-system: >> >> -\Delta u +u =0 >> >> It consists in adapting the fem->f0Funcs[0] function and adding the >> jacobian function fem->g0Funcs[0]. >> My implementation works for Dirichlet BC. >> With Neumann BC(with options -bc_type neumann -interpolate 1), the line >> search failed. I think my jacobian functions are corrects, because the >> option "-snes_mf_operator" leads to the same behavior. >> > > Sorry this took me a long time. Do you mean that it converges with > -snes_mf? > > >> Do you know what I have missed ? >> In Neumann case, Where is added the 1d-integral along \delta \Omega ? >> > > You are correct that I am not doing the correct integral for the > Jacobian. I will put it in soon. That > is why it should work with the FD approximation since the residual is > correct. > > Thanks, > > Matt > > >> Thanks, >> Olivier Bonnefon >> >> >> >> >> >> >> >> On 09/26/2013 06:54 PM, Matthew Knepley wrote: >> >> On Thu, Sep 26, 2013 at 6:04 AM, Olivier Bonnefon < >> olivier.bonnefon at avignon.inra.fr> wrote: >> >>> Hello, >>> >>> I have implemented my own system from ex12.c. It works with Dirichlet >>> BC, but failed with Neumann one. 
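As an aside on the boundary term discussed above: for -\Delta u + u = f with the Neumann condition \partial u/\partial n = g on \partial\Omega, multiplying by a test function v and integrating by parts gives the weak form

\int_\Omega (\nabla v \cdot \nabla u + v u) dx = \int_\Omega v f dx + \int_{\partial\Omega} v g ds

so the boundary integral enters only the residual (the right-hand side) as long as g does not depend on u; it contributes to the Jacobian only for Robin-type conditions where g = g(u), which is the distinction being drawn above.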
>>> >>> So, I'm came back to the example /src/snes/example/tutorial/ex12.c, and >>> I tried with Neumann BC: >>> >>> ./ex12 -bc_type NEUMANN >>> >> >> Here is the full list of tests I run (just checked that it passes in >> 'next'): >> >> >> https://bitbucket.org/petsc/petsc/src/f34a81fe8510aa025c9247a5b14f0fe30e3c0bed/config/builder.py?at=master#cl-175 >> >> Make sure you use an interpolated mesh with Neumann conditions since >> you need faces. >> >> Matt >> >> >>> This leads to the following crach: >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: No support for this operation for this object type! >>> [0]PETSC ERROR: Unsupported number of vertices 0 in cell 8 for element >>> geometry computation! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: ./ex12 on a arch-linux2-c-debug named pcbiom38 by >>> olivierb Thu Sep 26 14:53:32 2013 >>> [0]PETSC ERROR: Libraries linked from >>> /home/olivierb/SOFT/petsc-3.4.2/arch-linux2-c-debug/lib >>> [0]PETSC ERROR: Configure run at Thu Sep 26 14:44:42 2013 >>> [0]PETSC ERROR: Configure options --with-debugging=1 --download-fiat >>> --download-scientificpython --download-generator --download-triangle >>> --download-ctetgen --download-chaco --download-netcdf --download-hdf5 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: DMPlexComputeCellGeometry() line 732 in >>> /home/olivierb/SOFT/petsc-3.4.2/src/dm/impls/plex/plexgeometry.c >>> [0]PETSC ERROR: DMPlexComputeResidualFEM() line 558 in >>> /home/olivierb/SOFT/petsc-3.4.2/src/dm/impls/plex/plexfem.c >>> [0]PETSC ERROR: SNESComputeFunction_DMLocal() line 75 in >>> /home/olivierb/SOFT/petsc-3.4.2/src/snes/utils/dmlocalsnes.c >>> [0]PETSC ERROR: SNESComputeFunction() line 1988 in >>> /home/olivierb/SOFT/petsc-3.4.2/src/snes/interface/snes.c >>> [0]PETSC ERROR: SNESSolve_NEWTONLS() line 162 in >>> /home/olivierb/SOFT/petsc-3.4.2/src/snes/impls/ls/ls.c >>> [0]PETSC ERROR: SNESSolve() line 3636 in >>> /home/olivierb/SOFT/petsc-3.4.2/src/snes/interface/snes.c >>> [0]PETSC ERROR: main() line 582 in >>> "unknowndirectory/"/home/olivierb/solvers/trunk/SandBox/PETSC/LANDSCAPE/REF/ex12.c >>> >>> -------------------------------------------------------------------------- >>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >>> with errorcode 56. >>> >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >>> You may or may not see output from other processes, depending on >>> exactly when Open MPI kills them. >>> >>> -------------------------------------------------------------------------- >>> >>> With Gbd, I saw that DMPlexGetConeSize is 0 for the last point. >>> >>> Do I have forget a step to use Neumann BC ? >>> >>> Thanks >>> Olivier Bonnefon >>> >>> -- >>> Olivier Bonnefon >>> INRA PACA-Avignon, Unit? 
BioSP >>> Tel: +33 (0)4 32 72 21 58 <%2B33%20%280%294%2032%2072%2021%2058> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> >> -- >> Olivier Bonnefon >> INRA PACA-Avignon, Unit? BioSP >> Tel: +33 (0)4 32 72 21 58 >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > Olivier Bonnefon > INRA PACA-Avignon, Unit? BioSP > Tel: +33 (0)4 32 72 21 58 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Wed Apr 9 11:37:25 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 9 Apr 2014 12:37:25 -0400 Subject: [petsc-users] SNESFunction undefined reference In-Reply-To: References: Message-ID: On Wed, Apr 9, 2014 at 11:40 AM, Matthew Knepley wrote: > On Wed, Apr 9, 2014 at 10:38 AM, Xiangdong wrote: > >> Hello everyone, >> >> When add this line to src/ex3.c >> >> SNESFunction(snes,x,fval,&ctx); >> >> The compiler complains that undefined reference to `SNESFunction'. >> >> However, all other snes function calls in ex3 work well. Why is it >> special for SNESFunction? >> >> Any suggestions to fix this problem? >> > > Do you want SNESComputeFunction() ? > Yes, SNESComputeFunction() works. It seems that SNESFunction() can takes one more input argument ctx. Is this function disabled in current release or just a bug in the header file? Thanks. Xiangdong > > Matt > > >> Thank you. >> >> Xiangdong >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 9 11:39:07 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Apr 2014 11:39:07 -0500 Subject: [petsc-users] SNESFunction undefined reference In-Reply-To: References: Message-ID: On Wed, Apr 9, 2014 at 11:37 AM, Xiangdong wrote: > > On Wed, Apr 9, 2014 at 11:40 AM, Matthew Knepley wrote: > >> On Wed, Apr 9, 2014 at 10:38 AM, Xiangdong wrote: >> >>> Hello everyone, >>> >>> When add this line to src/ex3.c >>> >>> SNESFunction(snes,x,fval,&ctx); >>> >>> The compiler complains that undefined reference to `SNESFunction'. >>> >>> However, all other snes function calls in ex3 work well. Why is it >>> special for SNESFunction? >>> >>> Any suggestions to fix this problem? >>> >> >> Do you want SNESComputeFunction() ? >> > > Yes, SNESComputeFunction() works. > > It seems that SNESFunction() can takes one more input argument ctx. Is > this function disabled in current release or just a bug in the header file? > I do not see it in the header. Matt > Thanks. > > Xiangdong > > > >> >> Matt >> >> >>> Thank you. >>> >>> Xiangdong >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 9 13:15:41 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 9 Apr 2014 13:15:41 -0500 Subject: [petsc-users] SNESFunction undefined reference In-Reply-To: References: Message-ID: SNESFunction is not a function. What do you think it is suppose to compute? You provide a function for evaluating the nonlinear function with SNESSetFunction(). Barry On Apr 9, 2014, at 11:37 AM, Xiangdong wrote: > > > > On Wed, Apr 9, 2014 at 11:40 AM, Matthew Knepley wrote: > On Wed, Apr 9, 2014 at 10:38 AM, Xiangdong wrote: > Hello everyone, > > When add this line to src/ex3.c > > SNESFunction(snes,x,fval,&ctx); > > The compiler complains that undefined reference to `SNESFunction'. > > However, all other snes function calls in ex3 work well. Why is it special for SNESFunction? > > Any suggestions to fix this problem? > > Do you want SNESComputeFunction() ? > > Yes, SNESComputeFunction() works. > > It seems that SNESFunction() can takes one more input argument ctx. Is this function disabled in current release or just a bug in the header file? > > Thanks. > > Xiangdong > > > > Matt > > Thank you. > > Xiangdong > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From zonexo at gmail.com Wed Apr 9 20:01:47 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 10 Apr 2014 09:01:47 +0800 Subject: [petsc-users] Intel Internal compiler error: segmentation violation signal raised In-Reply-To: <87d2gs0zne.fsf@jedbrown.org> References: <53427B12.7030307@gmail.com> <53427BAD.3090201@gmail.com> <534338F0.8060108@gmail.com> <5343DCF6.9070100@gmail.com> <87lhvg125m.fsf@jedbrown.org> <7FD14189-C301-42F4-885A-69AC0B797742@mcs.anl.gov> <5343E851.9040707@gmail.com> <87d2gs0zne.fsf@jedbrown.org> Message-ID: <5345ED7B.8090601@gmail.com> Hi all, Thank you for the gfortran options. I will try them out! Yours sincerely, TAY wee-beng On 8/4/2014 8:33 PM, Jed Brown wrote: > TAY wee-beng writes: > >> Thanks for the advice. It took me a while to compile and build >> successfully with gfortran due to the stricter rules and entirely >> different options. > Writing portable standards-compliant code pays off. > >> But the most problematic thing was that most clusters I work with use >> old versions of gcc/gfortran. With the gcc/gfortran tied to the MPI, it >> seems that I can't just update myself to use the new version, or can I? > They often have modules to access more recent versions. You can also > request it. Paying to upgrade commercial compilers while leaving open > source compilers at an old version is not good machine maintenance. > >> Also, can anyone recommend options to get optimized results in gfortran? 
>> >> I'm using : >> >> -fno-signed-zeros -fno-trapping-math -ffast-math -march=native >> -funroll-loops -ffree-line-length-none -O3 > I usually use this, with or without -ffast-math depending on requirements: > > -O3 -march=native -ftree-vectorize From epscodes at gmail.com Thu Apr 10 07:12:57 2014 From: epscodes at gmail.com (Xiangdong) Date: Thu, 10 Apr 2014 08:12:57 -0400 Subject: [petsc-users] SNESFunction undefined reference In-Reply-To: References: Message-ID: I thought it has the same functionality as SNESComputeFunction, but can take one more argument ctx. >From the manual pages, SNESFunction and SNESComputeFunction are very similar: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESFunction.html http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeFunction.html Thanks. Xiangdong On Wed, Apr 9, 2014 at 2:15 PM, Barry Smith wrote: > > SNESFunction is not a function. What do you think it is suppose to > compute? You provide a function for evaluating the nonlinear function with > SNESSetFunction(). > > Barry > > On Apr 9, 2014, at 11:37 AM, Xiangdong wrote: > > > > > > > > > On Wed, Apr 9, 2014 at 11:40 AM, Matthew Knepley > wrote: > > On Wed, Apr 9, 2014 at 10:38 AM, Xiangdong wrote: > > Hello everyone, > > > > When add this line to src/ex3.c > > > > SNESFunction(snes,x,fval,&ctx); > > > > The compiler complains that undefined reference to `SNESFunction'. > > > > However, all other snes function calls in ex3 work well. Why is it > special for SNESFunction? > > > > Any suggestions to fix this problem? > > > > Do you want SNESComputeFunction() ? > > > > Yes, SNESComputeFunction() works. > > > > It seems that SNESFunction() can takes one more input argument ctx. Is > this function disabled in current release or just a bug in the header file? > > > > Thanks. > > > > Xiangdong > > > > > > > > Matt > > > > Thank you. > > > > Xiangdong > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 10 07:16:17 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Apr 2014 07:16:17 -0500 Subject: [petsc-users] SNESFunction undefined reference In-Reply-To: References: Message-ID: On Thu, Apr 10, 2014 at 7:12 AM, Xiangdong wrote: > I thought it has the same functionality as SNESComputeFunction, but can > take one more argument ctx. > > From the manual pages, SNESFunction and SNESComputeFunction are very > similar: > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESFunction.html > This is a manpage for a typedef. We have removed the typedef in the next release to avoid confusion. Matt > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeFunction.html > > Thanks. > > Xiangdong > > > > On Wed, Apr 9, 2014 at 2:15 PM, Barry Smith wrote: > >> >> SNESFunction is not a function. What do you think it is suppose to >> compute? You provide a function for evaluating the nonlinear function with >> SNESSetFunction(). 
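To spell out the pattern Barry describes, here is a minimal sketch; it is not code from this thread, and FormFunction and AppCtx are hypothetical names. The SNESFunction manual page only documents the shape of the callback you register; the extra ctx argument is the pointer supplied once, at registration time, and SNESComputeFunction() hands it back to the callback for you.

typedef struct { PetscReal param; } AppCtx;   /* hypothetical user context */

PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  AppCtx         *user = (AppCtx*)ctx;   /* the pointer given to SNESSetFunction() */
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCopy(x,f);CHKERRQ(ierr);             /* placeholder residual F(x) = param*x */
  ierr = VecScale(f,user->param);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* registration and evaluation:
   ierr = SNESSetFunction(snes,r,FormFunction,&user);CHKERRQ(ierr);
   ierr = SNESComputeFunction(snes,x,fval);CHKERRQ(ierr);   evaluates FormFunction with &user */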
>> >> Barry >> >> On Apr 9, 2014, at 11:37 AM, Xiangdong wrote: >> >> > >> > >> > >> > On Wed, Apr 9, 2014 at 11:40 AM, Matthew Knepley >> wrote: >> > On Wed, Apr 9, 2014 at 10:38 AM, Xiangdong wrote: >> > Hello everyone, >> > >> > When add this line to src/ex3.c >> > >> > SNESFunction(snes,x,fval,&ctx); >> > >> > The compiler complains that undefined reference to `SNESFunction'. >> > >> > However, all other snes function calls in ex3 work well. Why is it >> special for SNESFunction? >> > >> > Any suggestions to fix this problem? >> > >> > Do you want SNESComputeFunction() ? >> > >> > Yes, SNESComputeFunction() works. >> > >> > It seems that SNESFunction() can takes one more input argument ctx. Is >> this function disabled in current release or just a bug in the header file? >> > >> > Thanks. >> > >> > Xiangdong >> > >> > >> > >> > Matt >> > >> > Thank you. >> > >> > Xiangdong >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjm2176 at columbia.edu Thu Apr 10 18:07:06 2014 From: cjm2176 at columbia.edu (Colin J McAuliffe) Date: Thu, 10 Apr 2014 19:07:06 -0400 Subject: [petsc-users] Metis ordering for lu Message-ID: Dear all, is it possible to use metis as a fill reducing ordering for lu in petsc? Also, would the native nested dissection ordering be comparable to metis which also employs nd? Thanks and all the best! Colin -- Colin McAuliffe, PhD Postdoctoral Research Scientist Columbia University Department of Civil Engineering and Engineering Mechanics -------------- next part -------------- An HTML attachment was scrubbed... URL: From stoneszone at gmail.com Fri Apr 11 03:26:09 2014 From: stoneszone at gmail.com (Lei Shi) Date: Fri, 11 Apr 2014 03:26:09 -0500 Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust and cusp Message-ID: Hi guys, I tried to compile petsc-dev with cuda5, but failed. I think the problem is the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust and cusp. Please help me out. 
here is my option file configure_options = [ '--with-shared-libraries=0', '--with-mpi-dir=/media/public/MPICH3/mpich-install/', '--with-cuda=1', '--with-cuda-dir=/usr/local/cuda', '--with-cudac=/usr/local/cuda/bin/nvcc', '--with-cuda-arch=sm_20', '--with-cusp-dir=/usr/local/cuda/include/cusp', '--with-thrust=1', '--with-cusp=1', '--with-debugging=0', '--with-precision=double', 'COPTFLAGS=-O3', 'CXXOPTFLAGS=-O3', 'FOPTFLAGS=-O3', ] and the error i got ------------------------------------------------------------------------------------------- =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: alternateConfigureLibrary from PETSc.packages.petsc4py(config/PETSc/packages/petsc4py.py:65) \ Compilers: C Compiler: /media/public/MPICH3/mpich-install/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 CUDA Compiler: /usr/local/cuda/bin/nvcc -O -arch=sm_20 C++ Compiler: /media/public/MPICH3/mpich-install/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 Linkers: Static linker: /usr/bin/ar cr Dynamic linker: /usr/bin/ar make: MPI: Includes: -I/media/public/MPICH3/mpich-install/include BLAS/LAPACK: -llapack -lblas X: Library: -lX11 pthread: Library: -lpthread valgrind: cuda: Includes: -I/usr/local/cuda/include Library: -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda/lib64 -lcufft -lcublas -lcudart -lcusparse Arch: -arch=sm_20 cusp: Includes: -I/usr/local/cuda/include/cusp/ -I/usr/local/cuda/include/cusp/include thrust: sowing: c2html: PETSc: PETSC_ARCH: arch-cuda5-cg-opt PETSC_DIR: /home/leishi/work/development/3rd_party/petsc/petsc Clanguage: C Memory alignment: 16 Scalar type: real Precision: double shared libraries: disabled xxx=========================================================================xxx Configure stage complete. 
Now build PETSc libraries with (gnumake build): make PETSC_DIR=/home/leishi/work/development/3rd_party/petsc/petsc PETSC_ARCH=arch-cuda5-cg-opt all xxx=========================================================================xxx Using C/C++ compile: /media/public/MPICH3/mpich-install/bin/mpicc -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -I/home/leishi/work/development/3rd_party/pet\ sc/petsc/include -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ -I/usr/local/cuda/include/cus\ p/include -I/media/public/MPICH3/mpich-install/include mpicc -show: gcc -I/media/public/MPICH3/mpich-install/include -L/media/public/MPICH3/mpich-install/lib -lmpich -lopa -lmpl -lrt -lpthread Using CUDA compile: /usr/local/cuda/bin/nvcc -O -arch=sm_20 -c --compiler-options=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -I/home/leishi/work/development/3\ rd_party/petsc/petsc/include -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ -I/usr/local/cuda\ /include/cusp/include -I/media/public/MPICH3/mpich-install/include ----------------------------------------- Using C/C++ linker: /media/public/MPICH3/mpich-install/bin/mpicc Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 ----------------------------------------- Using libraries: -L/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/lib -lpetsc -llapack -lblas -lX11 -lpthread -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda\ /lib64 -lcufft -lcublas -lcudart -lcusparse -lm -L/media/public/MPICH3/mpich-install/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.6 -L/usr/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/usr/\ lib/gcc/x86_64-linux-gnu/4.8 -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl ------------------------------------------ Using mpiexec: /media/public/MPICH3/mpich-install/bin/mpiexec ========================================== Building PETSc using GNU Make with 18 build threads ========================================== make[2]: Entering directory `/home/leishi/work/development/3rd_party/petsc/petsc' Use "/usr/bin/make V=1" to see the verbose compile lines. 
CC arch-cuda5-cg-opt/obj/src/sys/utils/arch.o CC arch-cuda5-cg-opt/obj/src/sys/utils/fhost.o CC arch-cuda5-cg-opt/obj/src/sys/utils/fuser.o CC arch-cuda5-cg-opt/obj/src/sys/utils/memc.o CC arch-cuda5-cg-opt/obj/src/sys/utils/mpiu.o CC arch-cuda5-cg-opt/obj/src/sys/utils/psleep.o CC arch-cuda5-cg-opt/obj/src/sys/utils/sortd.o CC arch-cuda5-cg-opt/obj/src/sys/utils/sorti.o CC arch-cuda5-cg-opt/obj/src/sys/utils/str.o CC arch-cuda5-cg-opt/obj/src/sys/utils/sortip.o CC arch-cuda5-cg-opt/obj/src/sys/utils/pbarrier.o CC arch-cuda5-cg-opt/obj/src/sys/utils/pdisplay.o CC arch-cuda5-cg-opt/obj/src/sys/utils/ctable.o CC arch-cuda5-cg-opt/obj/src/sys/utils/psplit.o CC arch-cuda5-cg-opt/obj/src/sys/utils/select.o CC arch-cuda5-cg-opt/obj/src/sys/utils/mpimesg.o CC arch-cuda5-cg-opt/obj/src/sys/utils/sseenabled.o CC arch-cuda5-cg-opt/obj/src/sys/utils/mpitr.o In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, from /usr/local/cuda/include/cusp/complex.h:63, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, from /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: /usr/local/cuda/include/thrust/version.h:69:1: error: unknown type name ?namespace? /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?{? token In file included from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, from /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No such file or directory compilation terminated. CC arch-cuda5-cg-opt/obj/src/sys/objects/state.o CC arch-cuda5-cg-opt/obj/src/sys/objects/aoptions.o CC arch-cuda5-cg-opt/obj/src/sys/objects/subcomm.o In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, from /usr/local/cuda/include/cusp/complex.h:63, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, from /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: /usr/local/cuda/include/thrust/version.h:69:1: error:* unknown type name ?namespace? * /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?{? token In file included from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, from /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No such file or directory make: *** [all] Error 1 Sincerely Yours, Lei Shi --------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christophe.ortiz at ciemat.es Fri Apr 11 03:42:27 2014 From: christophe.ortiz at ciemat.es (Christophe Ortiz) Date: Fri, 11 Apr 2014 10:42:27 +0200 Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust and cusp (Lei Shi) Message-ID: Hi, Maybe I'm wrong but I think the following options are missing if you want to use C++: --with-cxx=(g++ or icc....) --with-clanguage=cxx Christophe > > Message: 2 > Date: Fri, 11 Apr 2014 03:26:09 -0500 > From: Lei Shi > To: petsc-users at mcs.anl.gov > Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, > mpicc can not recognize c++ key word in thrust and cusp > Message-ID: > V9JJr+8Np5JaOKqgrkS12KJ6TaO3V4wM9YPo-NG29EU-A at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi guys, > > I tried to compile petsc-dev with cuda5, but failed. I think the problem > is the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust > and cusp. Please help me out. here is my option file > > configure_options = [ > > > '--with-shared-libraries=0', > > > '--with-mpi-dir=/media/public/MPICH3/mpich-install/', > > > '--with-cuda=1', > > > '--with-cuda-dir=/usr/local/cuda', > > > '--with-cudac=/usr/local/cuda/bin/nvcc', > > > '--with-cuda-arch=sm_20', > > > '--with-cusp-dir=/usr/local/cuda/include/cusp', > > > '--with-thrust=1', > > > '--with-cusp=1', > > > '--with-debugging=0', > > > '--with-precision=double', > > > 'COPTFLAGS=-O3', > > > 'CXXOPTFLAGS=-O3', > > > 'FOPTFLAGS=-O3', > > > ] > > and the error i got > > ------------------------------------------------------------------------------------------- > > =============================================================================== > > > Configuring PETSc to compile on your system > > > > =============================================================================== > > > TESTING: alternateConfigureLibrary from > PETSc.packages.petsc4py(config/PETSc/packages/petsc4py.py:65) > \ > Compilers: > > > C Compiler: /media/public/MPICH3/mpich-install/bin/mpicc -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 > > CUDA Compiler: /usr/local/cuda/bin/nvcc -O -arch=sm_20 > > > C++ Compiler: /media/public/MPICH3/mpich-install/bin/mpicxx -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 > > Linkers: > > > Static linker: /usr/bin/ar cr > > > Dynamic linker: /usr/bin/ar > > > make: > > > MPI: > > > Includes: -I/media/public/MPICH3/mpich-install/include > > > BLAS/LAPACK: -llapack -lblas > > > X: > > > Library: -lX11 > > > pthread: > > > Library: -lpthread > > > valgrind: > > > cuda: > > > Includes: -I/usr/local/cuda/include > > > Library: -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda/lib64 > -lcufft -lcublas -lcudart -lcusparse > > Arch: -arch=sm_20 > > > cusp: > > > Includes: -I/usr/local/cuda/include/cusp/ > -I/usr/local/cuda/include/cusp/include > > thrust: > > > sowing: > > > c2html: > > > PETSc: > > > PETSC_ARCH: arch-cuda5-cg-opt > > > PETSC_DIR: /home/leishi/work/development/3rd_party/petsc/petsc > > > Clanguage: C > > > Memory alignment: 16 > > > Scalar type: real > > > Precision: double > > > shared libraries: disabled > > > > xxx=========================================================================xxx > > > Configure stage complete. 
Now build PETSc libraries with (gnumake build): > > > make PETSC_DIR=/home/leishi/work/development/3rd_party/petsc/petsc > PETSC_ARCH=arch-cuda5-cg-opt all > > > xxx=========================================================================xxx > > > Using C/C++ compile: /media/public/MPICH3/mpich-install/bin/mpicc -c -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 > -I/home/leishi/work/development/3rd_party/pet\ > sc/petsc/include > > -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include > -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ > -I/usr/local/cuda/include/cus\ > p/include -I/media/public/MPICH3/mpich-install/include > > > mpicc -show: gcc -I/media/public/MPICH3/mpich-install/include > -L/media/public/MPICH3/mpich-install/lib -lmpich -lopa -lmpl -lrt -lpthread > > Using CUDA compile: /usr/local/cuda/bin/nvcc -O -arch=sm_20 -c > --compiler-options=-Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -O3 -I/home/leishi/work/development/3\ > rd_party/petsc/petsc/include > > -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include > -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ > -I/usr/local/cuda\ > /include/cusp/include -I/media/public/MPICH3/mpich-install/include > > > ----------------------------------------- > > > Using C/C++ linker: /media/public/MPICH3/mpich-install/bin/mpicc > > > Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -O3 > > ----------------------------------------- > > > Using libraries: > -L/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/lib > -lpetsc -llapack -lblas -lX11 -lpthread -Wl,-rpath,/usr/local/cuda/lib64 > -L/usr/local/cuda\ > /lib64 -lcufft -lcublas -lcudart -lcusparse -lm > -L/media/public/MPICH3/mpich-install/lib > -L/usr/lib/gcc/x86_64-linux-gnu/4.6 -L/usr/lib/x86_64-linux-gnu > -L/lib/x86_64-linux-gnu -L/usr/\ > lib/gcc/x86_64-linux-gnu/4.8 -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl > -lrt -lpthread -lgcc_s -ldl > > ------------------------------------------ > > > Using mpiexec: /media/public/MPICH3/mpich-install/bin/mpiexec > > > ========================================== > > > Building PETSc using GNU Make with 18 build threads > > > ========================================== > > > make[2]: Entering directory > `/home/leishi/work/development/3rd_party/petsc/petsc' > > > Use "/usr/bin/make V=1" to see the verbose compile lines. 
> > > CC arch-cuda5-cg-opt/obj/src/sys/utils/arch.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/fhost.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/fuser.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/memc.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/mpiu.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/psleep.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/sortd.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/sorti.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/str.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/sortip.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/pbarrier.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/pdisplay.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/ctable.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/psplit.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/select.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/mpimesg.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/sseenabled.o > > > CC arch-cuda5-cg-opt/obj/src/sys/utils/mpitr.o > > > > In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, > > > from /usr/local/cuda/include/cusp/complex.h:63, > > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, > > > from > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: > > > /usr/local/cuda/include/thrust/version.h:69:1: error: unknown type name > ?namespace? > > /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, > ?;?, ?asm? or ?__attribute__? before ?{? token > > In file included from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, > > > from > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: > > > /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No such > file or directory > > compilation terminated. > > > CC arch-cuda5-cg-opt/obj/src/sys/objects/state.o > > > CC arch-cuda5-cg-opt/obj/src/sys/objects/aoptions.o > > > CC arch-cuda5-cg-opt/obj/src/sys/objects/subcomm.o > > > In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, > > > from /usr/local/cuda/include/cusp/complex.h:63, > > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, > > > from > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: > > > /usr/local/cuda/include/thrust/version.h:69:1: error:* unknown type name > ?namespace? * > > /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, > ?;?, ?asm? or ?__attribute__? before ?{? 
token > > In file included from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, > > > from /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, > > from > > /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: > > > /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No such file or directory > > > > make: *** [all] Error 1 > > Sincerely Yours, > > Lei Shi > --------- > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140411/4831a810/attachment.html > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 64, Issue 26 > ******************************************* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spk at ldeo.columbia.edu Fri Apr 11 03:49:44 2014 From: spk at ldeo.columbia.edu (Samar Khatiwala) Date: Fri, 11 Apr 2014 04:49:44 -0400 Subject: [petsc-users] possible performance issues with PETSc on Cray Message-ID: <283192B3-CD15-4118-A688-EB6E500831A6@ldeo.columbia.edu> Hello, This is a somewhat vague query but I and a colleague have been running PETSc (3.4.3.0) on a Cray XC30 in Germany (https://www.hlrn.de/home/view/System3/WebHome) and the system administrators alerted us to some anomalies with our jobs that may or may not be related to PETSc but I thought I'd ask here in case others have noticed something similar. First, there was a large variation in run-time for identical jobs, sometimes as much as 50%. We didn't really pick up on this but other users complained to the IT people that their jobs were taking a performance hit with a similar variation in run-time. At that point we're told the IT folks started monitoring jobs and carrying out tests to see what was going on. They discovered that (1) this always happened when we were running our jobs and (2) the problem got worse with physical proximity to the nodes on which our jobs were running (what they described as a "strong interaction" between our jobs and others presumably through the communication network). If it helps, our jobs essentially do nothing more than a series of (sparse) MatMult and MatMultTranspose operations (many many thousands of times per run). There is very little I/O.
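(As a generic way to pin down where such run-to-run variation shows up, the repeated multiplies can be bracketed in their own logging stage so that -log_summary reports time, flop rates, and MPI message/reduction counts for just that kernel. The sketch below is not from this thread; TimeMultLoop and nsteps are hypothetical names, and A, x, y are assumed to be an already assembled matrix and two conforming vectors created elsewhere.)

#include <petscmat.h>

PetscErrorCode TimeMultLoop(Mat A, Vec x, Vec y, PetscInt nsteps)
{
  PetscLogStage  stage;
  PetscInt       i;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscLogStageRegister("MatMult loop",&stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
  for (i=0; i<nsteps; i++) {
    ierr = MatMult(A,x,y);CHKERRQ(ierr);           /* y = A x  */
    ierr = MatMultTranspose(A,y,x);CHKERRQ(ierr);  /* x = A' y */
  }
  ierr = PetscLogStagePop();CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Comparing that stage in the -log_summary output of otherwise identical jobs makes it easier to see whether the extra time shows up in the VecScatter events (communication) or in the local part of MatMult.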
The code has been running for years on many different systems without there being an issue. The system administrators have been at pains to tell me that they're not pointing fingers at PETSc and at this point we're just trying to explore different possibilities and if someone has come across similar behavior it would help narrow things down. I'm happy to provide further information. Thanks very much! Best, Samar From jed at jedbrown.org Fri Apr 11 06:44:10 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 11 Apr 2014 06:44:10 -0500 Subject: [petsc-users] possible performance issues with PETSc on Cray In-Reply-To: <283192B3-CD15-4118-A688-EB6E500831A6@ldeo.columbia.edu> References: <283192B3-CD15-4118-A688-EB6E500831A6@ldeo.columbia.edu> Message-ID: <8761mgw0ol.fsf@jedbrown.org> Samar Khatiwala writes: > Hello, > > This is a somewhat vague query but I and a colleague have been running PETSc (3.4.3.0) on a Cray > XC30 in Germany (https://www.hlrn.de/home/view/System3/WebHome) and the system administrators > alerted us to some anomalies with our jobs that may or may not be related to PETSc but I thought I'd ask > here in case others have noticed something similar. > > First, there was a large variation in run-time for identical jobs, sometimes as much as 50%. We didn't > really pick up on this but other users complained to the IT people that their jobs were taking a performance > hit with a similar variation in run-time. At that point we're told the IT folks started monitoring jobs and > carrying out tests to see what was going on. They discovered that (1) this always happened when we were > running our jobs and (2) the problem got worse with physical proximity to the nodes on which our jobs were > running (what they described as a "strong interaction" between our jobs and others presumably through the > communication network). It sounds like you are strong scaling (smallish subdomains) so that your application is sensitive to network latency. I see significant performance variability on XC-30 with this Full Multigrid solver that is not using PETSc. http://59a2.org/files/hopper-vs-edison.3semilogx.png See the factor of 2 performance variability for the samples of the ~15M element case. This operation is limited by instruction issue rather than bandwidth (indeed, it is several times faster than doing the same operations with assembled matrices). Here the variability is within the same application performing repeated solves. If you get a different partition on a different run, you can see larger variation. If your matrices are large enough, your performance will be limited by memory bandwidth. (This is the typical case, but sufficiently small matrices can fit in cache.) I once encountered a batch system that did not properly reset nodes between runs, leaving a partially-filled ramdisk distributed asymmetrically across the memory busses. This led to 3x performance reduction on 4-socket nodes because much of the memory demanded by the application would be faulted onto one memory bus. Presumably your machine has a resource manager that would not allow such things to happen. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From spk at ldeo.columbia.edu Fri Apr 11 07:24:04 2014 From: spk at ldeo.columbia.edu (Samar Khatiwala) Date: Fri, 11 Apr 2014 08:24:04 -0400 Subject: [petsc-users] possible performance issues with PETSc on Cray In-Reply-To: <8761mgw0ol.fsf@jedbrown.org> References: <283192B3-CD15-4118-A688-EB6E500831A6@ldeo.columbia.edu> <8761mgw0ol.fsf@jedbrown.org> Message-ID: <0CF9D247-DCAE-463F-AB0A-88A45924B093@ldeo.columbia.edu> Hi Jed, Thanks for the quick reply. This is very helpful. You may well be right that my matrices are not large enough (~2. 5e6 x 2.5e6 and I'm running on 360 cores = 15 nodes x 24 cores/node on this XC-30) and my runs are therefore sensitive to network latency. Would this, though, impact other people running jobs on nearby nodes? (I suppose it would if I'm passing too many messages because of the small size of the matrices.) I'm going to pass on your reply to the system administrators. They will be able to understand the technical content better than I am capable of. Thanks again! Best, Samar On Apr 11, 2014, at 7:44 AM, Jed Brown wrote: > Samar Khatiwala writes: > >> Hello, >> >> This is a somewhat vague query but I and a colleague have been running PETSc (3.4.3.0) on a Cray >> XC30 in Germany (https://www.hlrn.de/home/view/System3/WebHome) and the system administrators >> alerted us to some anomalies with our jobs that may or may not be related to PETSc but I thought I'd ask >> here in case others have noticed something similar. >> >> First, there was a large variation in run-time for identical jobs, sometimes as much as 50%. We didn't >> really pick up on this but other users complained to the IT people that their jobs were taking a performance >> hit with a similar variation in run-time. At that point we're told the IT folks started monitoring jobs and >> carrying out tests to see what was going on. They discovered that (1) this always happened when we were >> running our jobs and (2) the problem got worse with physical proximity to the nodes on which our jobs were >> running (what they described as a "strong interaction" between our jobs and others presumably through the >> communication network). > > It sounds like you are strong scaling (smallish subdomains) so that your > application is sensitive to network latency. I see significant > performance variability on XC-30 with this Full Multigrid solver that is > not using PETSc. > > http://59a2.org/files/hopper-vs-edison.3semilogx.png > > See the factor of 2 performance variability for the samples of the ~15M > element case. This operation is limited by instruction issue rather > than bandwidth (indeed, it is several times faster than doing the same > operations with assembled matrices). Here the variability is within the > same application performing repeated solves. If you get a different > partition on a different run, you can see larger variation. > > If your matrices are large enough, your performance will be limited by > memory bandwidth. (This is the typical case, but sufficiently small > matrices can fit in cache.) I once encountered a batch system that did > not properly reset nodes between runs, leaving a partially-filled > ramdisk distributed asymmetrically across the memory busses. This led > to 3x performance reduction on 4-socket nodes because much of the memory > demanded by the application would be faulted onto one memory bus. 
> Presumably your machine has a resource manager that would not allow such > things to happen. From jed at jedbrown.org Fri Apr 11 07:41:48 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 11 Apr 2014 07:41:48 -0500 Subject: [petsc-users] possible performance issues with PETSc on Cray In-Reply-To: <0CF9D247-DCAE-463F-AB0A-88A45924B093@ldeo.columbia.edu> References: <283192B3-CD15-4118-A688-EB6E500831A6@ldeo.columbia.edu> <8761mgw0ol.fsf@jedbrown.org> <0CF9D247-DCAE-463F-AB0A-88A45924B093@ldeo.columbia.edu> Message-ID: <87wqewujg3.fsf@jedbrown.org> Samar Khatiwala writes: > Hi Jed, > > Thanks for the quick reply. This is very helpful. You may well be right that my matrices are not large enough > (~2. 5e6 x 2.5e6 and I'm running on 360 cores = 15 nodes x 24 cores/node on this XC-30) and my runs are > therefore sensitive to network latency. Would this, though, impact other people running jobs on nearby nodes? > (I suppose it would if I'm passing too many messages because of the small size of the matrices.) It depends on your partition. The Aries network on XC-30 is a high-radix low-diameter network. There should be many routes between nodes, but the routing algorithm likely does not know which wires to avoid. This leads to performance variation, though I think it should tend to be less extreme than when you obtain disconnected partitions on Gemini. The gold standard of reproducible performance is Blue Gene, where the network is reconfigured to give you an isolated 5D torus. A Blue Gene may or may not be available or cost effective (reproducible performance does not imply high performance/efficiency for a given workload). -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From spk at ldeo.columbia.edu Fri Apr 11 10:39:40 2014 From: spk at ldeo.columbia.edu (Samar Khatiwala) Date: Fri, 11 Apr 2014 11:39:40 -0400 Subject: [petsc-users] possible performance issues with PETSc on Cray In-Reply-To: <87wqewujg3.fsf@jedbrown.org> References: <283192B3-CD15-4118-A688-EB6E500831A6@ldeo.columbia.edu> <8761mgw0ol.fsf@jedbrown.org> <0CF9D247-DCAE-463F-AB0A-88A45924B093@ldeo.columbia.edu> <87wqewujg3.fsf@jedbrown.org> Message-ID: <5E4234EB-B3BF-437A-826E-4D83B0AEFDD1@ldeo.columbia.edu> Thanks again Jed. This has definitely helped narrow down the possibilities. Best, Samar On Apr 11, 2014, at 8:41 AM, Jed Brown wrote: > Samar Khatiwala writes: > >> Hi Jed, >> >> Thanks for the quick reply. This is very helpful. You may well be right that my matrices are not large enough >> (~2. 5e6 x 2.5e6 and I'm running on 360 cores = 15 nodes x 24 cores/node on this XC-30) and my runs are >> therefore sensitive to network latency. Would this, though, impact other people running jobs on nearby nodes? >> (I suppose it would if I'm passing too many messages because of the small size of the matrices.) > > It depends on your partition. The Aries network on XC-30 is a > high-radix low-diameter network. There should be many routes between > nodes, but the routing algorithm likely does not know which wires to > avoid. This leads to performance variation, though I think it should > tend to be less extreme than when you obtain disconnected partitions on > Gemini. > > The gold standard of reproducible performance is Blue Gene, where the > network is reconfigured to give you an isolated 5D torus. 
A Blue Gene > may or may not be available or cost effective (reproducible performance > does not imply high performance/efficiency for a given workload).

From gdiso at ustc.edu Sun Apr 13 02:42:57 2014
From: gdiso at ustc.edu (Gong Ding)
Date: Sun, 13 Apr 2014 15:42:57 +0800 (CST)
Subject: [petsc-users] MUMPS error -13 info
Message-ID: <27476983.85891397374977520.JavaMail.coremail@mail.ustc.edu>

Dear Sir,
I see an error:

Transient compute from 0 ps step 0.2 ps to 3000 ps
t = 0.2 ps, dt = 0.2 ps
--------------------------------------------------------------------------------
process particle generation....................ok
Gummel electron equation CONVERGED_ATOL, residual 5.41824e-12, its 4
Gummel hole equation CONVERGED_ATOL, residual 4.6721e-13, its 4
--------------------- Error Message ------------------------------------
Fatal Error:Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 990883164 megabytes
at line 721 in /tmp/build/rhel6-64/build.petsc.3.4.4.859b2b9/src/src/mat/impls/aij/mpi/mumps/mumps.c
------------------------------------------------------------------------

which reported that MUMPS requires 990883164 megabytes of memory. The memory requirement is too huge, so I took a look at the reason.

The code in PETSc is listed below:

if (mumps->id.INFOG(1) < 0) {
  if (mumps->id.INFO(1) == -13) SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory %d megabytes\n",mumps->id.INFO(2));
}

However, the MUMPS user's guide says:

-13 An error occurred in a Fortran ALLOCATE statement. The size that the package requested is available in INFO(2). If INFO(2) is negative, then the size that the package requested is obtained by multiplying the absolute value of INFO(2) by 1 million.

It is clear that 990883164 megabytes should be 990883164 bytes here. Hope this bug can be fixed.

Regards,
Gong Ding

From romain.veltz at inria.fr Sun Apr 13 12:01:07 2014
From: romain.veltz at inria.fr (Veltz Romain)
Date: Sun, 13 Apr 2014 19:01:07 +0200
Subject: [petsc-users] sundials and petsc4py
Message-ID: <9B9BC0D0-6D37-4328-98F9-870730AB7CE0@inria.fr>

Dear users of petsc,

I am totally new to petsc. I have been using trilinos a lot, but compiling C++ code sickens me. That is why I turned to petsc4py. I would like to use the sundials time-stepper for petsc4py, but I don't know how to install it.

For now, and following https://pypi.python.org/pypi/petsc4py/3.4, I use:

pip install petsc==dev petsc4py==dev

But how can I specify to install the sundials part?

Thank you in advance for your help,

Best

Veltz Romain

Neuromathcomp Project Team
Inria Sophia Antipolis Méditerranée
2004 Route des Lucioles-BP 93
FR-06902 Sophia Antipolis
http://www-sop.inria.fr/members/Romain.Veltz/public_html/Home.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hzhang at mcs.anl.gov Sun Apr 13 12:16:42 2014
From: hzhang at mcs.anl.gov (Hong Zhang)
Date: Sun, 13 Apr 2014 12:16:42 -0500
Subject: [petsc-users] MUMPS error -13 info
In-Reply-To:
References:
Message-ID:

Thanks for reporting it. Do you agree with the fix below? If so, I'll patch petsc-release.
Hong $ git diff ../../../../mat/impls/aij/mpi/mumps/mumps.c diff --git a/src/mat/impls/aij/mpi/mumps/mumps.c b/src/mat/impls/aij/mpi/mumps/mumps.c index f07788a..e4b2e44 100644 --- a/src/mat/impls/aij/mpi/mumps/mumps.c +++ b/src/mat/impls/aij/mpi/mumps/mumps.c @@ -729,8 +729,13 @@ PetscErrorCode MatFactorNumeric_MUMPS(Mat F,Mat A,const MatFactorInfo *info) } PetscMUMPS_c(&mumps->id); if (mumps->id.INFOG(1) < 0) { - if (mumps->id.INFO(1) == -13) SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory %d megabytes\n",mumps->id.INFO(2)); - else SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: INFO(1)=%d, INFO(2)=%d\n",mumps->id.INFO(1),mumps->id.INFO(2)); + if (mumps->id.INFO(1) == -13) { + if (mumps->id.INFO(2) < 0) { + SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory %d bytes\n",-mumps->id.INFO(2)*1.e6); + } else { + SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory %d bytes\n",mumps->id.INFO(2)); + } + } else SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: INFO(1)=%d, INFO(2)=%d\n",mumps->id.INFO(1),mumps->id.INFO(2)); } On Sun, Apr 13, 2014 at 2:42 AM, Gong Ding wrote: > Dear Sir, > I see an error: > > Transient compute from 0 ps step 0.2 ps to 3000 ps > t = 0.2 ps, dt = 0.2 ps > -------------------------------------------------------------------------------- > process particle generation....................ok > Gummel electron equation CONVERGED_ATOL, residual 5.41824e-12, its 4 > Gummel hole equation CONVERGED_ATOL, residual 4.6721e-13, its 4 > --------------------- Error Message ------------------------------------ > Fatal Error:Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 990883164 megabytes > at line 721 in /tmp/build/rhel6-64/build.petsc.3.4.4.859b2b9/src/src/mat/impls/aij/mpi/mumps/mumps.c > ------------------------------------------------------------------------ > which reported that MUMPS requires 990883164 megabytes of memory. > The memory requirement is too huge so I takes a look at the reason. > > The code in petsc is listed below: > > if (mumps->id.INFOG(1) < 0) { > if (mumps->id.INFO(1) == -13) > SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory %d megabytes\n",mumps->id.INFO(2)); > } > > However, the mumps user's guide said > > -13 An error occurred in a Fortran ALLOCATE statement. The size that the package requested is > available in INFO(2). If INFO(2) is negative, then the size that the package requested is obtained > by multiplying the absolute value of INFO(2) by 1 million. > > It is clear that 990883164 megabytes should be 990883164 bytes here. Hope this bug can be fixed. > > Regards, > Gong Ding > > > > From knepley at gmail.com Sun Apr 13 12:28:12 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 13 Apr 2014 12:28:12 -0500 Subject: [petsc-users] sundials and petsc4py In-Reply-To: <9B9BC0D0-6D37-4328-98F9-870730AB7CE0@inria.fr> References: <9B9BC0D0-6D37-4328-98F9-870730AB7CE0@inria.fr> Message-ID: On Sun, Apr 13, 2014 at 12:01 PM, Veltz Romain wrote: > Dear users of pets, > > I am totally new to petsc. I have been using trilinos a lot but compiling > of C++ code sickens me. 
> That is why I turned to petsc4py. I would like to use the sundials
> time-stepper for petsc4py but I don't know how to install it.
>
> For now, and following https://pypi.python.org/pypi/petsc4py/3.4, I use:
>
> pip install petsc==dev petsc4py==dev
>
When you install petsc4py it will check the env var PETSC_DIR for an existing installation of PETSc. This is the way you can overlay a custom installation with 3rd-party packages like sundials. You can install your own PETSc, configuring with --download-sundials, and then install petsc4py with PETSC_DIR (and maybe PETSC_ARCH) pointing to this installation.

  Matt

> But how can I specify to install the sundials part?
>
> Thank you in advance for your help,
>
> Best
>
> Veltz Romain
>
> Neuromathcomp Project Team Inria Sophia Antipolis Méditerranée
> 2004 Route des Lucioles-BP 93 FR-06902 Sophia Antipolis
> http://www-sop.inria.fr/members/Romain.Veltz/public_html/Home.html

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zonexo at gmail.com Mon Apr 14 06:52:37 2014
From: zonexo at gmail.com (TAY wee-beng)
Date: Mon, 14 Apr 2014 19:52:37 +0800
Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90
Message-ID: <534BCC05.1000000@gmail.com>

Hi,

My code hangs, and I added mpi_barrier and print statements to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u, v, w arrays, so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90.

call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3"
call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4"
call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5"
call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6"
call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7"
call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8"
call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr)
--
Thank you.

Yours sincerely,

TAY wee-beng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov Mon Apr 14 08:05:02 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 14 Apr 2014 08:05:02 -0500
Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90
In-Reply-To: <534BCC05.1000000@gmail.com>
References: <534BCC05.1000000@gmail.com>
Message-ID:

Because IO doesn't always get flushed immediately, it may not be hanging at this point. It is better to use the option -start_in_debugger, then type cont in each debugger window. When you think it is "hanging", do a control C in each debugger window and type where to see where each process is; you can also look around in the debugger at variables to see why it is "hanging" at that point.

Barry

These routines don't have any parallel communication in them, so they are unlikely to hang.
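A minimal sketch of the debugger workflow described above, for a 4-process run (the executable name a.out and the variable inspected at the end are illustrative):

  mpirun -n 4 ./a.out -start_in_debugger
  (gdb) cont          # in each debugger window that appears
  ^C                  # interrupt each process once the run appears to hang
  (gdb) where         # print this rank's stack trace
  (gdb) print ierr    # inspect variables near the suspect call

Comparing the where output across the ranks shows which routine each process is stuck in and whether they are all waiting at the same point.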
On Apr 14, 2014, at 6:52 AM, TAY wee-beng wrote: > Hi, > > My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" > call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > -- > Thank you. > > Yours sincerely, > > TAY wee-beng > From adrimacortes at gmail.com Mon Apr 14 09:02:59 2014 From: adrimacortes at gmail.com (=?UTF-8?Q?Adriano_C=C3=B4rtes?=) Date: Mon, 14 Apr 2014 17:02:59 +0300 Subject: [petsc-users] Dump a matrix to a .py file Message-ID: Dear all, I'm on a debugging process, and I'd like to know if there is a way of using PetscViewer to dump Mat and Vec objects to a .py file, to be load and manipulated with numpy/scipy to inspect some properties. Many thanks in advance. Adriano. -- Adriano C?rtes ================================================= Post-doctoral fellow Center for Numerical Porous Media King Abdullah University of Science and Technology (KAUST) From zonexo at gmail.com Mon Apr 14 09:40:40 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 14 Apr 2014 22:40:40 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534BCC05.1000000@gmail.com> Message-ID: <534BF368.3070208@gmail.com> Hi Barry, I'm not too sure how to do it. I'm running mpi. So I run: mpirun -n 4 ./a.out -start_in_debugger I got the msg below. Before the gdb windows appear (thru x11), the program aborts. Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. /_*mpirun -n 4 ./a.out -start_in_debugger*_//_* *_//_*--------------------------------------------------------------------------*_//_* *_//_*An MPI process has executed an operation involving a call to the*_//_* *_//_*"fork()" system call to create a child process. Open MPI is currently*_//_* *_//_*operating in a condition that could result in memory corruption or*_//_* *_//_*other system errors; your MPI job may hang, crash, or produce silent*_//_* *_//_*data corruption. The use of fork() (or system() or other calls that*_//_* *_//_*create child processes) is strongly discouraged. 
*_//_* *_//_* *_//_*The process that invoked fork was:*_//_* *_//_* *_//_* Local host: n12-76 (PID 20235)*_//_* *_//_* MPI_COMM_WORLD rank: 2*_//_* *_//_* *_//_*If you are *absolutely sure* that your application will successfully*_//_* *_//_*and correctly survive a call to fork(), you may disable this warning*_//_* *_//_*by setting the mpi_warn_on_fork MCA parameter to 0.*_//_* *_//_*--------------------------------------------------------------------------*_//_* *_//_*[2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76*_//_* *_//_*[0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76*_//_* *_//_*[1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76*_//_* *_//_*[3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76*_//_* *_//_*[n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork*_//_* *_//_*[n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages*_//_* *_//_* *_//_*....*_//_* *_//_* *_//_* 1*_//_* *_//_*[1]PETSC ERROR: ------------------------------------------------------------------------*_//_* *_//_*[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range*_//_* *_//_*[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger*_//_* *_//_*[1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors*_//_* *_//_*[1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run *_//_* *_//_*[1]PETSC ERROR: to get more information on the crash.*_//_* *_//_*[1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null)*_//_* *_//_*[3]PETSC ERROR: ------------------------------------------------------------------------*_//_* *_//_*[3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range*_//_* *_//_*[3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger*_//_* *_//_*[3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors*_//_* *_//_*[3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run *_//_* *_//_*[3]PETSC ERROR: to get more information on the crash.*_//_* *_//_*[3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null)*_/ ... Thank you. Yours sincerely, TAY wee-beng On 14/4/2014 9:05 PM, Barry Smith wrote: > Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. > > Barry > > This routines don?t have any parallel communication in them so are unlikely to hang. > > On Apr 14, 2014, at 6:52 AM, TAY wee-beng wrote: > >> Hi, >> >> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? 
I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> -- >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 14 09:43:39 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 14 Apr 2014 09:43:39 -0500 Subject: [petsc-users] Dump a matrix to a .py file In-Reply-To: References: Message-ID: On Mon, Apr 14, 2014 at 9:02 AM, Adriano C?rtes wrote: > Dear all, > > I'm on a debugging process, and I'd like to know if there is a way of > using PetscViewer to dump Mat and Vec objects to a .py file, to be > load and manipulated with numpy/scipy to inspect some properties. > break in the debugger: (gdb) call MatView(M, PETSC_VIEWER_BINARY_(PETSC_COMM_WORLD)) which writes the 'binaryoutput' file. Then in Python make sure you have $PETSC_DIR/bin/pythonscripts in your PYTHONPATH, and >>> import PetscBinaryIO >>> io = PetscBinaryIO.PetscBinaryIO() >>> objects = io.readBinaryFile('file.dat') I just tested this and it worked for me. Matt > Many thanks in advance. > Adriano. > > -- > Adriano C?rtes > ================================================= > Post-doctoral fellow > Center for Numerical Porous Media > King Abdullah University of Science and Technology (KAUST) > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 14 09:44:24 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 14 Apr 2014 09:44:24 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534BF368.3070208@gmail.com> References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> Message-ID: On Mon, Apr 14, 2014 at 9:40 AM, TAY wee-beng wrote: > Hi Barry, > > I'm not too sure how to do it. I'm running mpi. So I run: > > mpirun -n 4 ./a.out -start_in_debugger > add -debugger_pause 10 Matt > I got the msg below. Before the gdb windows appear (thru x11), the program > aborts. > > Also I tried running in another cluster and it worked. Also tried in the > current cluster in debug mode and it worked too. > > *mpirun -n 4 ./a.out -start_in_debugger* > > *--------------------------------------------------------------------------* > *An MPI process has executed an operation involving a call to the* > *"fork()" system call to create a child process. 
Open MPI is currently* > *operating in a condition that could result in memory corruption or* > *other system errors; your MPI job may hang, crash, or produce silent* > *data corruption. The use of fork() (or system() or other calls that* > *create child processes) is strongly discouraged. * > > *The process that invoked fork was:* > > * Local host: n12-76 (PID 20235)* > * MPI_COMM_WORLD rank: 2* > > *If you are *absolutely sure* that your application will successfully* > *and correctly survive a call to fork(), you may disable this warning* > *by setting the mpi_warn_on_fork MCA parameter to 0.* > > *--------------------------------------------------------------------------* > *[2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display > localhost:50.0 on machine n12-76* > *[0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display > localhost:50.0 on machine n12-76* > *[1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display > localhost:50.0 on machine n12-76* > *[3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display > localhost:50.0 on machine n12-76* > *[n12-76:20232] 3 more processes have sent help message > help-mpi-runtime.txt / mpi_init:warn-fork* > *[n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see > all help / error messages* > > *....* > > * 1* > *[1]PETSC ERROR: > ------------------------------------------------------------------------* > *[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range* > *[1]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger* > *[1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [1]PETSC > ERROR: or try http://valgrind.org on GNU/linux and > Apple Mac OS X to find memory corruption errors* > *[1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, > and run * > *[1]PETSC ERROR: to get more information on the crash.* > *[1]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file (null)* > *[3]PETSC ERROR: > ------------------------------------------------------------------------* > *[3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range* > *[3]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger* > *[3]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [3]PETSC > ERROR: or try http://valgrind.org on GNU/linux and > Apple Mac OS X to find memory corruption errors* > *[3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, > and run * > *[3]PETSC ERROR: to get more information on the crash.* > *[3]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file (null)* > > ... > > Thank you. > > Yours sincerely, > > TAY wee-beng > > On 14/4/2014 9:05 PM, Barry Smith wrote: > > Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. > > Barry > > This routines don?t have any parallel communication in them so are unlikely to hang. 
> > On Apr 14, 2014, at 6:52 AM, TAY wee-beng wrote: > > > Hi, > > My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" > call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > -- > Thank you. > > Yours sincerely, > > TAY wee-beng > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Mon Apr 14 10:01:41 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 14 Apr 2014 23:01:41 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> Message-ID: <534BF855.9050403@gmail.com> On 14/4/2014 10:44 PM, Matthew Knepley wrote: > On Mon, Apr 14, 2014 at 9:40 AM, TAY wee-beng > wrote: > > Hi Barry, > > I'm not too sure how to do it. I'm running mpi. So I run: > > mpirun -n 4 ./a.out -start_in_debugger > > > add -debugger_pause 10 It seems that I need to use a value of 60. It gives a segmentation fault after the location I debug earlier: #0 0x00002b672cdee78a in f90array3daccessscalar_ () from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so #1 0x00002b672cdedcae in F90Array3dAccess () from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so #2 0x00002b672d2ad044 in dmdavecrestorearrayf903_ () from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so #3 0x00000000008a1d8d in fractional_initial_mp_initial_ () #4 0x0000000000539289 in MAIN__ () #5 0x000000000043c04c in main () What;'s that supposed to mean? > > Matt > > I got the msg below. Before the gdb windows appear (thru x11), the > program aborts. > > Also I tried running in another cluster and it worked. Also tried > in the current cluster in debug mode and it worked too. > > /_*mpirun -n 4 ./a.out -start_in_debugger*_//_* > *_//_*--------------------------------------------------------------------------*_//_* > *_//_*An MPI process has executed an operation involving a call to > the*_//_* > *_//_*"fork()" system call to create a child process. Open MPI is > currently*_//_* > *_//_*operating in a condition that could result in memory > corruption or*_//_* > *_//_*other system errors; your MPI job may hang, crash, or > produce silent*_//_* > *_//_*data corruption. The use of fork() (or system() or other > calls that*_//_* > *_//_*create child processes) is strongly discouraged. 
*_//_* > *_//_* > *_//_*The process that invoked fork was:*_//_* > *_//_* > *_//_* Local host: n12-76 (PID 20235)*_//_* > *_//_* MPI_COMM_WORLD rank: 2*_//_* > *_//_* > *_//_*If you are *absolutely sure* that your application will > successfully*_//_* > *_//_*and correctly survive a call to fork(), you may disable this > warning*_//_* > *_//_*by setting the mpi_warn_on_fork MCA parameter to 0.*_//_* > *_//_*--------------------------------------------------------------------------*_//_* > *_//_*[2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 > on display localhost:50.0 on machine n12-76*_//_* > *_//_*[0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 > on display localhost:50.0 on machine n12-76*_//_* > *_//_*[1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 > on display localhost:50.0 on machine n12-76*_//_* > *_//_*[3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 > on display localhost:50.0 on machine n12-76*_//_* > *_//_*[n12-76:20232] 3 more processes have sent help message > help-mpi-runtime.txt / mpi_init:warn-fork*_//_* > *_//_*[n12-76:20232] Set MCA parameter "orte_base_help_aggregate" > to 0 to see all help / error messages*_//_* > *_//_* > *_//_*....*_//_* > *_//_* > *_//_* 1*_//_* > *_//_*[1]PETSC ERROR: > ------------------------------------------------------------------------*_//_* > *_//_*[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range*_//_* > *_//_*[1]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger*_//_* > *_//_*[1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X > to find memory corruption errors*_//_* > *_//_*[1]PETSC ERROR: configure using --with-debugging=yes, > recompile, link, and run *_//_* > *_//_*[1]PETSC ERROR: to get more information on the crash.*_//_* > *_//_*[1]PETSC ERROR: User provided function() line 0 in unknown > directory unknown file (null)*_//_* > *_//_*[3]PETSC ERROR: > ------------------------------------------------------------------------*_//_* > *_//_*[3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range*_//_* > *_//_*[3]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger*_//_* > *_//_*[3]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X > to find memory corruption errors*_//_* > *_//_*[3]PETSC ERROR: configure using --with-debugging=yes, > recompile, link, and run *_//_* > *_//_*[3]PETSC ERROR: to get more information on the crash.*_//_* > *_//_*[3]PETSC ERROR: User provided function() line 0 in unknown > directory unknown file (null)*_/ > > ... > > Thank you. > > Yours sincerely, > > TAY wee-beng > > On 14/4/2014 9:05 PM, Barry Smith wrote: >> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >> >> Barry >> >> This routines don?t have any parallel communication in them so are unlikely to hang. 
>> >> On Apr 14, 2014, at 6:52 AM, TAY wee-beng wrote: >> >>> Hi, >>> >>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> -- >>> Thank you. >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 14 10:16:14 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 14 Apr 2014 10:16:14 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534BF855.9050403@gmail.com> References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> <534BF855.9050403@gmail.com> Message-ID: On Mon, Apr 14, 2014 at 10:01 AM, TAY wee-beng wrote: > On 14/4/2014 10:44 PM, Matthew Knepley wrote: > > On Mon, Apr 14, 2014 at 9:40 AM, TAY wee-beng wrote: > >> Hi Barry, >> >> I'm not too sure how to do it. I'm running mpi. So I run: >> >> mpirun -n 4 ./a.out -start_in_debugger >> > > add -debugger_pause 10 > > > It seems that I need to use a value of 60. It gives a segmentation fault > after the location I debug earlier: > > #0 0x00002b672cdee78a in f90array3daccessscalar_ () > from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so > #1 0x00002b672cdedcae in F90Array3dAccess () > from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so > #2 0x00002b672d2ad044 in dmdavecrestorearrayf903_ () > from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so > #3 0x00000000008a1d8d in fractional_initial_mp_initial_ () > #4 0x0000000000539289 in MAIN__ () > #5 0x000000000043c04c in main () > > What;'s that supposed to mean? > It appears that you are restoring an array that you did not actually get. Matt > Matt > > >> I got the msg below. Before the gdb windows appear (thru x11), the >> program aborts. >> >> Also I tried running in another cluster and it worked. Also tried in the >> current cluster in debug mode and it worked too. >> >> *mpirun -n 4 ./a.out -start_in_debugger* >> >> *--------------------------------------------------------------------------* >> *An MPI process has executed an operation involving a call to the* >> *"fork()" system call to create a child process. 
Open MPI is currently* >> *operating in a condition that could result in memory corruption or* >> *other system errors; your MPI job may hang, crash, or produce silent* >> *data corruption. The use of fork() (or system() or other calls that* >> *create child processes) is strongly discouraged. * >> >> *The process that invoked fork was:* >> >> * Local host: n12-76 (PID 20235)* >> * MPI_COMM_WORLD rank: 2* >> >> *If you are *absolutely sure* that your application will successfully* >> *and correctly survive a call to fork(), you may disable this warning* >> *by setting the mpi_warn_on_fork MCA parameter to 0.* >> >> *--------------------------------------------------------------------------* >> *[2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display >> localhost:50.0 on machine n12-76* >> *[0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display >> localhost:50.0 on machine n12-76* >> *[1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display >> localhost:50.0 on machine n12-76* >> *[3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display >> localhost:50.0 on machine n12-76* >> *[n12-76:20232] 3 more processes have sent help message >> help-mpi-runtime.txt / mpi_init:warn-fork* >> *[n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see >> all help / error messages* >> >> *....* >> >> * 1* >> *[1]PETSC ERROR: >> ------------------------------------------------------------------------* >> *[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range* >> *[1]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger* >> *[1]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> [1]PETSC >> ERROR: or try http://valgrind.org on GNU/linux and >> Apple Mac OS X to find memory corruption errors* >> *[1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, >> and run * >> *[1]PETSC ERROR: to get more information on the crash.* >> *[1]PETSC ERROR: User provided function() line 0 in unknown directory >> unknown file (null)* >> *[3]PETSC ERROR: >> ------------------------------------------------------------------------* >> *[3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range* >> *[3]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger* >> *[3]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> [3]PETSC >> ERROR: or try http://valgrind.org on GNU/linux and >> Apple Mac OS X to find memory corruption errors* >> *[3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, >> and run * >> *[3]PETSC ERROR: to get more information on the crash.* >> *[3]PETSC ERROR: User provided function() line 0 in unknown directory >> unknown file (null)* >> >> ... >> >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 14/4/2014 9:05 PM, Barry Smith wrote: >> >> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >> >> Barry >> >> This routines don?t have any parallel communication in them so are unlikely to hang. 
>> >> On Apr 14, 2014, at 6:52 AM, TAY wee-beng wrote: >> >> >> Hi, >> >> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> -- >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 14 13:20:07 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 14 Apr 2014 13:20:07 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534BF855.9050403@gmail.com> References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> <534BF855.9050403@gmail.com> Message-ID: <1FC7FC88-D97F-49EA-8831-379AA150B772@mcs.anl.gov> Please configure a PETSC_ARCH using --with-debugging=yes, and use the -start_in_debugger option with a build that has debugging turned on. This way it will give you exact line numbers and the values of variables when it crashes. After it crashes use ?print ptr? and ?print address? to see what those values are. Barry On Apr 14, 2014, at 10:01 AM, TAY wee-beng wrote: > On 14/4/2014 10:44 PM, Matthew Knepley wrote: >> On Mon, Apr 14, 2014 at 9:40 AM, TAY wee-beng wrote: >> Hi Barry, >> >> I'm not too sure how to do it. I'm running mpi. So I run: >> >> mpirun -n 4 ./a.out -start_in_debugger >> >> add -debugger_pause 10 > > It seems that I need to use a value of 60. It gives a segmentation fault after the location I debug earlier: > > #0 0x00002b672cdee78a in f90array3daccessscalar_ () > from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so > #1 0x00002b672cdedcae in F90Array3dAccess () > from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so > #2 0x00002b672d2ad044 in dmdavecrestorearrayf903_ () > from /home/wtay/Lib/petsc-3.4.4_shared_rel/lib/libpetsc.so > #3 0x00000000008a1d8d in fractional_initial_mp_initial_ () > #4 0x0000000000539289 in MAIN__ () > #5 0x000000000043c04c in main () > > What;'s that supposed to mean? >> >> Matt >> >> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. 
>> >> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >> >> mpirun -n 4 ./a.out -start_in_debugger >> -------------------------------------------------------------------------- >> An MPI process has executed an operation involving a call to the >> "fork()" system call to create a child process. Open MPI is currently >> operating in a condition that could result in memory corruption or >> other system errors; your MPI job may hang, crash, or produce silent >> data corruption. The use of fork() (or system() or other calls that >> create child processes) is strongly discouraged. >> >> The process that invoked fork was: >> >> Local host: n12-76 (PID 20235) >> MPI_COMM_WORLD rank: 2 >> >> If you are *absolutely sure* that your application will successfully >> and correctly survive a call to fork(), you may disable this warning >> by setting the mpi_warn_on_fork MCA parameter to 0. >> -------------------------------------------------------------------------- >> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >> >> .... >> >> 1 >> [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [1]PETSC ERROR: to get more information on the crash. >> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >> [3]PETSC ERROR: ------------------------------------------------------------------------ >> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [3]PETSC ERROR: to get more information on the crash. >> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >> >> ... >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 14/4/2014 9:05 PM, Barry Smith wrote: >>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? 
do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>> >>> Barry >>> >>> This routines don?t have any parallel communication in them so are unlikely to hang. >>> >>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>> >>> wrote: >>> >>> >>>> Hi, >>>> >>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> -- >>>> Thank you. >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > From bsmith at mcs.anl.gov Mon Apr 14 13:26:54 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 14 Apr 2014 13:26:54 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534BF368.3070208@gmail.com> References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> Message-ID: <909E1CF9-F3A2-43E2-8F25-405070CFB192@mcs.anl.gov> Please send the code that creates da_w and the declarations of w_array Barry On Apr 14, 2014, at 9:40 AM, TAY wee-beng wrote: > Hi Barry, > > I'm not too sure how to do it. I'm running mpi. So I run: > > mpirun -n 4 ./a.out -start_in_debugger > > I got the msg below. Before the gdb windows appear (thru x11), the program aborts. > > Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. > > mpirun -n 4 ./a.out -start_in_debugger > -------------------------------------------------------------------------- > An MPI process has executed an operation involving a call to the > "fork()" system call to create a child process. Open MPI is currently > operating in a condition that could result in memory corruption or > other system errors; your MPI job may hang, crash, or produce silent > data corruption. The use of fork() (or system() or other calls that > create child processes) is strongly discouraged. > > The process that invoked fork was: > > Local host: n12-76 (PID 20235) > MPI_COMM_WORLD rank: 2 > > If you are *absolutely sure* that your application will successfully > and correctly survive a call to fork(), you may disable this warning > by setting the mpi_warn_on_fork MCA parameter to 0. 
> -------------------------------------------------------------------------- > [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 > [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 > [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 > [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 > [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork > [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages > > .... > > 1 > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run > [1]PETSC ERROR: to get more information on the crash. > [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) > [3]PETSC ERROR: ------------------------------------------------------------------------ > [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run > [3]PETSC ERROR: to get more information on the crash. > [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) > > ... > Thank you. > > Yours sincerely, > > TAY wee-beng > > On 14/4/2014 9:05 PM, Barry Smith wrote: >> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >> >> Barry >> >> This routines don?t have any parallel communication in them so are unlikely to hang. >> >> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >> >> wrote: >> >> >>> Hi, >>> >>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. 
>>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> -- >>> Thank you. >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> > From zonexo at gmail.com Mon Apr 14 21:32:12 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 15 Apr 2014 10:32:12 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <909E1CF9-F3A2-43E2-8F25-405070CFB192@mcs.anl.gov> References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> <909E1CF9-F3A2-43E2-8F25-405070CFB192@mcs.anl.gov> Message-ID: <534C9A2C.5060404@gmail.com> Hi Barry, As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. My code works in this way: */DM da_u,da_v,da_w/**/ /**/ /**/Vec u_local,u_global,v_local,v_global,w_local,w_global,p_local,p_global/**/ /**/ /**/call DMDACreate3d(MPI_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y,&/**/ /**/ /**/size_z,1,1,num_procs,1,stencil_width,lx,ly,lz,da_u,ierr)/**/ /**/ /**/call DMDACreate3d(MPI_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y,&/**/ /**/ /**/size_z,1,1,num_procs,1,stencil_width,lx,ly,lz,da_v,ierr)/**/ /**//**/ /**/ call DMDACreate3d(MPI_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y,&/**/ /**/ /**/size_z,1,1,num_procs,1,stencil_width,lx,ly,lz,da_w,ierr)/**/ /**/ /**/call DMCreateGlobalVector(da_u,u_global,ierr)/**/ /**/ /**/ call DMCreateLocalVector(da_u,u_local,ierr)/**/ /**//**/ /**/ call DMCreateGlobalVector(da_v,v_global,ierr)/**/ /**/ /**/ call DMCreateLocalVector(da_v,v_local,ierr)/**/ /**//**/ /**/ call DMCreateGlobalVector(da_w,w_global,ierr)/**/ /**/ /**/ call DMCreateLocalVector(da_w,w_local,ierr)/**/ /**/ /**/.../**/ /**/ /**/To access the arrays, I did the following:/**/ /**/ /**/call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr); call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr); call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr)/**/ /**/ /**/call I_IIB_uv_initial_1st_dm0(...,u_array,v_array,w_array)/**/ /**/ /**/call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr)/**/ /**/ /**/call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr)/**/-> hangs at this point/**/or/**/ /**/ /**/call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> hangs at this point/**/ /**/ /**/subroutine I_IIB_uv_initial_1st_dm0(...,I_cell_w,u,v,w)/**/ /**/ /**/integer :: i,j,k,ijk/**/ /**/ /**/.../**/ /**/ /**/real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:)/**/ /**/ /**/.../**/ /**/ u(i-1,j-1,k-1)=0./**/ /**/ /**/.../**/ /**/ /**/ v(i-1,j-1,k-1)=0./**/ /**/ /**/.../**/ /**/ /**/ w(i-1,j-1,k-1)=0./**/ /**/ 
/**/.../**/ /**/ /**/end subroutine I_IIB_uv_initial_1st_dm0/* Thank you Yours sincerely, TAY wee-beng On 15/4/2014 2:26 AM, Barry Smith wrote: > Please send the code that creates da_w and the declarations of w_array > > Barry > > On Apr 14, 2014, at 9:40 AM, TAY wee-beng wrote: > >> Hi Barry, >> >> I'm not too sure how to do it. I'm running mpi. So I run: >> >> mpirun -n 4 ./a.out -start_in_debugger >> >> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >> >> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >> >> mpirun -n 4 ./a.out -start_in_debugger >> -------------------------------------------------------------------------- >> An MPI process has executed an operation involving a call to the >> "fork()" system call to create a child process. Open MPI is currently >> operating in a condition that could result in memory corruption or >> other system errors; your MPI job may hang, crash, or produce silent >> data corruption. The use of fork() (or system() or other calls that >> create child processes) is strongly discouraged. >> >> The process that invoked fork was: >> >> Local host: n12-76 (PID 20235) >> MPI_COMM_WORLD rank: 2 >> >> If you are *absolutely sure* that your application will successfully >> and correctly survive a call to fork(), you may disable this warning >> by setting the mpi_warn_on_fork MCA parameter to 0. >> -------------------------------------------------------------------------- >> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >> >> .... >> >> 1 >> [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [1]PETSC ERROR: to get more information on the crash. >> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >> [3]PETSC ERROR: ------------------------------------------------------------------------ >> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [3]PETSC ERROR: to get more information on the crash. 
>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >> >> ... >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 14/4/2014 9:05 PM, Barry Smith wrote: >>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>> >>> Barry >>> >>> This routines don?t have any parallel communication in them so are unlikely to hang. >>> >>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>> >>> wrote: >>> >>> >>>> Hi, >>>> >>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> -- >>>> Thank you. >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Apr 14 21:42:24 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 14 Apr 2014 21:42:24 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534C9A2C.5060404@gmail.com> References: <534BCC05.1000000@gmail.com> <534BF368.3070208@gmail.com> <909E1CF9-F3A2-43E2-8F25-405070CFB192@mcs.anl.gov> <534C9A2C.5060404@gmail.com> Message-ID: <87lhv7pb3j.fsf@jedbrown.org> TAY wee-beng writes: > Hi Barry, > > As I mentioned earlier, the code works fine in PETSc debug mode but > fails in non-debug mode. > > My code works in this way: > > */DM da_u,da_v,da_w/**/ > /**/ > /**/Vec This formatting is NOT easy to read. Please send text. http://lists.mcs.anl.gov/pipermail/petsc-users/2014-April/021239.html > /**/To access the arrays, I did the following:/**/ > /**/ > /**/call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr); call > DMDAVecGetArrayF90(da_v,v_local,v_array,ierr); call > DMDAVecGetArrayF90(da_w,w_local,w_array,ierr)/**/ > /**/ > /**/call I_IIB_uv_initial_1st_dm0(...,u_array,v_array,w_array)/**/ > /**/ > /**/call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr)/**/ > /**/ > /**/call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr)/**/-> hangs > at this point/**/or/**/ Which processes hang here? Where are the other processes? Get stack traces for each process. You can build in "optimized" mode with debugging symbols. 
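A minimal sketch of that workflow: an optimized PETSc build that keeps debugging symbols, followed by attaching gdb to each apparently hung rank to collect a stack trace. Only the debugging-relevant configure options are shown, and the pid is illustrative (taken from the attach messages quoted earlier); adjust both to the actual build and run.

  ./configure --with-debugging=0 COPTFLAGS='-O2 -g' FOPTFLAGS='-O2 -g'   # optimized, but keeps symbols and line numbers
  make
  mpirun -n 4 ./a.out                  # reproduce the hang
  ps -ef | grep a.out                  # find the pid of each rank
  gdb -p 20233                         # attach to one rank; repeat for the others
  (gdb) bt                             # print that rank's stack trace

Comparing the stack traces across ranks shows which call each process is stuck in.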
-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From zonexo at gmail.com Mon Apr 14 21:47:17 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 15 Apr 2014 10:47:17 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534C9A2C.5060404@gmail.com> References: <534C9A2C.5060404@gmail.com> Message-ID: <534C9DB5.9070407@gmail.com> Hi Barry, As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. I have attached my code. Thank you Yours sincerely, TAY wee-beng On 15/4/2014 2:26 AM, Barry Smith wrote: > Please send the code that creates da_w and the declarations of w_array > > Barry > > On Apr 14, 2014, at 9:40 AM, TAY wee-beng wrote: > >> Hi Barry, >> >> I'm not too sure how to do it. I'm running mpi. So I run: >> >> mpirun -n 4 ./a.out -start_in_debugger >> >> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >> >> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >> >> mpirun -n 4 ./a.out -start_in_debugger >> -------------------------------------------------------------------------- >> An MPI process has executed an operation involving a call to the >> "fork()" system call to create a child process. Open MPI is currently >> operating in a condition that could result in memory corruption or >> other system errors; your MPI job may hang, crash, or produce silent >> data corruption. The use of fork() (or system() or other calls that >> create child processes) is strongly discouraged. >> >> The process that invoked fork was: >> >> Local host: n12-76 (PID 20235) >> MPI_COMM_WORLD rank: 2 >> >> If you are *absolutely sure* that your application will successfully >> and correctly survive a call to fork(), you may disable this warning >> by setting the mpi_warn_on_fork MCA parameter to 0. >> -------------------------------------------------------------------------- >> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >> >> .... >> >> 1 >> [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [1]PETSC ERROR: or seehttp://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or tryhttp://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [1]PETSC ERROR: to get more information on the crash. 
>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >> [3]PETSC ERROR: ------------------------------------------------------------------------ >> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [3]PETSC ERROR: or seehttp://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or tryhttp://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [3]PETSC ERROR: to get more information on the crash. >> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >> >> ... >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 14/4/2014 9:05 PM, Barry Smith wrote: >>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>> >>> Barry >>> >>> This routines don?t have any parallel communication in them so are unlikely to hang. >>> >>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>> >>> wrote: >>> >>> >>>> Hi, >>>> >>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> -- >>>> Thank you. >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- DM da_u,da_v,da_w Vec u_local,u_global,v_local,v_global,w_local,w_global,p_local,p_global call DMDACreate3d(MPI_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y,& size_z,1,1,num_procs,1,stencil_width,lx,ly,lz,da_u,ierr) call DMDACreate3d(MPI_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y,& size_z,1,1,num_procs,1,stencil_width,lx,ly,lz,da_v,ierr) call DMDACreate3d(MPI_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y,& size_z,1,1,num_procs,1,stencil_width,lx,ly,lz,da_w,ierr) call DMCreateGlobalVector(da_u,u_global,ierr) call DMCreateLocalVector(da_u,u_local,ierr) call DMCreateGlobalVector(da_v,v_global,ierr) call DMCreateLocalVector(da_v,v_local,ierr) call DMCreateGlobalVector(da_w,w_global,ierr) call DMCreateLocalVector(da_w,w_local,ierr) ... To access the arrays, I did the following: call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr); call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr); call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) call I_IIB_uv_initial_1st_dm0(...,u_array,v_array,w_array) call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) -> hangs at this point or call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> hangs at this point subroutine I_IIB_uv_initial_1st_dm0(...,I_cell_w,u,v,w) integer :: i,j,k,ijk ... real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) ... u(i-1,j-1,k-1)=0. ... v(i-1,j-1,k-1)=0. ... w(i-1,j-1,k-1)=0. ... end subroutine I_IIB_uv_initial_1st_dm0 From leishi at ku.edu Tue Apr 15 03:06:38 2014 From: leishi at ku.edu (Lei Shi) Date: Tue, 15 Apr 2014 03:06:38 -0500 Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust and cusp (Lei Shi) In-Reply-To: References: Message-ID: Hi Christophe, Thanks a lot. I add --with-clanguage=cxx option. It works well now. This option is not on the manual for the compilation with gpu. Sincerely Yours, Lei Shi --------- On Fri, Apr 11, 2014 at 3:42 AM, Christophe Ortiz < christophe.ortiz at ciemat.es> wrote: > Hi, > > Maybe I'm wrong but I think the following options are missing if you want > to use C++: > > --with-cxx=(g++ or icc....) --with-clanguage=cxx > > Christophe > > >> >> Message: 2 >> Date: Fri, 11 Apr 2014 03:26:09 -0500 >> From: Lei Shi >> To: petsc-users at mcs.anl.gov >> Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, >> mpicc can not recognize c++ key word in thrust and cusp >> Message-ID: >> > V9JJr+8Np5JaOKqgrkS12KJ6TaO3V4wM9YPo-NG29EU-A at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> Hi guys, >> >> I tried to compile petsc-dev with cuda5, but failed. I think the problem >> is the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust >> and cusp. Please help me out. 
here is my option file >> >> configure_options = [ >> >> >> '--with-shared-libraries=0', >> >> >> '--with-mpi-dir=/media/public/MPICH3/mpich-install/', >> >> >> '--with-cuda=1', >> >> >> '--with-cuda-dir=/usr/local/cuda', >> >> >> '--with-cudac=/usr/local/cuda/bin/nvcc', >> >> >> '--with-cuda-arch=sm_20', >> >> >> '--with-cusp-dir=/usr/local/cuda/include/cusp', >> >> >> '--with-thrust=1', >> >> >> '--with-cusp=1', >> >> >> '--with-debugging=0', >> >> >> '--with-precision=double', >> >> >> 'COPTFLAGS=-O3', >> >> >> 'CXXOPTFLAGS=-O3', >> >> >> 'FOPTFLAGS=-O3', >> >> >> ] >> >> and the error i got >> >> ------------------------------------------------------------------------------------------- >> >> =============================================================================== >> >> >> Configuring PETSc to compile on your system >> >> >> >> =============================================================================== >> >> >> TESTING: alternateConfigureLibrary from >> PETSc.packages.petsc4py(config/PETSc/packages/petsc4py.py:65) >> \ >> Compilers: >> >> >> C Compiler: /media/public/MPICH3/mpich-install/bin/mpicc -Wall >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 >> >> CUDA Compiler: /usr/local/cuda/bin/nvcc -O -arch=sm_20 >> >> >> C++ Compiler: /media/public/MPICH3/mpich-install/bin/mpicxx -Wall >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 >> >> Linkers: >> >> >> Static linker: /usr/bin/ar cr >> >> >> Dynamic linker: /usr/bin/ar >> >> >> make: >> >> >> MPI: >> >> >> Includes: -I/media/public/MPICH3/mpich-install/include >> >> >> BLAS/LAPACK: -llapack -lblas >> >> >> X: >> >> >> Library: -lX11 >> >> >> pthread: >> >> >> Library: -lpthread >> >> >> valgrind: >> >> >> cuda: >> >> >> Includes: -I/usr/local/cuda/include >> >> >> Library: -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda/lib64 >> -lcufft -lcublas -lcudart -lcusparse >> >> Arch: -arch=sm_20 >> >> >> cusp: >> >> >> Includes: -I/usr/local/cuda/include/cusp/ >> -I/usr/local/cuda/include/cusp/include >> >> thrust: >> >> >> sowing: >> >> >> c2html: >> >> >> PETSc: >> >> >> PETSC_ARCH: arch-cuda5-cg-opt >> >> >> PETSC_DIR: /home/leishi/work/development/3rd_party/petsc/petsc >> >> >> Clanguage: C >> >> >> Memory alignment: 16 >> >> >> Scalar type: real >> >> >> Precision: double >> >> >> shared libraries: disabled >> >> >> >> xxx=========================================================================xxx >> >> >> Configure stage complete. 
Now build PETSc libraries with (gnumake build): >> >> >> make PETSC_DIR=/home/leishi/work/development/3rd_party/petsc/petsc >> PETSC_ARCH=arch-cuda5-cg-opt all >> >> >> xxx=========================================================================xxx >> >> >> Using C/C++ compile: /media/public/MPICH3/mpich-install/bin/mpicc -c -Wall >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 >> -I/home/leishi/work/development/3rd_party/pet\ >> sc/petsc/include >> >> -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include >> -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ >> -I/usr/local/cuda/include/cus\ >> p/include -I/media/public/MPICH3/mpich-install/include >> >> >> mpicc -show: gcc -I/media/public/MPICH3/mpich-install/include >> -L/media/public/MPICH3/mpich-install/lib -lmpich -lopa -lmpl -lrt >> -lpthread >> >> Using CUDA compile: /usr/local/cuda/bin/nvcc -O -arch=sm_20 -c >> --compiler-options=-Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 -I/home/leishi/work/development/3\ >> rd_party/petsc/petsc/include >> >> -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include >> -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ >> -I/usr/local/cuda\ >> /include/cusp/include -I/media/public/MPICH3/mpich-install/include >> >> >> ----------------------------------------- >> >> >> Using C/C++ linker: /media/public/MPICH3/mpich-install/bin/mpicc >> >> >> Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 >> >> ----------------------------------------- >> >> >> Using libraries: >> >> -L/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/lib >> -lpetsc -llapack -lblas -lX11 -lpthread -Wl,-rpath,/usr/local/cuda/lib64 >> -L/usr/local/cuda\ >> /lib64 -lcufft -lcublas -lcudart -lcusparse -lm >> -L/media/public/MPICH3/mpich-install/lib >> -L/usr/lib/gcc/x86_64-linux-gnu/4.6 -L/usr/lib/x86_64-linux-gnu >> -L/lib/x86_64-linux-gnu -L/usr/\ >> lib/gcc/x86_64-linux-gnu/4.8 -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl >> -lrt -lpthread -lgcc_s -ldl >> >> ------------------------------------------ >> >> >> Using mpiexec: /media/public/MPICH3/mpich-install/bin/mpiexec >> >> >> ========================================== >> >> >> Building PETSc using GNU Make with 18 build threads >> >> >> ========================================== >> >> >> make[2]: Entering directory >> `/home/leishi/work/development/3rd_party/petsc/petsc' >> >> >> Use "/usr/bin/make V=1" to see the verbose compile lines. 
>> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/arch.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/fhost.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/fuser.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/memc.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/mpiu.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/psleep.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sortd.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sorti.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/str.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sortip.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/pbarrier.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/pdisplay.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/ctable.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/psplit.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/select.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/mpimesg.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sseenabled.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/utils/mpitr.o >> >> >> >> In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, >> >> >> from /usr/local/cuda/include/cusp/complex.h:63, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: >> >> >> /usr/local/cuda/include/thrust/version.h:69:1: error: unknown type name >> ?namespace? >> >> /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, >> ?;?, ?asm? or ?__attribute__? before ?{? token >> >> In file included from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: >> >> >> /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No >> such >> file or directory >> >> compilation terminated. >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/objects/state.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/objects/aoptions.o >> >> >> CC arch-cuda5-cg-opt/obj/src/sys/objects/subcomm.o >> >> >> In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, >> >> >> from /usr/local/cuda/include/cusp/complex.h:63, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: >> >> >> /usr/local/cuda/include/thrust/version.h:69:1: error:* unknown type name >> ?namespace? * >> >> /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, >> ?;?, ?asm? or ?__attribute__? before ?{? 
token >> >> In file included from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, >> >> >> from >> >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: >> >> >> /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No >> such >> file or directory >> >> >> >> make: *** [all] Error 1 >> >> Sincerely Yours, >> >> Lei Shi >> --------- >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: < >> http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140411/4831a810/attachment.html >> > >> >> ------------------------------ >> >> _______________________________________________ >> petsc-users mailing list >> petsc-users at mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users >> >> >> End of petsc-users Digest, Vol 64, Issue 26 >> ******************************************* >> >> ---------------------------- >> Confidencialidad: >> Este mensaje y sus ficheros adjuntos se dirige exclusivamente a su >> destinatario y puede contener informaci?n privilegiada o confidencial. Si >> no es vd. el destinatario indicado, queda notificado de que la utilizaci?n, >> divulgaci?n y/o copia sin autorizaci?n est? prohibida en virtud de la >> legislaci?n vigente. Si ha recibido este mensaje por error, le rogamos que >> nos lo comunique inmediatamente respondiendo al mensaje y proceda a su >> destrucci?n. >> >> Disclaimer: >> This message and its attached files is intended exclusively for its >> recipients and may contain confidential information. If you received this >> e-mail in error you are hereby notified that any dissemination, copy or >> disclosure of this communication is strictly prohibited and may be >> unlawful. In this case, please notify us by a reply and delete this email >> and its contents immediately. >> ---------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 15 07:00:21 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 15 Apr 2014 07:00:21 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <534C9DB5.9070407@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> Message-ID: Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: > > Hi Barry, > > As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. > > I have attached my code. > > Thank you > > Yours sincerely, > > TAY wee-beng > > On 15/4/2014 2:26 AM, Barry Smith wrote: >> Please send the code that creates da_w and the declarations of w_array >> >> Barry >> >> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >> >> wrote: >> >> >>> Hi Barry, >>> >>> I'm not too sure how to do it. I'm running mpi. So I run: >>> >>> mpirun -n 4 ./a.out -start_in_debugger >>> >>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>> >>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>> >>> mpirun -n 4 ./a.out -start_in_debugger >>> -------------------------------------------------------------------------- >>> An MPI process has executed an operation involving a call to the >>> "fork()" system call to create a child process. 
Open MPI is currently >>> operating in a condition that could result in memory corruption or >>> other system errors; your MPI job may hang, crash, or produce silent >>> data corruption. The use of fork() (or system() or other calls that >>> create child processes) is strongly discouraged. >>> >>> The process that invoked fork was: >>> >>> Local host: n12-76 (PID 20235) >>> MPI_COMM_WORLD rank: 2 >>> >>> If you are *absolutely sure* that your application will successfully >>> and correctly survive a call to fork(), you may disable this warning >>> by setting the mpi_warn_on_fork MCA parameter to 0. >>> -------------------------------------------------------------------------- >>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>> >>> .... >>> >>> 1 >>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>> [1]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>> [1]PETSC ERROR: to get more information on the crash. >>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>> [3]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>> [3]PETSC ERROR: to get more information on the crash. >>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>> >>> ... >>> Thank you. >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>> >>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>> >>>> Barry >>>> >>>> This routines don?t have any parallel communication in them so are unlikely to hang. 
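A minimal sketch of such a valgrind run under MPI, following the FAQ page linked above; the process count and executable name are illustrative, and -malloc off turns off PETSc's own tracing allocator so valgrind sees the raw allocations.

  mpiexec -n 4 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./a.out -malloc off

Each rank writes its report to valgrind.log.<pid>; invalid reads or writes listed there usually point at the array accesses that only show up in the optimized build.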
>>>> >>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>> >>>> >>>> >>>> wrote: >>>> >>>> >>>> >>>>> Hi, >>>>> >>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>> >>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> -- >>>>> Thank you. >>>>> >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> >>>>> > > > > From quecat001 at gmail.com Tue Apr 15 11:01:13 2014 From: quecat001 at gmail.com (Que Cat) Date: Tue, 15 Apr 2014 11:01:13 -0500 Subject: [petsc-users] Test Jacobian Message-ID: Hello Petsc-users, I tested the hand-coded Jacobian, and received the follow error: Testing hand-coded Jacobian, if the ratio is O(1.e-8), the hand-coded Jacobian is probably correct. Run with -snes_test_display to show difference of hand-coded and finite difference Jacobian. Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (user-defined state) Norm of matrix ratio 1.15746e-08 difference 2.2254e-06 (constant state -1.0) Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (constant state 1.0) [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: SNESTest aborts after Jacobian test! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. What does "Object is in wrong state!" mean? What are the possible sources of this error? Thank you for your time. Que -------------- next part -------------- An HTML attachment was scrubbed... URL: From prbrune at gmail.com Tue Apr 15 11:09:40 2014 From: prbrune at gmail.com (Peter Brune) Date: Tue, 15 Apr 2014 11:09:40 -0500 Subject: [petsc-users] Test Jacobian In-Reply-To: References: Message-ID: Perhaps there might be a clue somewhere else in the message: [0]PETSC ERROR: SNESTest aborts after Jacobian test! All SNESTest does is compute the difference between the finite-difference Jacobian and the user Jacobian (congratulations! yours isn't very different!) and then aborts with the error that you have experienced here. It may finally be time to excise SNESTest entirely as we have way better alternatives now. Anyone mind if I do that? I'll make sure to update the docs appropriately. 
- Peter On Tue, Apr 15, 2014 at 11:01 AM, Que Cat wrote: > Hello Petsc-users, > > I tested the hand-coded Jacobian, and received the follow error: > > Testing hand-coded Jacobian, if the ratio is > O(1.e-8), the hand-coded Jacobian is probably correct. > Run with -snes_test_display to show difference > of hand-coded and finite difference Jacobian. > Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (user-defined > state) > Norm of matrix ratio 1.15746e-08 difference 2.2254e-06 (constant state > -1.0) > Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (constant state > 1.0) > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > > What does "Object is in wrong state!" mean? What are the possible sources > of this error? Thank you for your time. > > Que > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jianjun.xiao at kit.edu Tue Apr 15 11:12:38 2014 From: jianjun.xiao at kit.edu (Xiao, Jianjun (IKET)) Date: Tue, 15 Apr 2014 18:12:38 +0200 Subject: [petsc-users] Two DMDAs for conjugate heat transfer Message-ID: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE45@KIT-MSX-07.kit.edu> Dear developers, I am writing a CFD code to simulate the conjugate heat transfer. I would like to use two DMDAs: one is for the fluid cells, and the other one is for the solid cells. Here are the questions: 1. Is it possible to have two different DMDAs for such a purpose? How the data in these two DMDAs communicate with each other? Are there any similar examples? 2. How to deal with the load balancing if DMDA is used? Or it is simply impossible? Thank you. Best regards JJ From bsmith at mcs.anl.gov Tue Apr 15 11:30:35 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 15 Apr 2014 11:30:35 -0500 Subject: [petsc-users] Two DMDAs for conjugate heat transfer In-Reply-To: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE45@KIT-MSX-07.kit.edu> References: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE45@KIT-MSX-07.kit.edu> Message-ID: <630187E5-B795-4AA3-A482-793AA5BDCA2F@mcs.anl.gov> DMDA are for structured grids. That is each DMDA represents a structured grid. Do you have fluid everywhere and solid everywhere or are some cells fluid and some cells solid? Barry On Apr 15, 2014, at 11:12 AM, Xiao, Jianjun (IKET) wrote: > Dear developers, > > I am writing a CFD code to simulate the conjugate heat transfer. I would like to use two DMDAs: one is for the fluid cells, and the other one is for the solid cells. > > Here are the questions: > > 1. Is it possible to have two different DMDAs for such a purpose? How the data in these two DMDAs communicate with each other? Are there any similar examples? > > 2. How to deal with the load balancing if DMDA is used? Or it is simply impossible? > > Thank you. 
> > Best regards > JJ From bsmith at mcs.anl.gov Tue Apr 15 11:31:57 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 15 Apr 2014 11:31:57 -0500 Subject: [petsc-users] Test Jacobian In-Reply-To: References: Message-ID: <7C7267E8-801E-49A4-902E-0E58B51077C0@mcs.anl.gov> On Apr 15, 2014, at 11:09 AM, Peter Brune wrote: > Perhaps there might be a clue somewhere else in the message: > > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > > All SNESTest does is compute the difference between the finite-difference Jacobian and the user Jacobian (congratulations! yours isn't very different!) and then aborts with the error that you have experienced here. > > It may finally be time to excise SNESTest entirely as we have way better alternatives now. Anyone mind if I do that? I'll make sure to update the docs appropriately. Peter, This has not been done because there are two sets of alternatives and both have problems. We need to merge/fix up the other testing infrastructures before removing the -snes_type test Barry > > - Peter > > > On Tue, Apr 15, 2014 at 11:01 AM, Que Cat wrote: > Hello Petsc-users, > > I tested the hand-coded Jacobian, and received the follow error: > > Testing hand-coded Jacobian, if the ratio is > O(1.e-8), the hand-coded Jacobian is probably correct. > Run with -snes_test_display to show difference > of hand-coded and finite difference Jacobian. > Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (user-defined state) > Norm of matrix ratio 1.15746e-08 difference 2.2254e-06 (constant state -1.0) > Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (constant state 1.0) > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > > What does "Object is in wrong state!" mean? What are the possible sources of this error? Thank you for your time. > > Que > From balay at mcs.anl.gov Tue Apr 15 11:44:41 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Apr 2014 11:44:41 -0500 Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust and cusp (Lei Shi) In-Reply-To: References: Message-ID: You shouldn't need --with-clanguage=cxx for cuda build. The error is > >> In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, > >> from /usr/local/cuda/include/cusp/complex.h:63, > >> from > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, Hm - complex.h should not be picked up from 'cusp/' dir. It happens because of: > >> '--with-cusp-dir=/usr/local/cuda/include/cusp', You should use --with-cusp-dir=/usr/local/cuda/include Satish On Tue, 15 Apr 2014, Lei Shi wrote: > Hi Christophe, > > Thanks a lot. I add --with-clanguage=cxx option. It works well now. This > option is not on the manual for the compilation with gpu. 
> > > Sincerely Yours, > > Lei Shi > --------- > > > On Fri, Apr 11, 2014 at 3:42 AM, Christophe Ortiz < > christophe.ortiz at ciemat.es> wrote: > > > Hi, > > > > Maybe I'm wrong but I think the following options are missing if you want > > to use C++: > > > > --with-cxx=(g++ or icc....) --with-clanguage=cxx > > > > Christophe > > > > > >> > >> Message: 2 > >> Date: Fri, 11 Apr 2014 03:26:09 -0500 > >> From: Lei Shi > >> To: petsc-users at mcs.anl.gov > >> Subject: [petsc-users] Failed to compile the PETsc-dev with cuda, > >> mpicc can not recognize c++ key word in thrust and cusp > >> Message-ID: > >> >> V9JJr+8Np5JaOKqgrkS12KJ6TaO3V4wM9YPo-NG29EU-A at mail.gmail.com> > >> Content-Type: text/plain; charset="utf-8" > >> > >> Hi guys, > >> > >> I tried to compile petsc-dev with cuda5, but failed. I think the problem > >> is the PETsc-dev with cuda, mpicc can not recognize c++ key word in thrust > >> and cusp. Please help me out. here is my option file > >> > >> configure_options = [ > >> > >> > >> '--with-shared-libraries=0', > >> > >> > >> '--with-mpi-dir=/media/public/MPICH3/mpich-install/', > >> > >> > >> '--with-cuda=1', > >> > >> > >> '--with-cuda-dir=/usr/local/cuda', > >> > >> > >> '--with-cudac=/usr/local/cuda/bin/nvcc', > >> > >> > >> '--with-cuda-arch=sm_20', > >> > >> > >> '--with-cusp-dir=/usr/local/cuda/include/cusp', > >> > >> > >> '--with-thrust=1', > >> > >> > >> '--with-cusp=1', > >> > >> > >> '--with-debugging=0', > >> > >> > >> '--with-precision=double', > >> > >> > >> 'COPTFLAGS=-O3', > >> > >> > >> 'CXXOPTFLAGS=-O3', > >> > >> > >> 'FOPTFLAGS=-O3', > >> > >> > >> ] > >> > >> and the error i got > >> > >> ------------------------------------------------------------------------------------------- > >> > >> =============================================================================== > >> > >> > >> Configuring PETSc to compile on your system > >> > >> > >> > >> =============================================================================== > >> > >> > >> TESTING: alternateConfigureLibrary from > >> PETSc.packages.petsc4py(config/PETSc/packages/petsc4py.py:65) > >> \ > >> Compilers: > >> > >> > >> C Compiler: /media/public/MPICH3/mpich-install/bin/mpicc -Wall > >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 > >> > >> CUDA Compiler: /usr/local/cuda/bin/nvcc -O -arch=sm_20 > >> > >> > >> C++ Compiler: /media/public/MPICH3/mpich-install/bin/mpicxx -Wall > >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 > >> > >> Linkers: > >> > >> > >> Static linker: /usr/bin/ar cr > >> > >> > >> Dynamic linker: /usr/bin/ar > >> > >> > >> make: > >> > >> > >> MPI: > >> > >> > >> Includes: -I/media/public/MPICH3/mpich-install/include > >> > >> > >> BLAS/LAPACK: -llapack -lblas > >> > >> > >> X: > >> > >> > >> Library: -lX11 > >> > >> > >> pthread: > >> > >> > >> Library: -lpthread > >> > >> > >> valgrind: > >> > >> > >> cuda: > >> > >> > >> Includes: -I/usr/local/cuda/include > >> > >> > >> Library: -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda/lib64 > >> -lcufft -lcublas -lcudart -lcusparse > >> > >> Arch: -arch=sm_20 > >> > >> > >> cusp: > >> > >> > >> Includes: -I/usr/local/cuda/include/cusp/ > >> -I/usr/local/cuda/include/cusp/include > >> > >> thrust: > >> > >> > >> sowing: > >> > >> > >> c2html: > >> > >> > >> PETSc: > >> > >> > >> PETSC_ARCH: arch-cuda5-cg-opt > >> > >> > >> PETSC_DIR: /home/leishi/work/development/3rd_party/petsc/petsc > >> > >> > >> Clanguage: C > >> > >> > >> Memory alignment: 16 > >> > >> > >> Scalar type: real > >> > >> > >> 
Precision: double > >> > >> > >> shared libraries: disabled > >> > >> > >> > >> xxx=========================================================================xxx > >> > >> > >> Configure stage complete. Now build PETSc libraries with (gnumake build): > >> > >> > >> make PETSC_DIR=/home/leishi/work/development/3rd_party/petsc/petsc > >> PETSC_ARCH=arch-cuda5-cg-opt all > >> > >> > >> xxx=========================================================================xxx > >> > >> > >> Using C/C++ compile: /media/public/MPICH3/mpich-install/bin/mpicc -c -Wall > >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 > >> -I/home/leishi/work/development/3rd_party/pet\ > >> sc/petsc/include > >> > >> -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include > >> -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ > >> -I/usr/local/cuda/include/cus\ > >> p/include -I/media/public/MPICH3/mpich-install/include > >> > >> > >> mpicc -show: gcc -I/media/public/MPICH3/mpich-install/include > >> -L/media/public/MPICH3/mpich-install/lib -lmpich -lopa -lmpl -lrt > >> -lpthread > >> > >> Using CUDA compile: /usr/local/cuda/bin/nvcc -O -arch=sm_20 -c > >> --compiler-options=-Wall -Wwrite-strings -Wno-strict-aliasing > >> -Wno-unknown-pragmas -O3 -I/home/leishi/work/development/3\ > >> rd_party/petsc/petsc/include > >> > >> -I/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/include > >> -I/usr/local/cuda/include -I/usr/local/cuda/include/cusp/ > >> -I/usr/local/cuda\ > >> /include/cusp/include -I/media/public/MPICH3/mpich-install/include > >> > >> > >> ----------------------------------------- > >> > >> > >> Using C/C++ linker: /media/public/MPICH3/mpich-install/bin/mpicc > >> > >> > >> Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing > >> -Wno-unknown-pragmas -O3 > >> > >> ----------------------------------------- > >> > >> > >> Using libraries: > >> > >> -L/home/leishi/work/development/3rd_party/petsc/petsc/arch-cuda5-cg-opt/lib > >> -lpetsc -llapack -lblas -lX11 -lpthread -Wl,-rpath,/usr/local/cuda/lib64 > >> -L/usr/local/cuda\ > >> /lib64 -lcufft -lcublas -lcudart -lcusparse -lm > >> -L/media/public/MPICH3/mpich-install/lib > >> -L/usr/lib/gcc/x86_64-linux-gnu/4.6 -L/usr/lib/x86_64-linux-gnu > >> -L/lib/x86_64-linux-gnu -L/usr/\ > >> lib/gcc/x86_64-linux-gnu/4.8 -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl > >> -lrt -lpthread -lgcc_s -ldl > >> > >> ------------------------------------------ > >> > >> > >> Using mpiexec: /media/public/MPICH3/mpich-install/bin/mpiexec > >> > >> > >> ========================================== > >> > >> > >> Building PETSc using GNU Make with 18 build threads > >> > >> > >> ========================================== > >> > >> > >> make[2]: Entering directory > >> `/home/leishi/work/development/3rd_party/petsc/petsc' > >> > >> > >> Use "/usr/bin/make V=1" to see the verbose compile lines. 
> >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/arch.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/fhost.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/fuser.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/memc.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/mpiu.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/psleep.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sortd.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sorti.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/str.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sortip.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/pbarrier.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/pdisplay.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/ctable.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/psplit.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/select.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/mpimesg.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/sseenabled.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/utils/mpitr.o > >> > >> > >> > >> In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, > >> > >> > >> from /usr/local/cuda/include/cusp/complex.h:63, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: > >> > >> > >> /usr/local/cuda/include/thrust/version.h:69:1: error: unknown type name > >> ?namespace? > >> > >> /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, > >> ?;?, ?asm? or ?__attribute__? before ?{? token > >> > >> In file included from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petsc-private/petscimpl.h:8, > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/options.c:14: > >> > >> > >> /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No > >> such > >> file or directory > >> > >> compilation terminated. > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/objects/state.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/objects/aoptions.o > >> > >> > >> CC arch-cuda5-cg-opt/obj/src/sys/objects/subcomm.o > >> > >> > >> In file included from /usr/local/cuda/include/cusp/detail/config.h:24:0, > >> > >> > >> from /usr/local/cuda/include/cusp/complex.h:63, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: > >> > >> > >> /usr/local/cuda/include/thrust/version.h:69:1: error:* unknown type name > >> ?namespace? * > >> > >> /usr/local/cuda/include/thrust/version.h:70:1: error: expected ?=?, ?,?, > >> ?;?, ?asm? or ?__attribute__? before ?{? 
token > >> > >> In file included from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscmath.h:145:0, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/include/petscsys.h:366, > >> > >> > >> from > >> > >> /home/leishi/work/development/3rd_party/petsc/petsc/src/sys/objects/init.c:10: > >> > >> > >> /usr/local/cuda/include/cusp/complex.h:70:19: fatal error: complex: No > >> such > >> file or directory > >> > >> > >> > >> make: *** [all] Error 1 > >> > >> Sincerely Yours, > >> > >> Lei Shi > >> --------- > >> -------------- next part -------------- > >> An HTML attachment was scrubbed... > >> URL: < > >> http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20140411/4831a810/attachment.html > >> > > >> > >> ------------------------------ > >> > >> _______________________________________________ > >> petsc-users mailing list > >> petsc-users at mcs.anl.gov > >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > >> > >> > >> End of petsc-users Digest, Vol 64, Issue 26 > >> ******************************************* > >> > >> ---------------------------- > >> Confidencialidad: > >> Este mensaje y sus ficheros adjuntos se dirige exclusivamente a su > >> destinatario y puede contener informaci?n privilegiada o confidencial. Si > >> no es vd. el destinatario indicado, queda notificado de que la utilizaci?n, > >> divulgaci?n y/o copia sin autorizaci?n est? prohibida en virtud de la > >> legislaci?n vigente. Si ha recibido este mensaje por error, le rogamos que > >> nos lo comunique inmediatamente respondiendo al mensaje y proceda a su > >> destrucci?n. > >> > >> Disclaimer: > >> This message and its attached files is intended exclusively for its > >> recipients and may contain confidential information. If you received this > >> e-mail in error you are hereby notified that any dissemination, copy or > >> disclosure of this communication is strictly prohibited and may be > >> unlawful. In this case, please notify us by a reply and delete this email > >> and its contents immediately. > >> ---------------------------- > >> > >> > > > From jianjun.xiao at kit.edu Tue Apr 15 12:09:29 2014 From: jianjun.xiao at kit.edu (Xiao, Jianjun (IKET)) Date: Tue, 15 Apr 2014 19:09:29 +0200 Subject: [petsc-users] Two DMDAs for conjugate heat transfer In-Reply-To: <630187E5-B795-4AA3-A482-793AA5BDCA2F@mcs.anl.gov> References: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE45@KIT-MSX-07.kit.edu>, <630187E5-B795-4AA3-A482-793AA5BDCA2F@mcs.anl.gov> Message-ID: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE46@KIT-MSX-07.kit.edu> Dear Barry, Yes, I have structured grid. Say, we totally have one million cells (100*100*100). 100,000 cells are solid cells, and the shape of the solid is irregular. The other cells are fluid cells. In principle, we could use one DMDA. And give the property to each cell. It means the code knows which cell is solid and which is fluid. Then the problem of load balancing occurs: if all the solid cells accumulates, say at a corner, the work load will not be balanced. Maybe you have some other good ideas. Thank you JJ ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Tuesday, April 15, 2014 6:30 PM To: Xiao, Jianjun (IKET) Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Two DMDAs for conjugate heat transfer DMDA are for structured grids. That is each DMDA represents a structured grid. Do you have fluid everywhere and solid everywhere or are some cells fluid and some cells solid? 
Barry On Apr 15, 2014, at 11:12 AM, Xiao, Jianjun (IKET) wrote: > Dear developers, > > I am writing a CFD code to simulate the conjugate heat transfer. I would like to use two DMDAs: one is for the fluid cells, and the other one is for the solid cells. > > Here are the questions: > > 1. Is it possible to have two different DMDAs for such a purpose? How the data in these two DMDAs communicate with each other? Are there any similar examples? > > 2. How to deal with the load balancing if DMDA is used? Or it is simply impossible? > > Thank you. > > Best regards > JJ From knepley at gmail.com Tue Apr 15 12:57:38 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Apr 2014 12:57:38 -0500 Subject: [petsc-users] Two DMDAs for conjugate heat transfer In-Reply-To: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE46@KIT-MSX-07.kit.edu> References: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE45@KIT-MSX-07.kit.edu> <630187E5-B795-4AA3-A482-793AA5BDCA2F@mcs.anl.gov> <56D054AF2E93E044AC1D2685709D2868D8BBE2EE46@KIT-MSX-07.kit.edu> Message-ID: On Tue, Apr 15, 2014 at 12:09 PM, Xiao, Jianjun (IKET) wrote: > Dear Barry, > > Yes, I have structured grid. > > Say, we totally have one million cells (100*100*100). 100,000 cells are > solid cells, and the shape of the solid is irregular. The other cells are > fluid cells. > > In principle, we could use one DMDA. And give the property to each cell. > It means the code knows which cell is solid and which is fluid. Then the > problem of load balancing occurs: if all the solid cells accumulates, say > at a corner, the work load will not be balanced. > Why would it not be balanced? You divide cells equally, and compute whatever equations you need on each cell. Matt > Maybe you have some other good ideas. Thank you > > JJ > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Tuesday, April 15, 2014 6:30 PM > To: Xiao, Jianjun (IKET) > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Two DMDAs for conjugate heat transfer > > DMDA are for structured grids. That is each DMDA represents a > structured grid. Do you have fluid everywhere and solid everywhere or are > some cells fluid and some cells solid? > > Barry > > > > On Apr 15, 2014, at 11:12 AM, Xiao, Jianjun (IKET) > wrote: > > > Dear developers, > > > > I am writing a CFD code to simulate the conjugate heat transfer. I would > like to use two DMDAs: one is for the fluid cells, and the other one is for > the solid cells. > > > > Here are the questions: > > > > 1. Is it possible to have two different DMDAs for such a purpose? How > the data in these two DMDAs communicate with each other? Are there any > similar examples? > > > > 2. How to deal with the load balancing if DMDA is used? Or it is simply > impossible? > > > > Thank you. > > > > Best regards > > JJ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Tue Apr 15 13:08:08 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 15 Apr 2014 13:08:08 -0500 Subject: [petsc-users] Two DMDAs for conjugate heat transfer In-Reply-To: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE46@KIT-MSX-07.kit.edu> References: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE45@KIT-MSX-07.kit.edu>, <630187E5-B795-4AA3-A482-793AA5BDCA2F@mcs.anl.gov> <56D054AF2E93E044AC1D2685709D2868D8BBE2EE46@KIT-MSX-07.kit.edu> Message-ID: <1C4A5D57-E469-4020-AE9D-13413989DA6E@mcs.anl.gov>
On Apr 15, 2014, at 12:09 PM, Xiao, Jianjun (IKET) wrote: > Dear Barry, > > Yes, I have structured grid. > > Say, we totally have one million cells (100*100*100). 100,000 cells are solid cells, and the shape of the solid is irregular. The other cells are fluid cells. > > In principle, we could use one DMDA. And give the property to each cell. It means the code knows which cell is solid and which is fluid. Then the problem of load balancing occurs: if all the solid cells accumulates, say at a corner, the work load will not be balanced. > > Maybe you have some other good ideas. Thank you
My recommendation in this case is to use a regular grid (since you want to) but use an unstructured representation using DMPlex. This way the cells can be partitioned out among processes for load balancing anyway you want. DMDA (and most "structured grid" representation codes) don't have support for dynamic "odd" partitioning for load balance since they wish to partition the regular grid into nice smaller regular grids.
Barry
> > JJ > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Tuesday, April 15, 2014 6:30 PM > To: Xiao, Jianjun (IKET) > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Two DMDAs for conjugate heat transfer > > DMDA are for structured grids. That is each DMDA represents a structured grid. Do you have fluid everywhere and solid everywhere or are some cells fluid and some cells solid? > > Barry > > > > On Apr 15, 2014, at 11:12 AM, Xiao, Jianjun (IKET) wrote: > >> Dear developers, >> >> I am writing a CFD code to simulate the conjugate heat transfer. I would like to use two DMDAs: one is for the fluid cells, and the other one is for the solid cells. >> >> Here are the questions: >> >> 1. Is it possible to have two different DMDAs for such a purpose? How the data in these two DMDAs communicate with each other? Are there any similar examples? >> >> 2. How to deal with the load balancing if DMDA is used? Or it is simply impossible? >> >> Thank you. >> >> Best regards >> JJ >
From bsmith at mcs.anl.gov Tue Apr 15 17:27:45 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 15 Apr 2014 17:27:45 -0500 Subject: [petsc-users] KSP in OpenMP Parallel For Loop In-Reply-To: <1494336768.3007613.1397538376784.JavaMail.zimbra@stanford.edu> References: <510006714.13321195.1396718737252.JavaMail.zimbra@stanford.edu> <112540665.4715288.1396989594593.JavaMail.zimbra@stanford.edu> <3B7B3DA0-3C7D-4674-8083-BD4A9292A3A6@mcs.anl.gov> <1494336768.3007613.1397538376784.JavaMail.zimbra@stanford.edu> Message-ID: <73950928-98AA-416F-98DD-624C92BA463D@mcs.anl.gov>
David, After wrestling with several problems on this I realized that this branch only works if one is "lucky" in what they do. PETSc was originally written without regard to threads at all and in this branch I tried to "fix" all the cases where we accessed global variables so each thread could create and destroy its own objects.
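(For readers of the archive, a minimal sketch of the usage pattern this thread is about: every iteration of an OpenMP parallel loop creates, solves and destroys its own PETSC_COMM_SELF objects. It is only an illustration of the pattern, not a claim that it runs safely on any particular branch; the matrix, problem size and options are dummies, and it assumes the petsc-3.4 KSPSetOperators signature with the MatStructure flag.)

#include <petscksp.h>

void solve_many_small_systems(PetscInt nsystems)
{
  PetscInt s;
#pragma omp parallel for
  for (s = 0; s < nsystems; s++) {
    Mat A; Vec x,b; KSP ksp; PetscInt i,n = 100;
    /* tridiagonal test matrix, one per loop iteration, on PETSC_COMM_SELF */
    MatCreateSeqAIJ(PETSC_COMM_SELF,n,n,3,NULL,&A);
    for (i = 0; i < n; i++) {
      if (i > 0)   MatSetValue(A,i,i-1,-1.0,INSERT_VALUES);
      if (i < n-1) MatSetValue(A,i,i+1,-1.0,INSERT_VALUES);
      MatSetValue(A,i,i,2.0,INSERT_VALUES);
    }
    MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
    VecCreateSeq(PETSC_COMM_SELF,n,&b); VecDuplicate(b,&x); VecSet(b,1.0);
    KSPCreate(PETSC_COMM_SELF,&ksp);
    KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);  /* petsc-3.4 signature */
    KSPSetFromOptions(ksp);
    KSPSolve(ksp,b,x);
    KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
  }
}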
I did this in a way independent of the threading model you used (OpenMP or pthreads) by simply removing use of global variables, not by introducing model specific locks in PETSc code. Unfortunately you've stumbled across a situation that does not involve global variables but instead variables stored inside MPI attributes that different threads are changing (and since they are not protected by locks) end up with incorrect values which later causes crashes. Essentially the variables in the MPI attributes are like global variables because all the threads have access to them. To fix this would require putting a bunch of locks in PETSc code that deals with the MPI attributes. I'm sorry but this won't happen soon. The only good news is that within a month we have a new guy revisiting PETSc's own threading code (which focuses on several threads per object) and maybe between him and Jed they can expand that approach to handle the needs of "different objects for different threads" that you have. I wish we had such support currently but it greatly increases the complexity of the PETSc code so has to be done well or we'll be totally screwed.
Barry
On Apr 15, 2014, at 12:06 AM, D H wrote: > Dear Barry, > > Hope you are having a good week so far! > > I am still working on trying to get OpenMP parallel for loops to play nicely with PETSc's KSP functions. > > I modified the KSP ex1 that comes with PETSc to repeat its work 1000 times inside an omp parallel for loop, and sure enough, I was able to get the same errors as I get with my own program. I've attached the source code for my modified ex1 if you have any time to take a look and see if this works on your machine. Once in a while, this modified example will run all the way through without errors, but 9 times out of 10 it crashes somewhere through the 1000 iterations, at least on my machine. In order to avoid compiler errors I was getting from openmp, I overrode the definitions of CHKERRQ and SETERRQ near the top of the file... this of course could have goofed things up, but I was just trying to make something quick and dirty that reproduced the errors I was getting from petsc. > > If it makes a difference (perhaps there is something amiss here), my configuration for this petsc branch is > PETSC_ARCH=linux-gnu PETSC_DIR=/home/dabh/petsc --with-clanguage=cxx --download-hypre=1 --download-f-blas-lapack=1 --download-mpich=1 --with-debugging=0 COPTFLAGS="-O2 -march=native" CXXOPTFLAGS="-O2 -march=native" FOPTFLAGS="-O2 -march=native" --with-log=0 > > and the full error message I get from petsc is > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Corrupt argument: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind! > [0]PETSC ERROR: MPI_Comm does not have tag/name counter nor does it have inner MPI_Comm! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development GIT revision: 67670ee0cfa93f0c400c9bf0001548cd51aae596 GIT Date: 2013-11-16 17:43:36 -0600 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: ./petsc_ex12 on a linux-gnu named dabh by dabh Mon Apr 14 21:49:32 2014 > [0]PETSC ERROR: Libraries linked from /home/dabh/petsc/linux-gnu/lib > [0]PETSC ERROR: Configure run at Tue Apr 8 17:53:42 2014 > [0]PETSC ERROR: Configure options PETSC_ARCH=linux-gnu PETSC_DIR=/home/dabh/petsc --with-clanguage=cxx --download-hypre=1 --download-f-blas-lapack=1 --download-mpich=1 --with-debugging=0 COPTFLAGS="-O2 -march=native" CXXOPTFLAGS="-O2 -march=native" FOPTFLAGS="-O2 -march=native" --with-log=0 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscCommDestroy() line 234 in src/sys/objects/tagm.c > [0]PETSC ERROR: PetscHeaderDestroy_Private() line 118 in src/sys/objects/inherit.c > [0]PETSC ERROR: PetscViewerDestroy() line 107 in src/sys/classes/viewer/interface/view.c > [0]PETSC ERROR: PetscObjectDestroy() line 73 in src/sys/objects/destroy.c > [0]PETSC ERROR: PetscObjectRegisterDestroyAll() line 252 in src/sys/objects/destroy.c > [0]PETSC ERROR: PetscFinalize() line 1146 in src/sys/objects/pinit.c > > This is happening on a machine running Fedora 20 (64-bit), compiling things with g++. > > If you have any further insights into how to get petsc working properly with openmp, like with this example program, I would be most grateful for your time and suggestions. Thank you again for your help! > > Best, > > David > > ----- Original Message ----- > From: "Barry Smith" > To: "D H" > Sent: Tuesday, April 8, 2014 1:49:27 PM > Subject: Re: [petsc-users] KSP in OpenMP Parallel For Loop > > > On Apr 8, 2014, at 3:39 PM, D H wrote: > >> Dear Barry, >> >> Thanks for your email! I found the barry/make-petscoptionsobject-nonglobal branch and tried playing with it - it sounds like exactly what I need. I wasn't able to get it to compile at first - in src/sys/dll/reg.c, I had to add the following includes at the top: >> #include >> #include >> #include >> #include >> #include >> #include >> #include >> >> in order to get all the "...InitializePackage()" calls resolved. PetscThreadCommWorldInitialize() still wasn't being resolved, and I didn't see it in a header file anywhere (just in threadcomm.c), so I added this after line 44 in petscthreadcomm.h: "PETSC_EXTERN PetscErrorCode PetscThreadCommWorldInitialize(void);". After these changes I could get the branch compiling fine. > > Not sure about this business. >> >> Unfortunately, I'm still getting segfaults when I run my code. Things seem to break at one of three lines in my code: >> "ierr = PetscOptionsSetValue("-pc_hypre_boomeramg_relax_type_coarse", "SOR/Jacobi"); CHKERRXX(ierr);" >> "ierr = PCSetType(pc, pc_type); CHKERRXX(ierr);" >> "ierr = KSPSolve(ksp, b, x); CHKERRXX(ierr);" >> (it's usually the first line that causes the problem, though) > > You need to configure PETSc in this branch with --with-debugging=0 --with-log=0 I forgot to tell you that. The extra debugging and logging use global variables and hence must be turned off. > > Barry > >> >> PETSc reports that some memory is being double-freed, although it isn't helpful in pointing out exactly what object is having this memory issue. >> >> If it makes a difference, I'm attempting to use BCGS with the HYPRE BoomerAMG preconditioner. >> >> Do you have any ideas as to might what be going wrong? Is there anywhere else in PETSc where there might be relevant global variables that are causing the crash? 
It is possible that the branch is fine and that there's a problem in my solver class, but my solver class doesn't have any global variables, so I don't think there is a problem there (especially since it works fine in serial). >> >> I really appreciate your time and any additional thoughts you might have. Thanks very much! >> >> Best, >> >> David >> >> ----- Original Message ----- >> From: "Barry Smith" >> To: "D H" >> Cc: petsc-users at mcs.anl.gov >> Sent: Saturday, April 5, 2014 11:34:37 AM >> Subject: Re: [petsc-users] KSP in OpenMP Parallel For Loop >> >> >> There is a branch in the petsc bitbucket repository http://www.mcs.anl.gov/petsc/developers/index.html called barry/make-petscoptionsobject-nonglobal where one can call the PETSc operations in threads without any conflicts. Otherwise it just won?t work. >> >> Barry >> >> On Apr 5, 2014, at 12:25 PM, D H wrote: >> >>> Hi, >>> >>> I have a C++ program where I would like to call some of PETSc's KSP methods (KSPCreate, KSPSolve, etc.) from inside a for loop that has a "#pragma omp parallel for" in front of it. Without this OpenMP pragma, my code runs fine. But when I add in this parallelism, my program segfaults with PETSc reporting some memory corruption errors. >>> >>> I've read online in a few places that PETSc is not thread-safe, but before I give up hope, I thought I would ask to see if anyone has had success working with KSP routines when they are being called simultaneously from multiple threads (or whether such a feat is definitely not possible with PETSc). Thanks very much for your advice! >>> >>> Best, >>> >>> David >> > > From quecat001 at gmail.com Wed Apr 16 09:57:33 2014 From: quecat001 at gmail.com (Que Cat) Date: Wed, 16 Apr 2014 09:57:33 -0500 Subject: [petsc-users] SNES failed with reason -3 Message-ID: Dear Petsc-Users, I call the nonliner solver snes in my code and it said that "SNES Failed! Reason -3". I testes with "-pc_type lu" and it passed. I have checked the hand-coded Jacobian, ant it looks fine. Could you suggest any possible sources of the errors? THank you. Que -------------- next part -------------- An HTML attachment was scrubbed... URL: From prbrune at gmail.com Wed Apr 16 10:05:00 2014 From: prbrune at gmail.com (Peter Brune) Date: Wed, 16 Apr 2014 10:05:00 -0500 Subject: [petsc-users] SNES failed with reason -3 In-Reply-To: References: Message-ID: -3 is a linear solver failure. You can see the whole list of convergence/divergence reasons under SNESConvergedReason in $PETSC_DIR/include/petscsnes.h Anyhow, what kind of output do you get when running with -ksp_monitor and -ksp_converged_reason? - Peter On Wed, Apr 16, 2014 at 9:57 AM, Que Cat wrote: > Dear Petsc-Users, > > I call the nonliner solver snes in my code and it said that "SNES Failed! > Reason -3". I testes with "-pc_type lu" and it passed. I have checked the > hand-coded Jacobian, ant it looks fine. Could you suggest any possible > sources of the errors? THank you. > > Que > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Apr 16 10:05:31 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 16 Apr 2014 09:05:31 -0600 Subject: [petsc-users] SNES failed with reason -3 In-Reply-To: References: Message-ID: <8738hdl3gk.fsf@jedbrown.org> Que Cat writes: > Dear Petsc-Users, > > I call the nonliner solver snes in my code and it said that "SNES Failed! > Reason -3". I testes with "-pc_type lu" and it passed. I have checked the > hand-coded Jacobian, ant it looks fine. 
Could you suggest any possible > sources of the errors? Run with -snes_converged_reason -ksp_converged_reason -ksp_monitor_true_residual -snes_view and send the output. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From quecat001 at gmail.com Wed Apr 16 11:38:37 2014 From: quecat001 at gmail.com (Que Cat) Date: Wed, 16 Apr 2014 11:38:37 -0500 Subject: [petsc-users] SNES failed with reason -3 In-Reply-To: <8738hdl3gk.fsf@jedbrown.org> References: <8738hdl3gk.fsf@jedbrown.org> Message-ID: Hi Jed and Peter, Please see the attached file for the output. Thank you for your help. Que On Wed, Apr 16, 2014 at 10:05 AM, Jed Brown wrote: > Que Cat writes: > > > Dear Petsc-Users, > > > > I call the nonliner solver snes in my code and it said that "SNES Failed! > > Reason -3". I testes with "-pc_type lu" and it passed. I have checked the > > hand-coded Jacobian, ant it looks fine. Could you suggest any possible > > sources of the errors? > > Run with -snes_converged_reason -ksp_converged_reason > -ksp_monitor_true_residual -snes_view and send the output. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: convergence.dat Type: application/octet-stream Size: 1252023 bytes Desc: not available URL: From prbrune at gmail.com Wed Apr 16 11:46:17 2014 From: prbrune at gmail.com (Peter Brune) Date: Wed, 16 Apr 2014 11:46:17 -0500 Subject: [petsc-users] SNES failed with reason -3 In-Reply-To: References: <8738hdl3gk.fsf@jedbrown.org> Message-ID: The solver stagnates; you can try to diagnose what's going on using the tips from: http://www.mcs.anl.gov/petsc/documentation/faq.html#kspdiverged - Peter On Wed, Apr 16, 2014 at 11:38 AM, Que Cat wrote: > Hi Jed and Peter, > > Please see the attached file for the output. Thank you for your help. > > Que > > > On Wed, Apr 16, 2014 at 10:05 AM, Jed Brown wrote: > >> Que Cat writes: >> >> > Dear Petsc-Users, >> > >> > I call the nonliner solver snes in my code and it said that "SNES >> Failed! >> > Reason -3". I testes with "-pc_type lu" and it passed. I have checked >> the >> > hand-coded Jacobian, ant it looks fine. Could you suggest any possible >> > sources of the errors? >> >> Run with -snes_converged_reason -ksp_converged_reason >> -ksp_monitor_true_residual -snes_view and send the output. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Apr 16 11:49:07 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 16 Apr 2014 10:49:07 -0600 Subject: [petsc-users] SNES failed with reason -3 In-Reply-To: References: <8738hdl3gk.fsf@jedbrown.org> Message-ID: <87ioq9jk3g.fsf@jedbrown.org> Que Cat writes: > Hi Jed and Peter, > > Please see the attached file for the output. Thank you for your help. The linear solve has stagnated. This is usually due to an ineffective preconditioner. Tips here: http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging What sort of problem are you solving? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From pvsang002 at gmail.com Wed Apr 16 16:48:38 2014 From: pvsang002 at gmail.com (Sang pham van) Date: Wed, 16 Apr 2014 17:48:38 -0400 Subject: [petsc-users] ex42 question In-Reply-To: <87eh1ost3w.fsf@jedbrown.org> References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: Hi Jed, I modified the ex43 code to enforce no-slip BCs on all boundaries. I run the code with volume force (0,-1) and isoviscosity. The expected result is Vx = Vy = 0 everywhere, and linearly decreasing pressure (from to to bottom). In the attached is plot of velocity field and pressure, so there is still a (light) flow in middle of the domain. Do you know why the solution is that, and what should I do to get the expected result? Thank you. Sang On Wed, Mar 26, 2014 at 2:38 PM, Jed Brown wrote: > Sang pham van writes: > > > Hi Dave, > > I guess you are the one contributed the ex42 in KSP's examples. I want to > > modify the example to solve for stokes flow driven by volume force in 3D > > duct. Please help me to understand the code by answering the following > > questions: > > > > 1. Firstly, just for confirmation, the equations you're solving are: > > \nu * \nabla \cdot \nabla U - \nabla P = 0 and > > For variable viscosity, it must be formulated as in the example: > > \nabla\cdot (\nu D U) - \nabla P = 0 > > where D U = (\nabla U + (\nabla U)^T)/2 > > > \nabla \cdot U = 0 > > > > where U = (Ux,Uy,Uz), \nu is variable viscosity? > > > > 2. Are U and P defined at all nodes? (I googled the Q1Q1 element, it > looks > > like a box element with U and P defined at 8 corners). > > Yes. > > > 3. Are nodes' coordinate defined though the DA coordinates? > > Yes, though they are set to be uniform. > > > 4. How can I enforce noslip BC, and where should I plug in volume force? > > Enforce the Dirichlet condition for the entire node. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex43_result.png Type: image/png Size: 119323 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 16 17:23:07 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 16 Apr 2014 17:23:07 -0500 Subject: [petsc-users] ex42 question In-Reply-To: References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: On Wed, Apr 16, 2014 at 4:48 PM, Sang pham van wrote: > Hi Jed, > > I modified the ex43 code to enforce no-slip BCs on all boundaries. > I run the code with volume force (0,-1) and isoviscosity. The expected > result is Vx = Vy = 0 everywhere, and linearly decreasing pressure (from to > to bottom). > In the attached is plot of velocity field and pressure, so there is still > a (light) flow in middle of the domain. Do you know why the solution is > that, and what should I do to get the expected result? > It sounds like it is due to discretization error. Your incompressibility constraint is not verified element-wise (I think ex43 is penalized Q1-Q1), so you can have some flow here. Refine it and see if it converges toward 0 flow. Matt > Thank you. > > Sang > > > > > On Wed, Mar 26, 2014 at 2:38 PM, Jed Brown wrote: > >> Sang pham van writes: >> >> > Hi Dave, >> > I guess you are the one contributed the ex42 in KSP's examples. I want >> to >> > modify the example to solve for stokes flow driven by volume force in 3D >> > duct. Please help me to understand the code by answering the following >> > questions: >> > >> > 1. 
Firstly, just for confirmation, the equations you're solving are: >> > \nu * \nabla \cdot \nabla U - \nabla P = 0 and >> >> For variable viscosity, it must be formulated as in the example: >> >> \nabla\cdot (\nu D U) - \nabla P = 0 >> >> where D U = (\nabla U + (\nabla U)^T)/2 >> >> > \nabla \cdot U = 0 >> > >> > where U = (Ux,Uy,Uz), \nu is variable viscosity? >> > >> > 2. Are U and P defined at all nodes? (I googled the Q1Q1 element, it >> looks >> > like a box element with U and P defined at 8 corners). >> >> Yes. >> >> > 3. Are nodes' coordinate defined though the DA coordinates? >> >> Yes, though they are set to be uniform. >> >> > 4. How can I enforce noslip BC, and where should I plug in volume >> force? >> >> Enforce the Dirichlet condition for the entire node. >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Apr 16 17:30:48 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 16 Apr 2014 16:30:48 -0600 Subject: [petsc-users] ex42 question In-Reply-To: References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: <87r44whppj.fsf@jedbrown.org> Matthew Knepley writes: > On Wed, Apr 16, 2014 at 4:48 PM, Sang pham van wrote: > >> Hi Jed, >> >> I modified the ex43 code to enforce no-slip BCs on all boundaries. >> I run the code with volume force (0,-1) and isoviscosity. The expected >> result is Vx = Vy = 0 everywhere, and linearly decreasing pressure (from to >> to bottom). >> In the attached is plot of velocity field and pressure, so there is still >> a (light) flow in middle of the domain. Do you know why the solution is >> that, and what should I do to get the expected result? >> > > It sounds like it is due to discretization error. Your incompressibility > constraint is not verified element-wise > (I think ex43 is penalized Q1-Q1), so you can have some flow here. Refine > it and see if it converges toward > 0 flow. I'm not sure I believe these boundary conditions. Try putting no-slip all around the exterior and solve the algebraic problems accurately (e.g., direct solver). Since the constant over the whole domain is in the space, you should be able to verify that the (global) integral of divergence is 0. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From pvsang002 at gmail.com Wed Apr 16 17:38:57 2014 From: pvsang002 at gmail.com (Sang pham van) Date: Wed, 16 Apr 2014 18:38:57 -0400 Subject: [petsc-users] ex42 question In-Reply-To: References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: Yes, It does converge toward 0-flow when I refine the mesh ! In the code, coordinates are set by ierr = DMDASetUniformCoordinates(da_prop,0.0+0.5*dx,1.0-0.5*dx,0.0+0.5*dy,1.0-0.5*dy,0.,0);CHKERRQ(ierr); Does it set coordinate for elements center here? When I want to try non-uniform grid, should I just modify coordinates attached with the DM, or I need more implementation? Many thanks. Sang On Wed, Apr 16, 2014 at 6:23 PM, Matthew Knepley wrote: > On Wed, Apr 16, 2014 at 4:48 PM, Sang pham van wrote: > >> Hi Jed, >> >> I modified the ex43 code to enforce no-slip BCs on all boundaries. >> I run the code with volume force (0,-1) and isoviscosity. 
The expected >> result is Vx = Vy = 0 everywhere, and linearly decreasing pressure (from to >> to bottom). >> In the attached is plot of velocity field and pressure, so there is still >> a (light) flow in middle of the domain. Do you know why the solution is >> that, and what should I do to get the expected result? >> > > It sounds like it is due to discretization error. Your incompressibility > constraint is not verified element-wise > (I think ex43 is penalized Q1-Q1), so you can have some flow here. Refine > it and see if it converges toward > 0 flow. > > Matt > > >> Thank you. >> >> Sang >> >> >> >> >> On Wed, Mar 26, 2014 at 2:38 PM, Jed Brown wrote: >> >>> Sang pham van writes: >>> >>> > Hi Dave, >>> > I guess you are the one contributed the ex42 in KSP's examples. I want >>> to >>> > modify the example to solve for stokes flow driven by volume force in >>> 3D >>> > duct. Please help me to understand the code by answering the following >>> > questions: >>> > >>> > 1. Firstly, just for confirmation, the equations you're solving are: >>> > \nu * \nabla \cdot \nabla U - \nabla P = 0 and >>> >>> For variable viscosity, it must be formulated as in the example: >>> >>> \nabla\cdot (\nu D U) - \nabla P = 0 >>> >>> where D U = (\nabla U + (\nabla U)^T)/2 >>> >>> > \nabla \cdot U = 0 >>> > >>> > where U = (Ux,Uy,Uz), \nu is variable viscosity? >>> > >>> > 2. Are U and P defined at all nodes? (I googled the Q1Q1 element, it >>> looks >>> > like a box element with U and P defined at 8 corners). >>> >>> Yes. >>> >>> > 3. Are nodes' coordinate defined though the DA coordinates? >>> >>> Yes, though they are set to be uniform. >>> >>> > 4. How can I enforce noslip BC, and where should I plug in volume >>> force? >>> >>> Enforce the Dirichlet condition for the entire node. >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Apr 16 17:45:50 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 16 Apr 2014 16:45:50 -0600 Subject: [petsc-users] ex42 question In-Reply-To: References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: <87ob00hp0h.fsf@jedbrown.org> Sang pham van writes: > Yes, It does converge toward 0-flow when I refine the mesh ! > > In the code, coordinates are set by > ierr = > DMDASetUniformCoordinates(da_prop,0.0+0.5*dx,1.0-0.5*dx,0.0+0.5*dy,1.0-0.5*dy,0.,0);CHKERRQ(ierr); > Does it set coordinate for elements center here? > When I want to try non-uniform grid, should I just modify coordinates > attached with the DM, or I need more implementation? You can modify coordinates. It looks like the code does the correct coordinate transformations. (Dave's real code runs on non-affine meshes for science reasons.) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From quecat001 at gmail.com Wed Apr 16 20:00:54 2014 From: quecat001 at gmail.com (Que Cat) Date: Wed, 16 Apr 2014 20:00:54 -0500 Subject: [petsc-users] SNES failed with reason -3 In-Reply-To: <87ioq9jk3g.fsf@jedbrown.org> References: <8738hdl3gk.fsf@jedbrown.org> <87ioq9jk3g.fsf@jedbrown.org> Message-ID: Hi Peter and Jed, Thank you very much for your comments. The error associated with the problem state equation. Now, it works fine. 
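(Returning to the non-uniform grid question a few messages up: below is a rough sketch of what "modify coordinates attached with the DM" could look like in 2-D, after DMDASetUniformCoordinates() has been called. It assumes the petsc-3.4-era DMGetCoordinateDM/DMGetCoordinates/DMSetCoordinates interface; the quadratic stretching is a made-up example and error checking is omitted.)

#include <petscdmda.h>

PetscErrorCode StretchCoordinates2d(DM dm)
{
  DM           cda;
  Vec          coords;
  DMDACoor2d **c;
  PetscInt     i,j,xs,ys,xm,ym;

  DMGetCoordinateDM(dm,&cda);
  DMGetCoordinates(dm,&coords);          /* global coordinate vector set earlier */
  DMDAVecGetArray(cda,coords,&c);
  DMDAGetCorners(cda,&xs,&ys,NULL,&xm,&ym,NULL);
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      c[j][i].x = c[j][i].x*c[j][i].x;   /* cluster points toward x=0 on [0,1] */
    }
  }
  DMDAVecRestoreArray(cda,coords,&c);
  DMSetCoordinates(dm,coords);           /* so ghosted/local coordinates get rebuilt */
  return 0;
}

The replies above indicate that the example's assembly reads the coordinates, so updating them should be sufficient; a code that hard-wires uniform spacing would need further changes.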
Que On Wed, Apr 16, 2014 at 11:49 AM, Jed Brown wrote: > Que Cat writes: > > > Hi Jed and Peter, > > > > Please see the attached file for the output. Thank you for your help. > > The linear solve has stagnated. This is usually due to an ineffective > preconditioner. Tips here: > > > http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging > > What sort of problem are you solving? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 16 20:11:51 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 16 Apr 2014 20:11:51 -0500 Subject: [petsc-users] ex42 question In-Reply-To: References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: On Wed, Apr 16, 2014 at 5:38 PM, Sang pham van wrote: > Yes, It does converge toward 0-flow when I refine the mesh ! > > In the code, coordinates are set by > ierr = > DMDASetUniformCoordinates(da_prop,0.0+0.5*dx,1.0-0.5*dx,0.0+0.5*dy,1.0-0.5*dy,0.,0);CHKERRQ(ierr); > Does it set coordinate for elements center here? > When I want to try non-uniform grid, should I just modify coordinates > attached with the DM, or I need more implementation? > Yes, I think that should be enough. Matt > Many thanks. > Sang > > > > On Wed, Apr 16, 2014 at 6:23 PM, Matthew Knepley wrote: > >> On Wed, Apr 16, 2014 at 4:48 PM, Sang pham van wrote: >> >>> Hi Jed, >>> >>> I modified the ex43 code to enforce no-slip BCs on all boundaries. >>> I run the code with volume force (0,-1) and isoviscosity. The expected >>> result is Vx = Vy = 0 everywhere, and linearly decreasing pressure (from to >>> to bottom). >>> In the attached is plot of velocity field and pressure, so there is >>> still a (light) flow in middle of the domain. Do you know why the solution >>> is that, and what should I do to get the expected result? >>> >> >> It sounds like it is due to discretization error. Your incompressibility >> constraint is not verified element-wise >> (I think ex43 is penalized Q1-Q1), so you can have some flow here. Refine >> it and see if it converges toward >> 0 flow. >> >> Matt >> >> >>> Thank you. >>> >>> Sang >>> >>> >>> >>> >>> On Wed, Mar 26, 2014 at 2:38 PM, Jed Brown wrote: >>> >>>> Sang pham van writes: >>>> >>>> > Hi Dave, >>>> > I guess you are the one contributed the ex42 in KSP's examples. I >>>> want to >>>> > modify the example to solve for stokes flow driven by volume force in >>>> 3D >>>> > duct. Please help me to understand the code by answering the following >>>> > questions: >>>> > >>>> > 1. Firstly, just for confirmation, the equations you're solving are: >>>> > \nu * \nabla \cdot \nabla U - \nabla P = 0 and >>>> >>>> For variable viscosity, it must be formulated as in the example: >>>> >>>> \nabla\cdot (\nu D U) - \nabla P = 0 >>>> >>>> where D U = (\nabla U + (\nabla U)^T)/2 >>>> >>>> > \nabla \cdot U = 0 >>>> > >>>> > where U = (Ux,Uy,Uz), \nu is variable viscosity? >>>> > >>>> > 2. Are U and P defined at all nodes? (I googled the Q1Q1 element, it >>>> looks >>>> > like a box element with U and P defined at 8 corners). >>>> >>>> Yes. >>>> >>>> > 3. Are nodes' coordinate defined though the DA coordinates? >>>> >>>> Yes, though they are set to be uniform. >>>> >>>> > 4. How can I enforce noslip BC, and where should I plug in volume >>>> force? >>>> >>>> Enforce the Dirichlet condition for the entire node. 
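(A small illustration of "enforce the Dirichlet condition for the entire node": at the algebraic level this usually means zeroing the matrix rows of every velocity dof on a no-slip boundary, putting 1 on the diagonal, and fixing the right-hand side. The sketch below is generic; rows is assumed to have been gathered elsewhere from the DMDA numbering, and MatZeroRowsColumns could be used instead if symmetry of the operator must be preserved.)

#include <petscmat.h>

PetscErrorCode ApplyDirichletRows(Mat A,Vec b,Vec x,PetscInt nrows,const PetscInt rows[],PetscScalar g)
{
  PetscInt i;
  /* load the boundary value into the solution vector so the zeroed rows read u = g */
  for (i=0; i<nrows; i++) VecSetValue(x,rows[i],g,INSERT_VALUES);
  VecAssemblyBegin(x); VecAssemblyEnd(x);
  /* zero the rows, set a unit diagonal, and adjust b so that b[rows] = g */
  MatZeroRows(A,nrows,rows,1.0,x,b);
  return 0;
}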
>>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Apr 17 01:16:49 2014 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 17 Apr 2014 08:16:49 +0200 Subject: [petsc-users] ex42 question In-Reply-To: References: <87eh1ost3w.fsf@jedbrown.org> Message-ID: Hi Sang, The only assumptions made in ex42 are about the mesh topoology, e.g. it assumes the mesh can be defined via an IJK type representation. Assembly of the weak form makes no assumptions about elements being affine, or elements having uniform spacing. The quadrature used is also sufficiently accurate that you can deform the mesh and the weak form will be correctly integrated. I would caution you about using this type of element Q1Q1 for any science applications which are driven purely by buoyancy variations (e.g. dirichlet and neumann boundary conditions are of zero value). This element uses Bochev stabilization and exhibits an error in div(u) which is O(h). The constant associated with this error can be huge. Also, the constant is very much dependent on the type of boundary conditions you use. Actually, one of the tests which shows how careful you should be is similar to what you tried. Apply u_i n_i=0 on all laterial walls and the base. Apply tau_{ij} t_j = 0 (no normal flow, zero shear stress on the walls) on the lateral sides and base. Set \sigma_{ij} = 0 on the surface. e.g. you want to model flow in a cup of water - the solution should be exactly zero. As Matt pointed out, this element doesn't have element wise conservative properties. Run this experiment, and you will see the fluid "compacts" at the base the domain. Increase the density of the fluid and you will see the "compaction" increases in size. Add a jump horizontal layering in the viscosity and you will see "compaction" at the interface. All these compaction like flow features are local and completely associated with poor enforcement of div(u). If you refine the mesh, the div(u) error decreases and the flow field approaches something incompressible. I strongly encourage you to always monitor div(u) in every element and consider whether this value is appropriate given your science application. If its not appropriate, you need to refine the mesh. I gave up with this element as to obtain meaningful solution I determined I would need 1500^3 elements.... Cheers, Dave On 17 April 2014 03:11, Matthew Knepley wrote: > On Wed, Apr 16, 2014 at 5:38 PM, Sang pham van wrote: > >> Yes, It does converge toward 0-flow when I refine the mesh ! >> >> In the code, coordinates are set by >> ierr = >> DMDASetUniformCoordinates(da_prop,0.0+0.5*dx,1.0-0.5*dx,0.0+0.5*dy,1.0-0.5*dy,0.,0);CHKERRQ(ierr); >> Does it set coordinate for elements center here? >> When I want to try non-uniform grid, should I just modify coordinates >> attached with the DM, or I need more implementation? >> > > Yes, I think that should be enough. > > Matt > > >> Many thanks. 
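(Since "monitor div(u) in every element" is easy to act on, here is a crude sketch of an element-averaged divergence check for nodal Q1 velocities on a uniform 2-D grid. The array layout, ghosting and spacings are assumptions, and no claim is made that this matches the internal data structures of ex42/ex43.)

#include <petscdmda.h>

/* element average of du/dx + dv/dy over each cell (i,j)-(i+1,j+1);
   u,v are ghosted nodal arrays, hx,hy the (uniform) cell sizes */
PetscReal MaxCellDivergence(PetscInt xs,PetscInt ys,PetscInt xm,PetscInt ym,
                            PetscScalar **u,PetscScalar **v,PetscReal hx,PetscReal hy)
{
  PetscInt  i,j;
  PetscReal maxdiv = 0.0;
  for (j=ys; j<ys+ym-1; j++) {
    for (i=xs; i<xs+xm-1; i++) {
      PetscScalar dudx = ((u[j][i+1]+u[j+1][i+1]) - (u[j][i]+u[j+1][i]))/(2.0*hx);
      PetscScalar dvdy = ((v[j+1][i]+v[j+1][i+1]) - (v[j][i]+v[j][i+1]))/(2.0*hy);
      maxdiv = PetscMax(maxdiv,PetscAbsScalar(dudx+dvdy));
    }
  }
  return maxdiv;  /* reduce with MPI_Allreduce(...,MPI_MAX,...) when run on several processes */
}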
>> Sang >> >> >> >> On Wed, Apr 16, 2014 at 6:23 PM, Matthew Knepley wrote: >> >>> On Wed, Apr 16, 2014 at 4:48 PM, Sang pham van wrote: >>> >>>> Hi Jed, >>>> >>>> I modified the ex43 code to enforce no-slip BCs on all boundaries. >>>> I run the code with volume force (0,-1) and isoviscosity. The expected >>>> result is Vx = Vy = 0 everywhere, and linearly decreasing pressure (from to >>>> to bottom). >>>> In the attached is plot of velocity field and pressure, so there is >>>> still a (light) flow in middle of the domain. Do you know why the solution >>>> is that, and what should I do to get the expected result? >>>> >>> >>> It sounds like it is due to discretization error. Your incompressibility >>> constraint is not verified element-wise >>> (I think ex43 is penalized Q1-Q1), so you can have some flow here. >>> Refine it and see if it converges toward >>> 0 flow. >>> >>> Matt >>> >>> >>>> Thank you. >>>> >>>> Sang >>>> >>>> >>>> >>>> >>>> On Wed, Mar 26, 2014 at 2:38 PM, Jed Brown wrote: >>>> >>>>> Sang pham van writes: >>>>> >>>>> > Hi Dave, >>>>> > I guess you are the one contributed the ex42 in KSP's examples. I >>>>> want to >>>>> > modify the example to solve for stokes flow driven by volume force >>>>> in 3D >>>>> > duct. Please help me to understand the code by answering the >>>>> following >>>>> > questions: >>>>> > >>>>> > 1. Firstly, just for confirmation, the equations you're solving are: >>>>> > \nu * \nabla \cdot \nabla U - \nabla P = 0 and >>>>> >>>>> For variable viscosity, it must be formulated as in the example: >>>>> >>>>> \nabla\cdot (\nu D U) - \nabla P = 0 >>>>> >>>>> where D U = (\nabla U + (\nabla U)^T)/2 >>>>> >>>>> > \nabla \cdot U = 0 >>>>> > >>>>> > where U = (Ux,Uy,Uz), \nu is variable viscosity? >>>>> > >>>>> > 2. Are U and P defined at all nodes? (I googled the Q1Q1 element, >>>>> it looks >>>>> > like a box element with U and P defined at 8 corners). >>>>> >>>>> Yes. >>>>> >>>>> > 3. Are nodes' coordinate defined though the DA coordinates? >>>>> >>>>> Yes, though they are set to be uniform. >>>>> >>>>> > 4. How can I enforce noslip BC, and where should I plug in volume >>>>> force? >>>>> >>>>> Enforce the Dirichlet condition for the entire node. >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From norihiro.w at gmail.com Thu Apr 17 09:31:32 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Thu, 17 Apr 2014 16:31:32 +0200 Subject: [petsc-users] SNES test with a nested matrix Message-ID: Hi, I wonder whether "-snes_type test" option can work also with a nested matrix. I got the following error in my program and am not sure if there is a problem in my codes or just PETSc doesn't support it. Testing hand-coded Jacobian, if the ratio is O(1.e-8), the hand-coded Jacobian is probably correct. Run with -snes_test_display to show difference of hand-coded and finite difference Jacobian. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: Mat type nest! 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Libraries linked from /tools/petsc/petsc-3.4.3/linux-gnu-debug/lib [0]PETSC ERROR: Configure run at Wed Jan 15 15:48:37 2014 [0]PETSC ERROR: Configure options PETSC_ARCH=linux-gnu-debug --download-f2cblaslapack=1 -with-debugging=1 --download-superlu_dist --download-hypre=1 --download-ml=1 --download-parmetis --download-metis [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatSetValues() line 1085 in /tools/petsc/petsc-3.4.3/src/mat/interface/matrix.c [0]PETSC ERROR: SNESComputeJacobianDefault() line 121 in /tools/petsc/petsc-3.4.3/src/snes/interface/snesj.c [0]PETSC ERROR: SNESSolve_Test() line 64 in /tools/petsc/petsc-3.4.3/src/snes/impls/test/snestest.c [0]PETSC ERROR: SNESSolve() line 3636 in /tools/petsc/petsc-3.4.3/src/snes/interface/snes.c Thank you in advance, Nori -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Apr 17 09:56:16 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 17 Apr 2014 08:56:16 -0600 Subject: [petsc-users] SNES test with a nested matrix In-Reply-To: References: Message-ID: <87d2ggt373.fsf@jedbrown.org> Norihiro Watanabe writes: > Hi, I wonder whether "-snes_type test" option can work also with a nested > matrix. I got the following error in my program and am not sure if there is > a problem in my codes or just PETSc doesn't support it. I believe -snes_compare_explicit should convert if necessary (it provides similar functionality to -snes_type test). > Testing hand-coded Jacobian, if the ratio is > O(1.e-8), the hand-coded Jacobian is probably correct. > Run with -snes_test_display to show difference > of hand-coded and finite difference Jacobian. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: Mat type nest! We strongly recommend writing your code in a way that is agnostic to matrix format (i.e., assemble using MatGetLocalSubMatrix). Then just use a monolithic format for comparisons like this and switch to MatNest when trying to get the best performance out of a fieldsplit preconditioner. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From jianjun.xiao at kit.edu Thu Apr 17 14:48:53 2014 From: jianjun.xiao at kit.edu (Xiao, Jianjun (IKET)) Date: Thu, 17 Apr 2014 21:48:53 +0200 Subject: [petsc-users] Preconditioner and number of processors Message-ID: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE4D@KIT-MSX-07.kit.edu> Dear developers, I find the words below in PETSc FAQ. "Most commonly, you are using a preconditioner which behaves differently based upon the number of processors, such as Block-Jacobi which is the PETSc default." I am testing the fidelity of my code. I don't want to be disturbed by the different numbers of procs. I know Block-Jacobi may be fast. But currently I don't care the speed. 
I need a preconditioner which behaves exactly the same with various procs. Which preconditioner should I use? Thank you. Best regards JJ From knepley at gmail.com Thu Apr 17 14:57:01 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 17 Apr 2014 14:57:01 -0500 Subject: [petsc-users] Preconditioner and number of processors In-Reply-To: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE4D@KIT-MSX-07.kit.edu> References: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE4D@KIT-MSX-07.kit.edu> Message-ID: On Thu, Apr 17, 2014 at 2:48 PM, Xiao, Jianjun (IKET) wrote: > Dear developers, > > I find the words below in PETSc FAQ. > > "Most commonly, you are using a preconditioner which behaves differently > based upon the number of processors, such as Block-Jacobi which is the > PETSc default." > > I am testing the fidelity of my code. I don't want to be disturbed by the > different numbers of procs. I know Block-Jacobi may be fast. But currently > I don't care the speed. I need a preconditioner which behaves exactly the > same with various procs. Which preconditioner should I use? > Always use a direct solver for fidelity tests, like SuperLU or MUMPS. Matt > Thank you. > > Best regards > JJ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Apr 17 15:12:09 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 17 Apr 2014 14:12:09 -0600 Subject: [petsc-users] Preconditioner and number of processors In-Reply-To: References: <56D054AF2E93E044AC1D2685709D2868D8BBE2EE4D@KIT-MSX-07.kit.edu> Message-ID: <877g6nsokm.fsf@jedbrown.org> Matthew Knepley writes: > Always use a direct solver for fidelity tests, like SuperLU or MUMPS. Direct solvers are good when you don't care about how the solve is done. You can use -pc_type redundant in parallel, for example. If you are interested in the way the solver behaves, consider -pc_type jacobi. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From song.gao2 at mail.mcgill.ca Thu Apr 17 16:17:38 2014 From: song.gao2 at mail.mcgill.ca (Song Gao) Date: Thu, 17 Apr 2014 17:17:38 -0400 Subject: [petsc-users] Question with preconditioned resid norm and true resid norm Message-ID: Hello, I am using KSP framework to solve the problem A \Delta x = b in the matrix free fashion. where A is the matrix free matrix. I have another assembled matrix for preconditioning, but I'm NOT using it. My code is not working so I'm debugging it. I run the code with options -ksp_pc_side left -ksp_max_it 10 -ksp_gmres_restart 30 -ksp_monitor_true_residual -pc_type none -ksp_view I think if pc_type is none, the precondiitoned resid norm should equal to true resid norm (Am I correct?). But this doesn't happen. So maybe it would be helpful if I know how preconditioned resid norm and true resid norm are computed. Website says true resid norm is just b - A \Delta x. But how is preconditioned resid norm computed? Any suggestions? Thank you very much. 
0 KSP preconditioned resid norm 9.619278462343e-03 true resid norm 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 9.619210849854e-03 true resid norm 2.552369536916e+06 ||r(i)||/||b|| 2.653389801437e+08 2 KSP preconditioned resid norm 9.619210847390e-03 true resid norm 2.552458142544e+06 ||r(i)||/||b|| 2.653481913988e+08 3 KSP preconditioned resid norm 9.619210847385e-03 true resid norm 2.552458343191e+06 ||r(i)||/||b|| 2.653482122576e+08 4 KSP preconditioned resid norm 9.619210847385e-03 true resid norm 2.552458344014e+06 ||r(i)||/||b|| 2.653482123432e+08 5 KSP preconditioned resid norm 9.619210847385e-03 true resid norm 2.552458344015e+06 ||r(i)||/||b|| 2.653482123433e+08 ....... Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=100000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: none linear system matrix followed by preconditioner matrix: Matrix Object: 1 MPI processes type: mffd rows=22905, cols=22905 Matrix-free approximation: err=1.49012e-08 (relative error in function evaluation) Using wp compute h routine Does not compute normU Matrix Object: 1 MPI processes type: seqbaij rows=22905, cols=22905, bs=5 total: nonzeros=785525, allocated nonzeros=785525 total number of mallocs used during MatSetValues calls =0 block size is 5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 17 16:22:59 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 17 Apr 2014 16:22:59 -0500 Subject: [petsc-users] Question with preconditioned resid norm and true resid norm In-Reply-To: References: Message-ID: On Thu, Apr 17, 2014 at 4:17 PM, Song Gao wrote: > Hello, > > I am using KSP framework to solve the problem A \Delta x = b in the > matrix free fashion. where A is the matrix free matrix. I have another > assembled matrix for preconditioning, but I'm NOT using it. My code is not > working so I'm debugging it. > I run the code with options -ksp_pc_side left -ksp_max_it 10 > -ksp_gmres_restart 30 -ksp_monitor_true_residual -pc_type none -ksp_view > > I think if pc_type is none, the precondiitoned resid norm should equal to > true resid norm (Am I correct?). But this doesn't happen. So maybe it would > be helpful if I know how preconditioned resid norm and true resid norm are > computed. > > Website says true resid norm is just b - A \Delta x. But how is > preconditioned resid norm computed? Any suggestions? Thank you very much. > The easiest explanation here is that your operator A is not behaving linearly. Instead of MFFD, use the default FD with coloring to create the whole matrix. That should behave as we expect for GMRES (monotonically). If so, its a matter of debugging the function evaluation. 
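(One cheap way to test the "operator is not behaving linearly" hypothesis before switching away from MFFD is to check the matrix-free operator for homogeneity: if A is approximately linear, A(2x) should equal 2 A(x) up to differencing noise. A sketch, with arbitrary vector names and no error checking:)

#include <petscmat.h>

PetscErrorCode CheckOperatorLinearity(Mat A,Vec x)
{
  Vec       y1,y2,x2;
  PetscReal ny,err;

  VecDuplicate(x,&y1); VecDuplicate(x,&y2); VecDuplicate(x,&x2);
  MatMult(A,x,y1);                  /* y1 = A x            */
  VecCopy(x,x2); VecScale(x2,2.0);  /* x2 = 2 x            */
  MatMult(A,x2,y2);                 /* y2 = A (2x)         */
  VecAXPY(y2,-2.0,y1);              /* y2 = A(2x) - 2 A(x) */
  VecNorm(y2,NORM_2,&err);
  VecNorm(y1,NORM_2,&ny);
  PetscPrintf(PETSC_COMM_WORLD,"linearity defect %g (relative %g)\n",(double)err,(double)(err/(ny+1e-30)));
  VecDestroy(&y1); VecDestroy(&y2); VecDestroy(&x2);
  return 0;
}

A large relative defect points at the residual routine that MFFD differences (discontinuities, uninitialized or reused state), not at GMRES itself.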
Matt > 0 KSP preconditioned resid norm 9.619278462343e-03 true resid norm > 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 9.619210849854e-03 true resid norm > 2.552369536916e+06 ||r(i)||/||b|| 2.653389801437e+08 > 2 KSP preconditioned resid norm 9.619210847390e-03 true resid norm > 2.552458142544e+06 ||r(i)||/||b|| 2.653481913988e+08 > 3 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > 2.552458343191e+06 ||r(i)||/||b|| 2.653482122576e+08 > 4 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > 2.552458344014e+06 ||r(i)||/||b|| 2.653482123432e+08 > 5 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > 2.552458344015e+06 ||r(i)||/||b|| 2.653482123433e+08 > ....... > Linear solve did not converge due to DIVERGED_ITS iterations 10 > KSP Object: 1 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10, initial guess is zero > tolerances: relative=1e-06, absolute=1e-50, divergence=100000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: none > linear system matrix followed by preconditioner matrix: > Matrix Object: 1 MPI processes > type: mffd > rows=22905, cols=22905 > Matrix-free approximation: > err=1.49012e-08 (relative error in function evaluation) > Using wp compute h routine > Does not compute normU > Matrix Object: 1 MPI processes > type: seqbaij > rows=22905, cols=22905, bs=5 > total: nonzeros=785525, allocated nonzeros=785525 > total number of mallocs used during MatSetValues calls =0 > block size is 5 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Apr 17 16:28:25 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 17 Apr 2014 15:28:25 -0600 Subject: [petsc-users] Question with preconditioned resid norm and true resid norm In-Reply-To: References: Message-ID: <8738hbsl1i.fsf@jedbrown.org> Song Gao writes: > Hello, > > I am using KSP framework to solve the problem A \Delta x = b in the > matrix free fashion. where A is the matrix free matrix. I have another > assembled matrix for preconditioning, but I'm NOT using it. My code is not > working so I'm debugging it. > I run the code with options -ksp_pc_side left -ksp_max_it 10 > -ksp_gmres_restart 30 -ksp_monitor_true_residual -pc_type none -ksp_view > > I think if pc_type is none, the precondiitoned resid norm should equal to > true resid norm (Am I correct?). But this doesn't happen. So maybe it would > be helpful if I know how preconditioned resid norm and true resid norm are > computed. > > Website says true resid norm is just b - A \Delta x. But how is > preconditioned resid norm computed? Just apply the preconditioner P^{-1} to the residual above. In practice, it is usually computed indirectly via a recurrence in the Krylov method, but they should agree up to rounding error. 
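(For reference, the two columns printed by -ksp_monitor_true_residual can be reproduced by hand after a solve; the sketch below assumes ksp, A, b and x are the user's objects and omits error checking. With -pc_type none the two norms should indeed coincide, which is why the output above points at a problem elsewhere.)

#include <petscksp.h>

PetscErrorCode PrintResidualNorms(KSP ksp,Mat A,Vec b,Vec x)
{
  PC        pc;
  Vec       r,z;
  PetscReal rnorm,prnorm;

  VecDuplicate(b,&r); VecDuplicate(b,&z);
  MatMult(A,x,r);
  VecAYPX(r,-1.0,b);            /* r = b - A x                 */
  VecNorm(r,NORM_2,&rnorm);     /* "true resid norm"           */
  KSPGetPC(ksp,&pc);
  PCApply(pc,r,z);              /* z = P^{-1} r                */
  VecNorm(z,NORM_2,&prnorm);    /* "preconditioned resid norm" */
  PetscPrintf(PETSC_COMM_WORLD,"true %g  preconditioned %g\n",(double)rnorm,(double)prnorm);
  VecDestroy(&r); VecDestroy(&z);
  return 0;
}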
> 0 KSP preconditioned resid norm 9.619278462343e-03 true resid norm > 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 9.619210849854e-03 true resid norm > 2.552369536916e+06 ||r(i)||/||b|| 2.653389801437e+08 > 2 KSP preconditioned resid norm 9.619210847390e-03 true resid norm > 2.552458142544e+06 ||r(i)||/||b|| 2.653481913988e+08 > 3 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > 2.552458343191e+06 ||r(i)||/||b|| 2.653482122576e+08 > 4 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > 2.552458344014e+06 ||r(i)||/||b|| 2.653482123432e+08 > 5 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > 2.552458344015e+06 ||r(i)||/||b|| 2.653482123433e+08 Try the above with -ksp_norm_type unpreconditioned. The output below claims there is no preconditioner, so the most likely cause is that your operator is nonlinear. With MFFD, I would speculate that it is caused by your nonlinear function being discontinuous, or by reusing some memory without clearing it (thus computing nonsense after the first iteration). > Linear solve did not converge due to DIVERGED_ITS iterations 10 > KSP Object: 1 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10, initial guess is zero > tolerances: relative=1e-06, absolute=1e-50, divergence=100000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: none > linear system matrix followed by preconditioner matrix: > Matrix Object: 1 MPI processes > type: mffd > rows=22905, cols=22905 > Matrix-free approximation: > err=1.49012e-08 (relative error in function evaluation) > Using wp compute h routine > Does not compute normU > Matrix Object: 1 MPI processes > type: seqbaij > rows=22905, cols=22905, bs=5 > total: nonzeros=785525, allocated nonzeros=785525 > total number of mallocs used during MatSetValues calls =0 > block size is 5 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From bsmith at mcs.anl.gov Thu Apr 17 16:28:35 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 17 Apr 2014 16:28:35 -0500 Subject: [petsc-users] Question with preconditioned resid norm and true resid norm In-Reply-To: References: Message-ID: <1E332B45-85E2-4111-8E86-8195842731BA@mcs.anl.gov> There is a bug in your function evaluation. Barry On Apr 17, 2014, at 4:17 PM, Song Gao wrote: > Hello, > > I am using KSP framework to solve the problem A \Delta x = b in the matrix free fashion. where A is the matrix free matrix. I have another assembled matrix for preconditioning, but I'm NOT using it. My code is not working so I'm debugging it. > I run the code with options -ksp_pc_side left -ksp_max_it 10 -ksp_gmres_restart 30 -ksp_monitor_true_residual -pc_type none -ksp_view > > I think if pc_type is none, the precondiitoned resid norm should equal to true resid norm (Am I correct?). But this doesn't happen. So maybe it would be helpful if I know how preconditioned resid norm and true resid norm are computed. > > Website says true resid norm is just b - A \Delta x. But how is preconditioned resid norm computed? Any suggestions? Thank you very much. 
> > > 0 KSP preconditioned resid norm 9.619278462343e-03 true resid norm 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 9.619210849854e-03 true resid norm 2.552369536916e+06 ||r(i)||/||b|| 2.653389801437e+08 > 2 KSP preconditioned resid norm 9.619210847390e-03 true resid norm 2.552458142544e+06 ||r(i)||/||b|| 2.653481913988e+08 > 3 KSP preconditioned resid norm 9.619210847385e-03 true resid norm 2.552458343191e+06 ||r(i)||/||b|| 2.653482122576e+08 > 4 KSP preconditioned resid norm 9.619210847385e-03 true resid norm 2.552458344014e+06 ||r(i)||/||b|| 2.653482123432e+08 > 5 KSP preconditioned resid norm 9.619210847385e-03 true resid norm 2.552458344015e+06 ||r(i)||/||b|| 2.653482123433e+08 > ....... > Linear solve did not converge due to DIVERGED_ITS iterations 10 > KSP Object: 1 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10, initial guess is zero > tolerances: relative=1e-06, absolute=1e-50, divergence=100000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: none > linear system matrix followed by preconditioner matrix: > Matrix Object: 1 MPI processes > type: mffd > rows=22905, cols=22905 > Matrix-free approximation: > err=1.49012e-08 (relative error in function evaluation) > Using wp compute h routine > Does not compute normU > Matrix Object: 1 MPI processes > type: seqbaij > rows=22905, cols=22905, bs=5 > total: nonzeros=785525, allocated nonzeros=785525 > total number of mallocs used during MatSetValues calls =0 > block size is 5 From song.gao2 at mail.mcgill.ca Fri Apr 18 08:50:22 2014 From: song.gao2 at mail.mcgill.ca (Song Gao) Date: Fri, 18 Apr 2014 09:50:22 -0400 Subject: [petsc-users] Question with preconditioned resid norm and true resid norm In-Reply-To: <8738hbsl1i.fsf@jedbrown.org> References: <8738hbsl1i.fsf@jedbrown.org> Message-ID: Yeah. Thank you all. I think there are bugs in my function evaluation. I just wondering if I could guess where the bug might be from the residual history. I would check my function evaluation. Thank you very much. I tried add -ksp_norm_type unpreconditioned, but got an error. run with -ksp_max_it 10 -ksp_gmres_restart 30 -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -ksp_norm_type unpreconditioned -ksp_pc_side right -pc_type none The error is [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: KSP gmres does not support UNPRECONDITIONED with LEFT! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ On Thu, Apr 17, 2014 at 5:28 PM, Jed Brown wrote: > Song Gao writes: > > > Hello, > > > > I am using KSP framework to solve the problem A \Delta x = b in the > > matrix free fashion. where A is the matrix free matrix. I have another > > assembled matrix for preconditioning, but I'm NOT using it. My code is > not > > working so I'm debugging it. 
> > I run the code with options -ksp_pc_side left -ksp_max_it 10 > > -ksp_gmres_restart 30 -ksp_monitor_true_residual -pc_type none -ksp_view > > > > I think if pc_type is none, the precondiitoned resid norm should equal to > > true resid norm (Am I correct?). But this doesn't happen. So maybe it > would > > be helpful if I know how preconditioned resid norm and true resid norm > are > > computed. > > > > Website says true resid norm is just b - A \Delta x. But how is > > preconditioned resid norm computed? > > Just apply the preconditioner P^{-1} to the residual above. In > practice, it is usually computed indirectly via a recurrence in the > Krylov method, but they should agree up to rounding error. > > > 0 KSP preconditioned resid norm 9.619278462343e-03 true resid norm > > 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 > > 1 KSP preconditioned resid norm 9.619210849854e-03 true resid norm > > 2.552369536916e+06 ||r(i)||/||b|| 2.653389801437e+08 > > 2 KSP preconditioned resid norm 9.619210847390e-03 true resid norm > > 2.552458142544e+06 ||r(i)||/||b|| 2.653481913988e+08 > > 3 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > > 2.552458343191e+06 ||r(i)||/||b|| 2.653482122576e+08 > > 4 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > > 2.552458344014e+06 ||r(i)||/||b|| 2.653482123432e+08 > > 5 KSP preconditioned resid norm 9.619210847385e-03 true resid norm > > 2.552458344015e+06 ||r(i)||/||b|| 2.653482123433e+08 > > Try the above with -ksp_norm_type unpreconditioned. The output below > claims there is no preconditioner, so the most likely cause is that your > operator is nonlinear. With MFFD, I would speculate that it is caused > by your nonlinear function being discontinuous, or by reusing some > memory without clearing it (thus computing nonsense after the first > iteration). > > > Linear solve did not converge due to DIVERGED_ITS iterations 10 > > KSP Object: 1 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=1e-06, absolute=1e-50, divergence=100000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: none > > linear system matrix followed by preconditioner matrix: > > Matrix Object: 1 MPI processes > > type: mffd > > rows=22905, cols=22905 > > Matrix-free approximation: > > err=1.49012e-08 (relative error in function evaluation) > > Using wp compute h routine > > Does not compute normU > > Matrix Object: 1 MPI processes > > type: seqbaij > > rows=22905, cols=22905, bs=5 > > total: nonzeros=785525, allocated nonzeros=785525 > > total number of mallocs used during MatSetValues calls =0 > > block size is 5 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Fri Apr 18 09:31:01 2014 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 18 Apr 2014 09:31:01 -0500 Subject: [petsc-users] MUMPS error -13 info In-Reply-To: References: Message-ID: Gong Ding, The reported bug is fixed in petsc-release https://bitbucket.org/petsc/petsc/commits/151787a63101f3b7c1ee9a4abd20e4c3fe8caf18?at=maint Thanks for your contribution! 
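A quick unit check on the MUMPS report handled here (quoted in full below): INFO(1) = -13 is the "a Fortran ALLOCATE failed" return, and INFO(2) carries the size of the failed request, with the convention quoted from the MUMPS manual that a negative INFO(2) stands for |INFO(2)| times one million. Reading the reported 990883164 as bytes, as the report does, the failed allocation is about 990883164/2^20, roughly 945 MB, a believable request; the old PETSc message labelled the same number as megabytes, i.e. nearly a petabyte, which is what made the output look absurd.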
Hong On Sun, Apr 13, 2014 at 2:42 AM, Gong Ding wrote: > Dear Sir, > I see an error: > > Transient compute from 0 ps step 0.2 ps to 3000 ps > t = 0.2 ps, dt = 0.2 ps > -------------------------------------------------------------------------------- > process particle generation....................ok > Gummel electron equation CONVERGED_ATOL, residual 5.41824e-12, its 4 > Gummel hole equation CONVERGED_ATOL, residual 4.6721e-13, its 4 > --------------------- Error Message ------------------------------------ > Fatal Error:Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 990883164 megabytes > at line 721 in /tmp/build/rhel6-64/build.petsc.3.4.4.859b2b9/src/src/mat/impls/aij/mpi/mumps/mumps.c > ------------------------------------------------------------------------ > which reported that MUMPS requires 990883164 megabytes of memory. > The memory requirement is too huge so I takes a look at the reason. > > The code in petsc is listed below: > > if (mumps->id.INFOG(1) < 0) { > if (mumps->id.INFO(1) == -13) > SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory %d megabytes\n",mumps->id.INFO(2)); > } > > However, the mumps user's guide said > > ?13 An error occurred in a Fortran ALLOCATE statement. The size that the package requested is > available in INFO(2). If INFO(2) is negative, then the size that the package requested is obtained > by multiplying the absolute value of INFO(2) by 1 million. > > It is clear that 990883164 megabytes should be 990883164 bytes here. Hope this bug can be fixed. > > Regards, > Gong Ding > > > > From zonexo at gmail.com Fri Apr 18 10:58:02 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Fri, 18 Apr 2014 23:58:02 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> Message-ID: <53514B8A.90901@gmail.com> Hi, I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" However, by re-writing my code, I found out a few things: 1. if I write my code this way: call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) u_array = .... v_array = .... w_array = .... call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) The code runs fine. 2. if I write my code this way: call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error where the subroutine is: subroutine uvw_array_change(u,v,w) real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) u ... v... w ... end subroutine uvw_array_change. The above will give an error at : call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) 3. 
Same as above, except I change the order of the last 3 lines to: call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) So they are now in reversed order. Now it works. 4. Same as 2 or 3, except the subroutine is changed to : subroutine uvw_array_change(u,v,w) real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) u ... v... w ... end subroutine uvw_array_change. The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. Thank you. Yours sincerely, TAY wee-beng On 15/4/2014 8:00 PM, Barry Smith wrote: > Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > > On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: > >> Hi Barry, >> >> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >> >> I have attached my code. >> >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 15/4/2014 2:26 AM, Barry Smith wrote: >>> Please send the code that creates da_w and the declarations of w_array >>> >>> Barry >>> >>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>> >>> wrote: >>> >>> >>>> Hi Barry, >>>> >>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>> >>>> mpirun -n 4 ./a.out -start_in_debugger >>>> >>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>> >>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>> >>>> mpirun -n 4 ./a.out -start_in_debugger >>>> -------------------------------------------------------------------------- >>>> An MPI process has executed an operation involving a call to the >>>> "fork()" system call to create a child process. Open MPI is currently >>>> operating in a condition that could result in memory corruption or >>>> other system errors; your MPI job may hang, crash, or produce silent >>>> data corruption. The use of fork() (or system() or other calls that >>>> create child processes) is strongly discouraged. >>>> >>>> The process that invoked fork was: >>>> >>>> Local host: n12-76 (PID 20235) >>>> MPI_COMM_WORLD rank: 2 >>>> >>>> If you are *absolutely sure* that your application will successfully >>>> and correctly survive a call to fork(), you may disable this warning >>>> by setting the mpi_warn_on_fork MCA parameter to 0. 
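On the index-shifting "trick" in item 4 above: declaring the dummy arguments with explicit shape and a custom lower bound is legal Fortran, but the extents then have to match exactly what DMDAVecGetArrayF90 handed back; for a local (ghosted) vector that is the DMDAGetGhostCorners patch, not the DMDAGetCorners one. The sketch below shows that variant. It is only an illustration of the idea, not code from this thread; the names (da, u_local, u_array, array_change) are placeholders and the usual PETSc Fortran includes are assumed.

      ! Caller side: assumes a 3d DMDA 'da' and a local (ghosted) vector 'u_local'.
      PetscScalar, pointer :: u_array(:,:,:)
      PetscInt gxs,gys,gzs,gxm,gym,gzm
      PetscErrorCode ierr

      call DMDAGetGhostCorners(da,gxs,gys,gzs,gxm,gym,gzm,ierr)
      call DMDAVecGetArrayF90(da,u_local,u_array,ierr)
      call array_change(u_array,gxm,gym,gzm)      ! pass the extents along
      call DMDAVecRestoreArrayF90(da,u_local,u_array,ierr)

      ! Subroutine side: any lower bound (here 1) is fine as long as the
      ! extents are exactly the ghosted extents gxm,gym,gzm; global
      ! (0-based) point (i,j,k) then lives at u(i-gxs+1, j-gys+1, k-gzs+1).
      subroutine array_change(u,nx,ny,nz)
        PetscInt nx,ny,nz
        real(8), intent(inout) :: u(nx,ny,nz)
        u(1,1,1) = 0.0d0      ! first entry of this process's ghosted patch
      end subroutine array_change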
>>>> -------------------------------------------------------------------------- >>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>> >>>> .... >>>> >>>> 1 >>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>> [1]PETSC ERROR: or see >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>> [1]PETSC ERROR: to get more information on the crash. >>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>> [3]PETSC ERROR: or see >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>> [3]PETSC ERROR: to get more information on the crash. >>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>> >>>> ... >>>> Thank you. >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>> >>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>> >>>>> Barry >>>>> >>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>> >>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>> >>>>> >>>>> >>>>> wrote: >>>>> >>>>> >>>>> >>>>>> Hi, >>>>>> >>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. 
>>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> -- >>>>>> Thank you. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> >>>>>> >> >> >> From parsani.matteo at gmail.com Fri Apr 18 11:14:23 2014 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Fri, 18 Apr 2014 12:14:23 -0400 Subject: [petsc-users] libpetsc.so: undefined reference to `_gfortran_transfer_character_write@GFORTRAN_1.4' Message-ID: Dear PETSc Users and Developers, I have compiled successfully PETSc 3.4 with gfortran 4.7.2. However, when I compile my code, during the linking phase I get this error: /ump/fldmd/home/pmatteo/research/workspace/codes/ssdc/deps/petsc/lib/libpetsc.so: undefined reference to `_gfortran_transfer_character_write at GFORTRAN_1.4' /ump/fldmd/home/pmatteo/research/workspace/codes/ssdc/deps/petsc/lib/libpetsc.so: undefined reference to `_gfortran_transfer_integer_write at GFORTRAN_1.4' Any idea? Is it related to the gcc compiler options? Thank you, -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Apr 18 11:21:08 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 18 Apr 2014 11:21:08 -0500 Subject: [petsc-users] libpetsc.so: undefined reference to `_gfortran_transfer_character_write@GFORTRAN_1.4' In-Reply-To: References: Message-ID: detailed logs would have been useful. Mostlikely you have multiple version of libgfortran.so - and the wrong version got linked in. Or you have blas [or something else] compiled with a different version gfortran so its attempting to link with a different libgfortran.so] You can look at the link command, and look at all the .so files in the linkcommand - and do 'ldd libblas.so' etc on all the .so files in the link command to indentify the differences. And after you locate the multiple libgfortran.so files [perhaps with 'locate libgfortran.so'] - you can do the following to indentify the libgfortran library that you should be using. 'nm -Ao libgfortran.so |grep _gfortran_transfer_character_write at GFORTRAN_1.4' Or use --with-shared-libraries=0 and see if this problem goes away.. Satish On Fri, 18 Apr 2014, Matteo Parsani wrote: > Dear PETSc Users and Developers, > I have compiled successfully PETSc 3.4 with gfortran 4.7.2. 
> However, when I compile my code, during the linking phase I get this error: > > /ump/fldmd/home/pmatteo/research/workspace/codes/ssdc/deps/petsc/lib/libpetsc.so: > undefined reference to `_gfortran_transfer_character_write at GFORTRAN_1.4' > /ump/fldmd/home/pmatteo/research/workspace/codes/ssdc/deps/petsc/lib/libpetsc.so: > undefined reference to `_gfortran_transfer_integer_write at GFORTRAN_1.4' > > > Any idea? > Is it related to the gcc compiler options? > > Thank you, > > From salazardetroya at gmail.com Fri Apr 18 13:23:42 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Fri, 18 Apr 2014 13:23:42 -0500 Subject: [petsc-users] Elasticity tensor in ex52 Message-ID: Hello everybody. First, I am taking this example from the petsc-dev version, I am not sure if I should have posted this in another mail-list, if so, my apologies. In this example, for the elasticity case, function g3 is built as: void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], PetscScalar g3[]) { const PetscInt dim = spatialDim; const PetscInt Ncomp = spatialDim; PetscInt compI, d; for (compI = 0; compI < Ncomp; ++compI) { for (d = 0; d < dim; ++d) { g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; } } } Therefore, a fourth-order tensor is represented as a vector. I was checking the indices for different iterator values, and they do not seem to match the vectorization that I have in mind. For a two dimensional case, the indices for which the value is set as 1 are: compI = 0 , d = 0 -----> index = 0 compI = 0 , d = 1 -----> index = 3 compI = 1 , d = 0 -----> index = 12 compI = 1 , d = 1 -----> index = 15 The values for the first and last seem correct to me, but they other two are confusing me. I see that this elasticity tensor (which is the derivative of the gradient by itself in this case) would be a four by four identity matrix in its matrix representation, so the indices in between would be 5 and 10 instead of 3 and 12, if we put one column on top of each other. I guess my question is then, how did you vectorize the fourth order tensor? Thanks in advance Miguel -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 18 14:53:34 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 18 Apr 2014 14:53:34 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <53514B8A.90901@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> Message-ID: <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> Hmm, Interface DMDAVecGetArrayF90 Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) USE_DM_HIDE DM_HIDE da1 VEC_HIDE v PetscScalar,pointer :: d1(:,:,:) PetscErrorCode ierr End Subroutine So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) Note also that the beginning and end indices of the u,v,w, are different for each process see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. Barry On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: > Hi, > > I tried to pinpoint the problem. 
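For the loop-bounds question, the pattern Barry is pointing to in ex11f90.F looks roughly like the sketch below. It is a paraphrase of that example rather than code from this thread: da and xglobal are placeholder names, and the bounds come from DMDAGetCorners for a global vector (DMDAGetGhostCorners for a local one) and start at whatever PETSc returns, not at 1.

      PetscScalar, pointer :: x(:,:,:)
      PetscInt xs,ys,zs,xm,ym,zm,i,j,k
      PetscErrorCode ierr

      call DMDAVecGetArrayF90(da,xglobal,x,ierr)
      call DMDAGetCorners(da,xs,ys,zs,xm,ym,zm,ierr)
      do k=zs,zs+zm-1
        do j=ys,ys+ym-1
          do i=xs,xs+xm-1
            ! i,j,k are PETSc's global (0-based) grid indices
            x(i,j,k) = i + 10*j + 100*k
          enddo
        enddo
      enddo
      call DMDAVecRestoreArrayF90(da,xglobal,x,ierr)

Any 1-based bookkeeping in existing code then has to be shifted by these offsets, which is the "subtract off the 1" that comes up later in this thread.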
I reduced my job size and hence I can run on 1 processor. Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" > > However, by re-writing my code, I found out a few things: > > 1. if I write my code this way: > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > u_array = .... > > v_array = .... > > w_array = .... > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > The code runs fine. > > 2. if I write my code this way: > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error > > where the subroutine is: > > subroutine uvw_array_change(u,v,w) > > real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) > > u ... > v... > w ... > > end subroutine uvw_array_change. > > The above will give an error at : > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > 3. Same as above, except I change the order of the last 3 lines to: > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > So they are now in reversed order. Now it works. > > 4. Same as 2 or 3, except the subroutine is changed to : > > subroutine uvw_array_change(u,v,w) > > real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) > > real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) > > real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) > > u ... > v... > w ... > > end subroutine uvw_array_change. > > The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". > > However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " > > So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. > > Thank you. > > Yours sincerely, > > TAY wee-beng > > On 15/4/2014 8:00 PM, Barry Smith wrote: >> Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> >> >> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >> >>> Hi Barry, >>> >>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >>> >>> I have attached my code. 
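Stepping back to the ex52 elasticity question further up: the flattening implied by the g3_elas loop is that of an Ncomp x Ncomp x dim x dim array stored row-major, i.e. entry (i,j,c,d) sits at flat index ((i*Ncomp + j)*dim + c)*dim + d. For Ncomp = dim = 2 the loop writes the four entries (0,0,0,0), (0,0,1,1), (1,1,0,0) and (1,1,1,1), whose flat indices are 0, 3, 12 and 15, exactly the numbers listed in that message. The expected 0, 5, 10, 15 correspond to a different flattening, the 4x4 matrix with row (i,c) and column (j,d), i.e. index (i*dim + c)*Ncomp*dim + (j*dim + d); both orderings describe the same identity-like tensor, they just arrange the four indices differently. (This is a reading of the quoted loop only, not a statement about the rest of the PETSc FEM code.)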
>>> >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>> Please send the code that creates da_w and the declarations of w_array >>>> >>>> Barry >>>> >>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>> >>>> wrote: >>>> >>>> >>>>> Hi Barry, >>>>> >>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>> >>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>> >>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>> >>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>> >>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>> -------------------------------------------------------------------------- >>>>> An MPI process has executed an operation involving a call to the >>>>> "fork()" system call to create a child process. Open MPI is currently >>>>> operating in a condition that could result in memory corruption or >>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>> data corruption. The use of fork() (or system() or other calls that >>>>> create child processes) is strongly discouraged. >>>>> >>>>> The process that invoked fork was: >>>>> >>>>> Local host: n12-76 (PID 20235) >>>>> MPI_COMM_WORLD rank: 2 >>>>> >>>>> If you are *absolutely sure* that your application will successfully >>>>> and correctly survive a call to fork(), you may disable this warning >>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>> -------------------------------------------------------------------------- >>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>> >>>>> .... >>>>> >>>>> 1 >>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>> [1]PETSC ERROR: or see >>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>> [1]PETSC ERROR: to get more information on the crash. 
>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>> [3]PETSC ERROR: or see >>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>> [3]PETSC ERROR: to get more information on the crash. >>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>> >>>>> ... >>>>> Thank you. >>>>> >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>> >>>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>>> >>>>>> Barry >>>>>> >>>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>>> >>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>> >>>>>> >>>>>> >>>>>> wrote: >>>>>> >>>>>> >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> -- >>>>>>> Thank you. 
>>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> TAY wee-beng >>>>>>> >>>>>>> >>>>>>> >>> >>> >>> > From zonexo at gmail.com Fri Apr 18 21:57:47 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 19 Apr 2014 10:57:47 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> Message-ID: <5351E62B.6060201@gmail.com> On 19/4/2014 3:53 AM, Barry Smith wrote: > Hmm, > > Interface DMDAVecGetArrayF90 > Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) > USE_DM_HIDE > DM_HIDE da1 > VEC_HIDE v > PetscScalar,pointer :: d1(:,:,:) > PetscErrorCode ierr > End Subroutine > > So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? > real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) > > Note also that the beginning and end indices of the u,v,w, are different for each process see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. Hi, In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a ?plain old Fortran array?. If I declare them as pointers, their indices follow the C 0 start convention, is that so? So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1 start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? Thanks. > > Barry > > On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: > >> Hi, >> >> I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" >> >> However, by re-writing my code, I found out a few things: >> >> 1. if I write my code this way: >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> u_array = .... >> >> v_array = .... >> >> w_array = .... >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> The code runs fine. >> >> 2. if I write my code this way: >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >> >> where the subroutine is: >> >> subroutine uvw_array_change(u,v,w) >> >> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >> >> u ... >> v... >> w ... >> >> end subroutine uvw_array_change. >> >> The above will give an error at : >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> 3. 
Same as above, except I change the order of the last 3 lines to: >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> So they are now in reversed order. Now it works. >> >> 4. Same as 2 or 3, except the subroutine is changed to : >> >> subroutine uvw_array_change(u,v,w) >> >> real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >> >> real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >> >> real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >> >> u ... >> v... >> w ... >> >> end subroutine uvw_array_change. >> >> The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". >> >> However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >> >> So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. >> >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 15/4/2014 8:00 PM, Barry Smith wrote: >>> Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>> >>> >>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >>> >>>> Hi Barry, >>>> >>>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >>>> >>>> I have attached my code. >>>> >>>> Thank you >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>> Please send the code that creates da_w and the declarations of w_array >>>>> >>>>> Barry >>>>> >>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>> >>>>> wrote: >>>>> >>>>> >>>>>> Hi Barry, >>>>>> >>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>> >>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>> >>>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>>> >>>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>>> >>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>> -------------------------------------------------------------------------- >>>>>> An MPI process has executed an operation involving a call to the >>>>>> "fork()" system call to create a child process. Open MPI is currently >>>>>> operating in a condition that could result in memory corruption or >>>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>>> data corruption. The use of fork() (or system() or other calls that >>>>>> create child processes) is strongly discouraged. >>>>>> >>>>>> The process that invoked fork was: >>>>>> >>>>>> Local host: n12-76 (PID 20235) >>>>>> MPI_COMM_WORLD rank: 2 >>>>>> >>>>>> If you are *absolutely sure* that your application will successfully >>>>>> and correctly survive a call to fork(), you may disable this warning >>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. 
>>>>>> -------------------------------------------------------------------------- >>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>> >>>>>> .... >>>>>> >>>>>> 1 >>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>> [1]PETSC ERROR: or see >>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>> [1]PETSC ERROR: to get more information on the crash. >>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>> [3]PETSC ERROR: or see >>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>> >>>>>> ... >>>>>> Thank you. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>> >>>>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>>>> >>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>> >>>>>>> >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. 
>>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> -- >>>>>>>> Thank you. >>>>>>>> >>>>>>>> Yours sincerely, >>>>>>>> >>>>>>>> TAY wee-beng >>>>>>>> >>>>>>>> >>>>>>>> >>>> >>>> From bsmith at mcs.anl.gov Fri Apr 18 23:10:25 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 18 Apr 2014 23:10:25 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <5351E62B.6060201@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> Message-ID: On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: > On 19/4/2014 3:53 AM, Barry Smith wrote: >> Hmm, >> >> Interface DMDAVecGetArrayF90 >> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >> USE_DM_HIDE >> DM_HIDE da1 >> VEC_HIDE v >> PetscScalar,pointer :: d1(:,:,:) >> PetscErrorCode ierr >> End Subroutine >> >> So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? >> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >> >> Note also that the beginning and end indices of the u,v,w, are different for each process see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. > Hi, > > In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a ?plain old Fortran array?. > > If I declare them as pointers, their indices follow the C 0 start convention, is that so? Not really. It is that in each process you need to access them from the indices indicated by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t make any difference. > So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1 start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? If you code wishes to access them with indices plus one from the values returned by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors then you need to manually subtract off the 1. Barry > > Thanks. >> >> Barry >> >> On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: >> >>> Hi, >>> >>> I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. 
Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" >>> >>> However, by re-writing my code, I found out a few things: >>> >>> 1. if I write my code this way: >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> u_array = .... >>> >>> v_array = .... >>> >>> w_array = .... >>> >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> The code runs fine. >>> >>> 2. if I write my code this way: >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. >>> >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>> >>> where the subroutine is: >>> >>> subroutine uvw_array_change(u,v,w) >>> >>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>> >>> u ... >>> v... >>> w ... >>> >>> end subroutine uvw_array_change. >>> >>> The above will give an error at : >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> 3. Same as above, except I change the order of the last 3 lines to: >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> So they are now in reversed order. Now it works. >>> >>> 4. Same as 2 or 3, except the subroutine is changed to : >>> >>> subroutine uvw_array_change(u,v,w) >>> >>> real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>> >>> real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>> >>> real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>> >>> u ... >>> v... >>> w ... >>> >>> end subroutine uvw_array_change. >>> >>> The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". >>> >>> However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>> >>> So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. >>> >>> Thank you. >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>> Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>> >>>> >>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >>>> >>>>> Hi Barry, >>>>> >>>>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >>>>> >>>>> I have attached my code. 
>>>>> >>>>> Thank you >>>>> >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>> Please send the code that creates da_w and the declarations of w_array >>>>>> >>>>>> Barry >>>>>> >>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>> >>>>>> wrote: >>>>>> >>>>>> >>>>>>> Hi Barry, >>>>>>> >>>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>>> >>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>> >>>>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>>>> >>>>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>>>> >>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>> -------------------------------------------------------------------------- >>>>>>> An MPI process has executed an operation involving a call to the >>>>>>> "fork()" system call to create a child process. Open MPI is currently >>>>>>> operating in a condition that could result in memory corruption or >>>>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>>>> data corruption. The use of fork() (or system() or other calls that >>>>>>> create child processes) is strongly discouraged. >>>>>>> >>>>>>> The process that invoked fork was: >>>>>>> >>>>>>> Local host: n12-76 (PID 20235) >>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>> >>>>>>> If you are *absolutely sure* that your application will successfully >>>>>>> and correctly survive a call to fork(), you may disable this warning >>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>>>> -------------------------------------------------------------------------- >>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>>> >>>>>>> .... >>>>>>> >>>>>>> 1 >>>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>> [1]PETSC ERROR: or see >>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>> [1]PETSC ERROR: to get more information on the crash. 
>>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>> [3]PETSC ERROR: or see >>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>> >>>>>>> ... >>>>>>> Thank you. >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> TAY wee-beng >>>>>>> >>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>> >>>>>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>>>>> >>>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> -- >>>>>>>>> Thank you. 
>>>>>>>>> >>>>>>>>> Yours sincerely, >>>>>>>>> >>>>>>>>> TAY wee-beng >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>> >>>>> From zonexo at gmail.com Sat Apr 19 00:11:35 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 19 Apr 2014 13:11:35 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> Message-ID: <53520587.6010606@gmail.com> On 19/4/2014 12:10 PM, Barry Smith wrote: > On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: > >> On 19/4/2014 3:53 AM, Barry Smith wrote: >>> Hmm, >>> >>> Interface DMDAVecGetArrayF90 >>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>> USE_DM_HIDE >>> DM_HIDE da1 >>> VEC_HIDE v >>> PetscScalar,pointer :: d1(:,:,:) >>> PetscErrorCode ierr >>> End Subroutine >>> >>> So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? >>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) Hi, So d1 is a pointer, and it's different if I declare it as "plain old Fortran array"? Because I declare it as a Fortran array and it works w/o any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u". But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", "v" and "w", error starts to happen. I wonder why... Also, supposed I call: call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) u_array .... v_array .... etc Now to restore the array, does it matter the sequence they are restored? As in w, then v and u? call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) thanks >>> >>> Note also that the beginning and end indices of the u,v,w, are different for each process see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >> Hi, >> >> In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a ?plain old Fortran array?. >> >> If I declare them as pointers, their indices follow the C 0 start convention, is that so? > Not really. It is that in each process you need to access them from the indices indicated by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t make any difference. > > >> So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1 start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? > If you code wishes to access them with indices plus one from the values returned by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors then you need to manually subtract off the 1. > > Barry > >> Thanks. >>> Barry >>> >>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: >>> >>>> Hi, >>>> >>>> I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. 
Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" >>>> >>>> However, by re-writing my code, I found out a few things: >>>> >>>> 1. if I write my code this way: >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> u_array = .... >>>> >>>> v_array = .... >>>> >>>> w_array = .... >>>> >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> The code runs fine. >>>> >>>> 2. if I write my code this way: >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. >>>> >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>> >>>> where the subroutine is: >>>> >>>> subroutine uvw_array_change(u,v,w) >>>> >>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>> >>>> u ... >>>> v... >>>> w ... >>>> >>>> end subroutine uvw_array_change. >>>> >>>> The above will give an error at : >>>> >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> 3. Same as above, except I change the order of the last 3 lines to: >>>> >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> So they are now in reversed order. Now it works. >>>> >>>> 4. Same as 2 or 3, except the subroutine is changed to : >>>> >>>> subroutine uvw_array_change(u,v,w) >>>> >>>> real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>> >>>> real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>> >>>> real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>> >>>> u ... >>>> v... >>>> w ... >>>> >>>> end subroutine uvw_array_change. >>>> >>>> The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". >>>> >>>> However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>> >>>> So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. >>>> >>>> Thank you. >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>> Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>> >>>>> >>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >>>>> >>>>>> Hi Barry, >>>>>> >>>>>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. 
>>>>>> >>>>>> I have attached my code. >>>>>> >>>>>> Thank you >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>>> Please send the code that creates da_w and the declarations of w_array >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>>> Hi Barry, >>>>>>>> >>>>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>>>> >>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>> >>>>>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>>>>> >>>>>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>>>>> >>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>> -------------------------------------------------------------------------- >>>>>>>> An MPI process has executed an operation involving a call to the >>>>>>>> "fork()" system call to create a child process. Open MPI is currently >>>>>>>> operating in a condition that could result in memory corruption or >>>>>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>>>>> data corruption. The use of fork() (or system() or other calls that >>>>>>>> create child processes) is strongly discouraged. >>>>>>>> >>>>>>>> The process that invoked fork was: >>>>>>>> >>>>>>>> Local host: n12-76 (PID 20235) >>>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>>> >>>>>>>> If you are *absolutely sure* that your application will successfully >>>>>>>> and correctly survive a call to fork(), you may disable this warning >>>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>>>>> -------------------------------------------------------------------------- >>>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>>>> >>>>>>>> .... >>>>>>>> >>>>>>>> 1 >>>>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>>> [1]PETSC ERROR: or see >>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>>> [1]PETSC ERROR: to get more information on the crash. 
>>>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>>> [3]PETSC ERROR: or see >>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>>> >>>>>>>> ... >>>>>>>> Thank you. >>>>>>>> >>>>>>>> Yours sincerely, >>>>>>>> >>>>>>>> TAY wee-beng >>>>>>>> >>>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>>> >>>>>>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>>>>>> >>>>>>>>> Barry >>>>>>>>> >>>>>>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>>>>>> >>>>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> -- >>>>>>>>>> Thank you. 
>>>>>>>>>> >>>>>>>>>> Yours sincerely, >>>>>>>>>> >>>>>>>>>> TAY wee-beng >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>> From bsmith at mcs.anl.gov Sat Apr 19 00:17:27 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 19 Apr 2014 00:17:27 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <53520587.6010606@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> Message-ID: <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: > On 19/4/2014 12:10 PM, Barry Smith wrote: >> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >> >>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>> Hmm, >>>> >>>> Interface DMDAVecGetArrayF90 >>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>> USE_DM_HIDE >>>> DM_HIDE da1 >>>> VEC_HIDE v >>>> PetscScalar,pointer :: d1(:,:,:) >>>> PetscErrorCode ierr >>>> End Subroutine >>>> >>>> So the d1 is an F90 POINTER. But your subroutine seems to be treating it as a 'plain old Fortran array'? >>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) > Hi, > > So d1 is a pointer, and it's different if I declare it as a "plain old Fortran array"? Because I declare it as a Fortran array and it works w/o any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u". > > But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", "v" and "w", errors start to happen. I wonder why... > > Also, suppose I call: > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > u_array .... > > v_array .... etc > > Now, to restore the arrays, does the sequence in which they are restored matter? No, it should not matter. If it matters, that is a sign that memory has been written to incorrectly earlier in the code. > As in w, then v and u? > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > thanks >>>> >>>> Note also that the beginning and end indices of the u,v,w are different for each process; see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>> Hi, >>> >>> In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a 'plain old Fortran array'. >>> >>> If I declare them as pointers, their indices follow the C 0-start convention, is that so? >> Not really. It is that in each process you need to access them from the indices indicated by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn't make any difference. >> >> >>> So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1-start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? >> If your code wishes to access them with indices plus one from the values returned by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors then you need to manually subtract off the 1. >> >> Barry >> >>> Thanks.
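To make Barry's point concrete, a minimal sketch of the access pattern he describes, assuming u_local is a local vector obtained from da_u (so DMDAGetGhostCorners supplies the bounds); the declarations are illustrative, not taken from the poster's code:

   PetscScalar, pointer :: u_array(:,:,:)
   PetscInt :: gxs, gys, gzs, gxm, gym, gzm, i, j, k
   PetscErrorCode :: ierr

   call DMDAGetGhostCorners(da_u, gxs, gys, gzs, gxm, gym, gzm, ierr)
   call DMDAVecGetArrayF90(da_u, u_local, u_array, ierr)
   ! The pointer's bounds on this process are gxs:gxs+gxm-1 (and likewise
   ! in j and k), so loop over exactly that range -- not over 1:gxm.
   do k = gzs, gzs + gzm - 1
      do j = gys, gys + gym - 1
         do i = gxs, gxs + gxm - 1
            u_array(i, j, k) = 0.d0
         end do
      end do
   end do
   call DMDAVecRestoreArrayF90(da_u, u_local, u_array, ierr)

When several arrays are obtained this way, the corresponding restores can be issued in any order, as Barry says; a crash that depends on the order is a symptom of an earlier out-of-bounds write, not of the restore itself.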
>>>>>>>>>>> >>>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>> -- >>>>>>>>>>> Thank you. >>>>>>>>>>> >>>>>>>>>>> Yours sincerely, >>>>>>>>>>> >>>>>>>>>>> TAY wee-beng >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>> From zonexo at gmail.com Sat Apr 19 04:59:04 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 19 Apr 2014 17:59:04 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> Message-ID: <535248E8.2070002@gmail.com> On 19/4/2014 1:17 PM, Barry Smith wrote: > On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: > >> On 19/4/2014 12:10 PM, Barry Smith wrote: >>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>> >>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>> Hmm, >>>>> >>>>> Interface DMDAVecGetArrayF90 >>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>> USE_DM_HIDE >>>>> DM_HIDE da1 >>>>> VEC_HIDE v >>>>> PetscScalar,pointer :: d1(:,:,:) >>>>> PetscErrorCode ierr >>>>> End Subroutine >>>>> >>>>> So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? >>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >> Hi, >> >> So d1 is a pointer, and it's different if I declare it as "plain old Fortran array"? Because I declare it as a Fortran array and it works w/o any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u". >> >> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", "v" and "w", error starts to happen. I wonder why... >> >> Also, supposed I call: >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> u_array .... >> >> v_array .... etc >> >> Now to restore the array, does it matter the sequence they are restored? > No it should not matter. If it matters that is a sign that memory has been written to incorrectly earlier in the code. > Hi, Hmm, I have been getting different results on different intel compilers. I'm not sure if MPI played a part but I'm only using a single processor. In the debug mode, things run without problem. 
In optimized mode, in some cases, the code aborts even doing simple initialization: call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) u_array = 0.d0 v_array = 0.d0 w_array = 0.d0 p_array = 0.d0 call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) The code aborts at call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving segmentation error. But other version of intel compiler passes thru this part w/o error. Since the response is different among different compilers, is this PETSc or intel 's bug? Or mvapich or openmpi? >> As in w, then v and u? >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> thanks >>>>> Note also that the beginning and end indices of the u,v,w, are different for each process see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>>> Hi, >>>> >>>> In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a ?plain old Fortran array?. >>>> >>>> If I declare them as pointers, their indices follow the C 0 start convention, is that so? >>> Not really. It is that in each process you need to access them from the indices indicated by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t make any difference. >>> >>> >>>> So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1 start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? >>> If you code wishes to access them with indices plus one from the values returned by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors then you need to manually subtract off the 1. >>> >>> Barry >>> >>>> Thanks. >>>>> Barry >>>>> >>>>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" >>>>>> >>>>>> However, by re-writing my code, I found out a few things: >>>>>> >>>>>> 1. if I write my code this way: >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> u_array = .... >>>>>> >>>>>> v_array = .... >>>>>> >>>>>> w_array = .... >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> The code runs fine. >>>>>> >>>>>> 2. 
if I write my code this way: >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>>>> >>>>>> where the subroutine is: >>>>>> >>>>>> subroutine uvw_array_change(u,v,w) >>>>>> >>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>> >>>>>> u ... >>>>>> v... >>>>>> w ... >>>>>> >>>>>> end subroutine uvw_array_change. >>>>>> >>>>>> The above will give an error at : >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> 3. Same as above, except I change the order of the last 3 lines to: >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> So they are now in reversed order. Now it works. >>>>>> >>>>>> 4. Same as 2 or 3, except the subroutine is changed to : >>>>>> >>>>>> subroutine uvw_array_change(u,v,w) >>>>>> >>>>>> real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>> >>>>>> real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>> >>>>>> real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>> >>>>>> u ... >>>>>> v... >>>>>> w ... >>>>>> >>>>>> end subroutine uvw_array_change. >>>>>> >>>>>> The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". >>>>>> >>>>>> However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>>>> >>>>>> So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. >>>>>> >>>>>> Thank you. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>>>> Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>>>> >>>>>>> >>>>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >>>>>>> >>>>>>>> Hi Barry, >>>>>>>> >>>>>>>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >>>>>>>> >>>>>>>> I have attached my code. >>>>>>>> >>>>>>>> Thank you >>>>>>>> >>>>>>>> Yours sincerely, >>>>>>>> >>>>>>>> TAY wee-beng >>>>>>>> >>>>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>>>>> Please send the code that creates da_w and the declarations of w_array >>>>>>>>> >>>>>>>>> Barry >>>>>>>>> >>>>>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>>> Hi Barry, >>>>>>>>>> >>>>>>>>>> I'm not too sure how to do it. I'm running mpi. 
So I run: >>>>>>>>>> >>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>> >>>>>>>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>>>>>>> >>>>>>>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>>>>>>> >>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>> An MPI process has executed an operation involving a call to the >>>>>>>>>> "fork()" system call to create a child process. Open MPI is currently >>>>>>>>>> operating in a condition that could result in memory corruption or >>>>>>>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>>>>>>> data corruption. The use of fork() (or system() or other calls that >>>>>>>>>> create child processes) is strongly discouraged. >>>>>>>>>> >>>>>>>>>> The process that invoked fork was: >>>>>>>>>> >>>>>>>>>> Local host: n12-76 (PID 20235) >>>>>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>>>>> >>>>>>>>>> If you are *absolutely sure* that your application will successfully >>>>>>>>>> and correctly survive a call to fork(), you may disable this warning >>>>>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>>>>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>>>>>> >>>>>>>>>> .... >>>>>>>>>> >>>>>>>>>> 1 >>>>>>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>>>>> [1]PETSC ERROR: or see >>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org >>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>>>>> [1]PETSC ERROR: to get more information on the crash. 
>>>>>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>>>>> [3]PETSC ERROR: or see >>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org >>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>>>>> >>>>>>>>>> ... >>>>>>>>>> Thank you. >>>>>>>>>> >>>>>>>>>> Yours sincerely, >>>>>>>>>> >>>>>>>>>> TAY wee-beng >>>>>>>>>> >>>>>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>>>>> >>>>>>>>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>>>>>>>> >>>>>>>>>>> Barry >>>>>>>>>>> >>>>>>>>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>>>>>>>> >>>>>>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>>>>>>> >>>>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>> -- >>>>>>>>>>>> Thank you. 
>>>>>>>>>>>> >>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>> >>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>> From knepley at gmail.com Sat Apr 19 05:48:13 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 19 Apr 2014 05:48:13 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <535248E8.2070002@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> Message-ID: On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng wrote: > On 19/4/2014 1:17 PM, Barry Smith wrote: > >> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: >> >> On 19/4/2014 12:10 PM, Barry Smith wrote: >>> >>>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>>> >>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>> >>>>>> Hmm, >>>>>> >>>>>> Interface DMDAVecGetArrayF90 >>>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>>> USE_DM_HIDE >>>>>> DM_HIDE da1 >>>>>> VEC_HIDE v >>>>>> PetscScalar,pointer :: d1(:,:,:) >>>>>> PetscErrorCode ierr >>>>>> End Subroutine >>>>>> >>>>>> So the d1 is a F90 POINTER. But your subroutine seems to be >>>>>> treating it as a ?plain old Fortran array?? >>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>> >>>>> Hi, >>> >>> So d1 is a pointer, and it's different if I declare it as "plain old >>> Fortran array"? Because I declare it as a Fortran array and it works w/o >>> any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 >>> with "u". >>> >>> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", >>> "v" and "w", error starts to happen. I wonder why... >>> >>> Also, supposed I call: >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> u_array .... >>> >>> v_array .... etc >>> >>> Now to restore the array, does it matter the sequence they are restored? >>> >> No it should not matter. If it matters that is a sign that memory has >> been written to incorrectly earlier in the code. >> >> Hi, > > Hmm, I have been getting different results on different intel compilers. > I'm not sure if MPI played a part but I'm only using a single processor. In > the debug mode, things run without problem. In optimized mode, in some > cases, the code aborts even doing simple initialization: > > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) > > u_array = 0.d0 > > v_array = 0.d0 > > w_array = 0.d0 > > p_array = 0.d0 > > > call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) > > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > The code aborts at call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), > giving segmentation error. But other version of intel compiler passes thru > this part w/o error. Since the response is different among different > compilers, is this PETSc or intel 's bug? Or mvapich or openmpi? We do this is a bunch of examples. Can you reproduce this different behavior in src/dm/examples/tutorials/ex11f90.F? 
Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zonexo at gmail.com Sat Apr 19 09:14:56 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 19 Apr 2014 22:14:56 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> Message-ID: <535284E0.8010901@gmail.com> On 19/4/2014 6:48 PM, Matthew Knepley wrote: > On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng > wrote: > > On 19/4/2014 1:17 PM, Barry Smith wrote: > > On Apr 19, 2014, at 12:11 AM, TAY wee-beng > wrote: > > On 19/4/2014 12:10 PM, Barry Smith wrote: > > On Apr 18, 2014, at 9:57 PM, TAY wee-beng > > wrote: > > On 19/4/2014 3:53 AM, Barry Smith wrote: > > Hmm, > > Interface DMDAVecGetArrayF90 > Subroutine DMDAVecGetArrayF903(da1, > v,d1,ierr) > USE_DM_HIDE > DM_HIDE da1 > VEC_HIDE v > PetscScalar,pointer :: d1(:,:,:) > PetscErrorCode ierr > End Subroutine > > So the d1 is a F90 POINTER. But your > subroutine seems to be treating it as a ?plain > old Fortran array?? > real(8), intent(inout) :: > u(:,:,:),v(:,:,:),w(:,:,:) > > Hi, > > So d1 is a pointer, and it's different if I declare it as > "plain old Fortran array"? Because I declare it as a > Fortran array and it works w/o any problem if I only call > DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u". > > But if I call DMDAVecGetArrayF90 and > DMDAVecRestoreArrayF90 with "u", "v" and "w", error starts > to happen. I wonder why... > > Also, supposed I call: > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > u_array .... > > v_array .... etc > > Now to restore the array, does it matter the sequence they > are restored? > > No it should not matter. If it matters that is a sign that > memory has been written to incorrectly earlier in the code. > > Hi, > > Hmm, I have been getting different results on different intel > compilers. I'm not sure if MPI played a part but I'm only using a > single processor. In the debug mode, things run without problem. > In optimized mode, in some cases, the code aborts even doing > simple initialization: > > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) > > u_array = 0.d0 > > v_array = 0.d0 > > w_array = 0.d0 > > p_array = 0.d0 > > > call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) > > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > The code aborts at call > DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving > segmentation error. But other version of intel compiler passes > thru this part w/o error. Since the response is different among > different compilers, is this PETSc or intel 's bug? Or mvapich or > openmpi? > > > We do this is a bunch of examples. Can you reproduce this different > behavior in src/dm/examples/tutorials/ex11f90.F? Hi Matt, Do you mean putting the above lines into ex11f90.F and test? Thanks Regards. > > Matt > > As in w, then v and u? 
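For anyone following along, what Matt is suggesting is a standalone check: the same four get/zero/restore calls dropped into a small test (ex11f90.F, or a stripped-down copy of it) built against the optimized PETSc, so the crash can be separated from the rest of the application. A sketch of such a block is below, with each return code tested so the first failing call is reported; da_u, u_local and the other names simply stand for whatever DMDAs and local vectors the test creates:

   PetscScalar, pointer :: u_array(:,:,:), v_array(:,:,:)
   PetscScalar, pointer :: w_array(:,:,:), p_array(:,:,:)
   PetscErrorCode :: ierr

   ! Same sequence as in the post above, but with every ierr checked.
   call DMDAVecGetArrayF90(da_u, u_local, u_array, ierr)
   if (ierr /= 0) stop 'get u failed'
   call DMDAVecGetArrayF90(da_v, v_local, v_array, ierr)
   if (ierr /= 0) stop 'get v failed'
   call DMDAVecGetArrayF90(da_w, w_local, w_array, ierr)
   if (ierr /= 0) stop 'get w failed'
   call DMDAVecGetArrayF90(da_p, p_local, p_array, ierr)
   if (ierr /= 0) stop 'get p failed'

   u_array = 0.d0
   v_array = 0.d0
   w_array = 0.d0
   p_array = 0.d0

   call DMDAVecRestoreArrayF90(da_p, p_local, p_array, ierr)
   if (ierr /= 0) stop 'restore p failed'
   call DMDAVecRestoreArrayF90(da_w, w_local, w_array, ierr)
   if (ierr /= 0) stop 'restore w failed'
   call DMDAVecRestoreArrayF90(da_v, v_local, v_array, ierr)
   if (ierr /= 0) stop 'restore v failed'
   call DMDAVecRestoreArrayF90(da_u, u_local, u_array, ierr)
   if (ierr /= 0) stop 'restore u failed'

If this sequence runs cleanly in the small test but still crashes inside the full code, that points back at memory being overwritten elsewhere in the application rather than at the PETSc calls themselves.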
> > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > thanks > > Note also that the beginning and end > indices of the u,v,w, are different for each > process see for example > http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F > (and they do not start at 1). This is how to > get the loop bounds. > > Hi, > > In my case, I fixed the u,v,w such that their > indices are the same. I also checked using > DMDAGetCorners and DMDAGetGhostCorners. Now the > problem lies in my subroutine treating it as a > ?plain old Fortran array?. > > If I declare them as pointers, their indices > follow the C 0 start convention, is that so? > > Not really. It is that in each process you need to > access them from the indices indicated by > DMDAGetCorners() for global vectors and > DMDAGetGhostCorners() for local vectors. So really C > or Fortran doesn?t make any difference. > > > So my problem now is that in my old MPI code, the > u(i,j,k) follow the Fortran 1 start convention. Is > there some way to manipulate such that I do not > have to change my u(i,j,k) to u(i-1,j-1,k-1)? > > If you code wishes to access them with indices plus > one from the values returned by DMDAGetCorners() for > global vectors and DMDAGetGhostCorners() for local > vectors then you need to manually subtract off the 1. > > Barry > > Thanks. > > Barry > > On Apr 18, 2014, at 10:58 AM, TAY wee-beng > > > wrote: > > Hi, > > I tried to pinpoint the problem. I reduced > my job size and hence I can run on 1 > processor. Tried using valgrind but > perhaps I'm using the optimized version, > it didn't catch the error, besides saying > "Segmentation fault (core dumped)" > > However, by re-writing my code, I found > out a few things: > > 1. if I write my code this way: > > call > DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call > DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call > DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > u_array = .... > > v_array = .... > > w_array = .... > > call > DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call > DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call > DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > The code runs fine. > > 2. if I write my code this way: > > call > DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call > DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call > DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > call > uvw_array_change(u_array,v_array,w_array) > -> this subroutine does the same > modification as the above. > > call > DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call > DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call > DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > -> error > > where the subroutine is: > > subroutine uvw_array_change(u,v,w) > > real(8), intent(inout) :: > u(:,:,:),v(:,:,:),w(:,:,:) > > u ... > v... > w ... > > end subroutine uvw_array_change. > > The above will give an error at : > > call > DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > 3. Same as above, except I change the > order of the last 3 lines to: > > call > DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > call > DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call > DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > So they are now in reversed order. Now it > works. > > 4. 
Same as 2 or 3, except the subroutine > is changed to : > > subroutine uvw_array_change(u,v,w) > > real(8), intent(inout) :: > u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) > > real(8), intent(inout) :: > v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) > > real(8), intent(inout) :: > w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) > > u ... > v... > w ... > > end subroutine uvw_array_change. > > The start_indices and end_indices are > simply to shift the 0 indices of C > convention to that of the 1 indices of the > Fortran convention. This is necessary in > my case because most of my codes start > array counting at 1, hence the "trick". > > However, now no matter which order of the > DMDAVecRestoreArrayF90 (as in 2 or 3), > error will occur at "call > DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > " > > So did I violate and cause memory > corruption due to the trick above? But I > can't think of any way other than the > "trick" to continue using the 1 indices > convention. > > Thank you. > > Yours sincerely, > > TAY wee-beng > > On 15/4/2014 8:00 PM, Barry Smith wrote: > > Try running under valgrind > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > > On Apr 14, 2014, at 9:47 PM, TAY > wee-beng > wrote: > > Hi Barry, > > As I mentioned earlier, the code > works fine in PETSc debug mode but > fails in non-debug mode. > > I have attached my code. > > Thank you > > Yours sincerely, > > TAY wee-beng > > On 15/4/2014 2:26 AM, Barry Smith > wrote: > > Please send the code that > creates da_w and the > declarations of w_array > > Barry > > On Apr 14, 2014, at 9:40 AM, > TAY wee-beng > > > wrote: > > > Hi Barry, > > I'm not too sure how to do > it. I'm running mpi. So I run: > > mpirun -n 4 ./a.out > -start_in_debugger > > I got the msg below. > Before the gdb windows > appear (thru x11), the > program aborts. > > Also I tried running in > another cluster and it > worked. Also tried in the > current cluster in debug > mode and it worked too. > > mpirun -n 4 ./a.out > -start_in_debugger > -------------------------------------------------------------------------- > An MPI process has > executed an operation > involving a call to the > "fork()" system call to > create a child process. > Open MPI is currently > operating in a condition > that could result in > memory corruption or > other system errors; your > MPI job may hang, crash, > or produce silent > data corruption. The use > of fork() (or system() or > other calls that > create child processes) is > strongly discouraged. > > The process that invoked > fork was: > > Local host: > n12-76 (PID 20235) > MPI_COMM_WORLD rank: 2 > > If you are *absolutely > sure* that your > application will successfully > and correctly survive a > call to fork(), you may > disable this warning > by setting the > mpi_warn_on_fork MCA > parameter to 0. 
> -------------------------------------------------------------------------- > [2]PETSC ERROR: PETSC: > Attaching gdb to ./a.out > of pid 20235 on display > localhost:50.0 on machine > n12-76 > [0]PETSC ERROR: PETSC: > Attaching gdb to ./a.out > of pid 20233 on display > localhost:50.0 on machine > n12-76 > [1]PETSC ERROR: PETSC: > Attaching gdb to ./a.out > of pid 20234 on display > localhost:50.0 on machine > n12-76 > [3]PETSC ERROR: PETSC: > Attaching gdb to ./a.out > of pid 20236 on display > localhost:50.0 on machine > n12-76 > [n12-76:20232] 3 more > processes have sent help > message > help-mpi-runtime.txt / > mpi_init:warn-fork > [n12-76:20232] Set MCA > parameter > "orte_base_help_aggregate" > to 0 to see all help / > error messages > > .... > > 1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught > signal number 11 SEGV: > Segmentation Violation, > probably memory access out > of range > [1]PETSC ERROR: Try option > -start_in_debugger or > -on_error_attach_debugger > [1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC > ERROR: or try > http://valgrind.org > on GNU/linux and Apple > Mac OS X to find memory > corruption errors > [1]PETSC ERROR: configure > using > --with-debugging=yes, > recompile, link, and run > [1]PETSC ERROR: to get > more information on the crash. > [1]PETSC ERROR: User > provided function() line 0 > in unknown directory > unknown file (null) > [3]PETSC ERROR: > ------------------------------------------------------------------------ > [3]PETSC ERROR: Caught > signal number 11 SEGV: > Segmentation Violation, > probably memory access out > of range > [3]PETSC ERROR: Try option > -start_in_debugger or > -on_error_attach_debugger > [3]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC > ERROR: or try > http://valgrind.org > on GNU/linux and Apple > Mac OS X to find memory > corruption errors > [3]PETSC ERROR: configure > using > --with-debugging=yes, > recompile, link, and run > [3]PETSC ERROR: to get > more information on the crash. > [3]PETSC ERROR: User > provided function() line 0 > in unknown directory > unknown file (null) > > ... > Thank you. > > Yours sincerely, > > TAY wee-beng > > On 14/4/2014 9:05 PM, > Barry Smith wrote: > > Because IO doesn?t > always get flushed > immediately it may not > be hanging at this > point. It is better > to use the option > -start_in_debugger > then type cont in each > debugger window and > then when you think it > is ?hanging? do a > control C in each > debugger window and > type where to see > where each process is > you can also look > around in the debugger > at variables to see > why it is ?hanging? at > that point. > > Barry > > This routines don?t > have any parallel > communication in them > so are unlikely to hang. > > On Apr 14, 2014, at > 6:52 AM, TAY wee-beng > > > > > wrote: > > > > Hi, > > My code hangs and > I added in > mpi_barrier and > print to catch the > bug. I found that > it hangs after > printing "7". Is > it because I'm > doing something > wrong? I need to > access the u,v,w > array so I use > DMDAVecGetArrayF90. After > access, I use > DMDAVecRestoreArrayF90. 
> > call > DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > call > MPI_Barrier(MPI_COMM_WORLD,ierr); > if (myid==0) > print *,"3" > call > DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > call > MPI_Barrier(MPI_COMM_WORLD,ierr); > if (myid==0) > print *,"4" > call > DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > call > MPI_Barrier(MPI_COMM_WORLD,ierr); > if (myid==0) > print *,"5" > call > I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) > call > MPI_Barrier(MPI_COMM_WORLD,ierr); > if (myid==0) > print *,"6" > call > DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > !must be in > reverse order > call > MPI_Barrier(MPI_COMM_WORLD,ierr); > if (myid==0) > print *,"7" > call > DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > call > MPI_Barrier(MPI_COMM_WORLD,ierr); > if (myid==0) > print *,"8" > call > DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > -- > Thank you. > > Yours sincerely, > > TAY wee-beng > > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 19 09:55:16 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 19 Apr 2014 09:55:16 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <535284E0.8010901@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> Message-ID: On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng wrote: > On 19/4/2014 6:48 PM, Matthew Knepley wrote: > > On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng wrote: > >> On 19/4/2014 1:17 PM, Barry Smith wrote: >> >>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: >>> >>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>> >>>>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>>>> >>>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>>> >>>>>>> Hmm, >>>>>>> >>>>>>> Interface DMDAVecGetArrayF90 >>>>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>>>> USE_DM_HIDE >>>>>>> DM_HIDE da1 >>>>>>> VEC_HIDE v >>>>>>> PetscScalar,pointer :: d1(:,:,:) >>>>>>> PetscErrorCode ierr >>>>>>> End Subroutine >>>>>>> >>>>>>> So the d1 is a F90 POINTER. But your subroutine seems to be >>>>>>> treating it as a ?plain old Fortran array?? >>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>> >>>>>> Hi, >>>> >>>> So d1 is a pointer, and it's different if I declare it as "plain old >>>> Fortran array"? Because I declare it as a Fortran array and it works w/o >>>> any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 >>>> with "u". >>>> >>>> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", >>>> "v" and "w", error starts to happen. I wonder why... >>>> >>>> Also, supposed I call: >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> u_array .... >>>> >>>> v_array .... etc >>>> >>>> Now to restore the array, does it matter the sequence they are restored? >>>> >>> No it should not matter. 
If it matters that is a sign that memory >>> has been written to incorrectly earlier in the code. >>> >>> Hi, >> >> Hmm, I have been getting different results on different intel compilers. >> I'm not sure if MPI played a part but I'm only using a single processor. In >> the debug mode, things run without problem. In optimized mode, in some >> cases, the code aborts even doing simple initialization: >> >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >> >> u_array = 0.d0 >> >> v_array = 0.d0 >> >> w_array = 0.d0 >> >> p_array = 0.d0 >> >> >> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >> >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> The code aborts at call >> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving segmentation >> error. But other version of intel compiler passes thru this part w/o error. >> Since the response is different among different compilers, is this PETSc or >> intel 's bug? Or mvapich or openmpi? > > > We do this is a bunch of examples. Can you reproduce this different > behavior in src/dm/examples/tutorials/ex11f90.F? > > > Hi Matt, > > Do you mean putting the above lines into ex11f90.F and test? > It already has DMDAVecGetArray(). Just run it. Matt > Thanks > > Regards. > > > Matt > > >> As in w, then v and u? >>>> >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> thanks >>>> >>>>> Note also that the beginning and end indices of the u,v,w, are >>>>>>> different for each process see for example >>>>>>> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>>>>>> >>>>>> Hi, >>>>>> >>>>>> In my case, I fixed the u,v,w such that their indices are the same. I >>>>>> also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem >>>>>> lies in my subroutine treating it as a ?plain old Fortran array?. >>>>>> >>>>>> If I declare them as pointers, their indices follow the C 0 start >>>>>> convention, is that so? >>>>>> >>>>> Not really. It is that in each process you need to access them >>>>> from the indices indicated by DMDAGetCorners() for global vectors and >>>>> DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t >>>>> make any difference. >>>>> >>>>> >>>>> So my problem now is that in my old MPI code, the u(i,j,k) follow the >>>>>> Fortran 1 start convention. Is there some way to manipulate such that I do >>>>>> not have to change my u(i,j,k) to u(i-1,j-1,k-1)? >>>>>> >>>>> If you code wishes to access them with indices plus one from the >>>>> values returned by DMDAGetCorners() for global vectors and >>>>> DMDAGetGhostCorners() for local vectors then you need to manually subtract >>>>> off the 1. >>>>> >>>>> Barry >>>>> >>>>> Thanks. >>>>>> >>>>>>> Barry >>>>>>> >>>>>>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: >>>>>>> >>>>>>> Hi, >>>>>>>> >>>>>>>> I tried to pinpoint the problem. I reduced my job size and hence I >>>>>>>> can run on 1 processor. 
Tried using valgrind but perhaps I'm using the >>>>>>>> optimized version, it didn't catch the error, besides saying "Segmentation >>>>>>>> fault (core dumped)" >>>>>>>> >>>>>>>> However, by re-writing my code, I found out a few things: >>>>>>>> >>>>>>>> 1. if I write my code this way: >>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>> >>>>>>>> u_array = .... >>>>>>>> >>>>>>>> v_array = .... >>>>>>>> >>>>>>>> w_array = .... >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> >>>>>>>> The code runs fine. >>>>>>>> >>>>>>>> 2. if I write my code this way: >>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>> >>>>>>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine >>>>>>>> does the same modification as the above. >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>>>>>> >>>>>>>> where the subroutine is: >>>>>>>> >>>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>>> >>>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>>> >>>>>>>> u ... >>>>>>>> v... >>>>>>>> w ... >>>>>>>> >>>>>>>> end subroutine uvw_array_change. >>>>>>>> >>>>>>>> The above will give an error at : >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> >>>>>>>> 3. Same as above, except I change the order of the last 3 lines to: >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>> >>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>> >>>>>>>> So they are now in reversed order. Now it works. >>>>>>>> >>>>>>>> 4. Same as 2 or 3, except the subroutine is changed to : >>>>>>>> >>>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>>> >>>>>>>> real(8), intent(inout) :: >>>>>>>> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>> >>>>>>>> real(8), intent(inout) :: >>>>>>>> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>> >>>>>>>> real(8), intent(inout) :: >>>>>>>> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>> >>>>>>>> u ... >>>>>>>> v... >>>>>>>> w ... >>>>>>>> >>>>>>>> end subroutine uvw_array_change. >>>>>>>> >>>>>>>> The start_indices and end_indices are simply to shift the 0 indices >>>>>>>> of C convention to that of the 1 indices of the Fortran convention. This is >>>>>>>> necessary in my case because most of my codes start array counting at 1, >>>>>>>> hence the "trick". >>>>>>>> >>>>>>>> However, now no matter which order of the DMDAVecRestoreArrayF90 >>>>>>>> (as in 2 or 3), error will occur at "call >>>>>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>>>>>> >>>>>>>> So did I violate and cause memory corruption due to the trick >>>>>>>> above? 
But I can't think of any way other than the "trick" to continue >>>>>>>> using the 1 indices convention. >>>>>>>> >>>>>>>> Thank you. >>>>>>>> >>>>>>>> Yours sincerely, >>>>>>>> >>>>>>>> TAY wee-beng >>>>>>>> >>>>>>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>>>>> >>>>>>>>> Try running under valgrind >>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>>>>>> >>>>>>>>> >>>>>>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Hi Barry, >>>>>>>>>> >>>>>>>>>> As I mentioned earlier, the code works fine in PETSc debug mode >>>>>>>>>> but fails in non-debug mode. >>>>>>>>>> >>>>>>>>>> I have attached my code. >>>>>>>>>> >>>>>>>>>> Thank you >>>>>>>>>> >>>>>>>>>> Yours sincerely, >>>>>>>>>> >>>>>>>>>> TAY wee-beng >>>>>>>>>> >>>>>>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>>>>>> >>>>>>>>>>> Please send the code that creates da_w and the declarations >>>>>>>>>>> of w_array >>>>>>>>>>> >>>>>>>>>>> Barry >>>>>>>>>>> >>>>>>>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>>>>>>> >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Hi Barry, >>>>>>>>>>>> >>>>>>>>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>>>>>>>> >>>>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>>>> >>>>>>>>>>>> I got the msg below. Before the gdb windows appear (thru x11), >>>>>>>>>>>> the program aborts. >>>>>>>>>>>> >>>>>>>>>>>> Also I tried running in another cluster and it worked. Also >>>>>>>>>>>> tried in the current cluster in debug mode and it worked too. >>>>>>>>>>>> >>>>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>>>> >>>>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>>>> An MPI process has executed an operation involving a call to the >>>>>>>>>>>> "fork()" system call to create a child process. Open MPI is >>>>>>>>>>>> currently >>>>>>>>>>>> operating in a condition that could result in memory corruption >>>>>>>>>>>> or >>>>>>>>>>>> other system errors; your MPI job may hang, crash, or produce >>>>>>>>>>>> silent >>>>>>>>>>>> data corruption. The use of fork() (or system() or other calls >>>>>>>>>>>> that >>>>>>>>>>>> create child processes) is strongly discouraged. >>>>>>>>>>>> >>>>>>>>>>>> The process that invoked fork was: >>>>>>>>>>>> >>>>>>>>>>>> Local host: n12-76 (PID 20235) >>>>>>>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>>>>>>> >>>>>>>>>>>> If you are *absolutely sure* that your application will >>>>>>>>>>>> successfully >>>>>>>>>>>> and correctly survive a call to fork(), you may disable this >>>>>>>>>>>> warning >>>>>>>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. 
>>>>>>>>>>>> >>>>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on >>>>>>>>>>>> display localhost:50.0 on machine n12-76 >>>>>>>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on >>>>>>>>>>>> display localhost:50.0 on machine n12-76 >>>>>>>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on >>>>>>>>>>>> display localhost:50.0 on machine n12-76 >>>>>>>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on >>>>>>>>>>>> display localhost:50.0 on machine n12-76 >>>>>>>>>>>> [n12-76:20232] 3 more processes have sent help message >>>>>>>>>>>> help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to >>>>>>>>>>>> 0 to see all help / error messages >>>>>>>>>>>> >>>>>>>>>>>> .... >>>>>>>>>>>> >>>>>>>>>>>> 1 >>>>>>>>>>>> [1]PETSC ERROR: >>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>>>>>>> Violation, probably memory access out of range >>>>>>>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>> [1]PETSC ERROR: or see >>>>>>>>>>>> >>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try >>>>>>>>>>>> http://valgrind.org >>>>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption >>>>>>>>>>>> errors >>>>>>>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, >>>>>>>>>>>> recompile, link, and run >>>>>>>>>>>> [1]PETSC ERROR: to get more information on the crash. >>>>>>>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown >>>>>>>>>>>> directory unknown file (null) >>>>>>>>>>>> [3]PETSC ERROR: >>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>>>>>>> Violation, probably memory access out of range >>>>>>>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>> [3]PETSC ERROR: or see >>>>>>>>>>>> >>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSCERROR: or try >>>>>>>>>>>> http://valgrind.org >>>>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption >>>>>>>>>>>> errors >>>>>>>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, >>>>>>>>>>>> recompile, link, and run >>>>>>>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown >>>>>>>>>>>> directory unknown file (null) >>>>>>>>>>>> >>>>>>>>>>>> ... >>>>>>>>>>>> Thank you. >>>>>>>>>>>> >>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>> >>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>> >>>>>>>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>>>>>>> >>>>>>>>>>>> Because IO doesn?t always get flushed immediately it may >>>>>>>>>>>>> not be hanging at this point. It is better to use the option >>>>>>>>>>>>> -start_in_debugger then type cont in each debugger window and then when you >>>>>>>>>>>>> think it is ?hanging? do a control C in each debugger window and type where >>>>>>>>>>>>> to see where each process is you can also look around in the debugger at >>>>>>>>>>>>> variables to see why it is ?hanging? at that point. 
>>>>>>>>>>>>> >>>>>>>>>>>>> Barry >>>>>>>>>>>>> >>>>>>>>>>>>> This routines don?t have any parallel communication in them >>>>>>>>>>>>> so are unlikely to hang. >>>>>>>>>>>>> >>>>>>>>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> My code hangs and I added in mpi_barrier and print to catch >>>>>>>>>>>>>> the bug. I found that it hangs after printing "7". Is it because I'm doing >>>>>>>>>>>>>> something wrong? I need to access the u,v,w array so I use >>>>>>>>>>>>>> DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>>>>>>>>> >>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) >>>>>>>>>>>>>> print *,"3" >>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) >>>>>>>>>>>>>> print *,"4" >>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) >>>>>>>>>>>>>> print *,"5" >>>>>>>>>>>>>> call >>>>>>>>>>>>>> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) >>>>>>>>>>>>>> print *,"6" >>>>>>>>>>>>>> call >>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) >>>>>>>>>>>>>> print *,"7" >>>>>>>>>>>>>> call >>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) >>>>>>>>>>>>>> print *,"8" >>>>>>>>>>>>>> call >>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> Thank you. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>>>> >>>>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
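For concreteness, here is a minimal free-form Fortran sketch of the subroutine variant being debated above, written so that the worker routine receives the ghost-corner bounds explicitly and therefore keeps the DMDA numbering, instead of rebasing to 1 through assumed-shape dummies or a manual shift. The names (da_u, u_local, u_array, uvw_update) and the loop body are placeholders in the spirit of the excerpts, not the poster's actual code, and the usual PETSc 3.4 finclude headers are assumed; whether this removes the optimized-build crash reported above is a separate question.

    ! caller side (declarations as in the messages above, e.g.
    ! PetscScalar, pointer :: u_array(:,:,:) and PetscInt bounds)
    call DMDAGetGhostCorners(da_u, gxs, gys, gzs, gxm, gym, gzm, ierr)
    is = gxs; ie = gxs + gxm - 1
    js = gys; je = gys + gym - 1
    ks = gzs; ke = gzs + gzm - 1
    call DMDAVecGetArrayF90(da_u, u_local, u_array, ierr)
    call DMDAVecGetArrayF90(da_v, v_local, v_array, ierr)
    call DMDAVecGetArrayF90(da_w, w_local, w_array, ierr)
    call uvw_update(u_array, v_array, w_array, is, ie, js, je, ks, ke)
    call DMDAVecRestoreArrayF90(da_w, w_local, w_array, ierr)
    call DMDAVecRestoreArrayF90(da_v, v_local, v_array, ierr)
    call DMDAVecRestoreArrayF90(da_u, u_local, u_array, ierr)

    subroutine uvw_update(u, v, w, is, ie, js, je, ks, ke)
      implicit none
#include <finclude/petscsys.h>
      PetscInt is, ie, js, je, ks, ke
      ! explicit bounds keep the DMDA index range; no 1-based shift needed
      PetscScalar :: u(is:ie, js:je, ks:ke)
      PetscScalar :: v(is:ie, js:je, ks:ke)
      PetscScalar :: w(is:ie, js:je, ks:ke)
      PetscInt i, j, k
      do k = ks, ke
        do j = js, je
          do i = is, ie
            u(i,j,k) = 0.d0
            v(i,j,k) = 0.d0
            w(i,j,k) = 0.d0
          end do
        end do
      end do
    end subroutine uvw_update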
URL: From zonexo at gmail.com Sat Apr 19 10:16:28 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 19 Apr 2014 23:16:28 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> Message-ID: <5352934C.1010306@gmail.com> On 19/4/2014 10:55 PM, Matthew Knepley wrote: > On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng > wrote: > > On 19/4/2014 6:48 PM, Matthew Knepley wrote: >> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng > > wrote: >> >> On 19/4/2014 1:17 PM, Barry Smith wrote: >> >> On Apr 19, 2014, at 12:11 AM, TAY wee-beng >> > wrote: >> >> On 19/4/2014 12:10 PM, Barry Smith wrote: >> >> On Apr 18, 2014, at 9:57 PM, TAY wee-beng >> > wrote: >> >> On 19/4/2014 3:53 AM, Barry Smith wrote: >> >> Hmm, >> >> Interface DMDAVecGetArrayF90 >> Subroutine >> DMDAVecGetArrayF903(da1, v,d1,ierr) >> USE_DM_HIDE >> DM_HIDE da1 >> VEC_HIDE v >> PetscScalar,pointer :: d1(:,:,:) >> PetscErrorCode ierr >> End Subroutine >> >> So the d1 is a F90 POINTER. But your >> subroutine seems to be treating it as a >> ?plain old Fortran array?? >> real(8), intent(inout) :: >> u(:,:,:),v(:,:,:),w(:,:,:) >> >> Hi, >> >> So d1 is a pointer, and it's different if I declare >> it as "plain old Fortran array"? Because I declare it >> as a Fortran array and it works w/o any problem if I >> only call DMDAVecGetArrayF90 and >> DMDAVecRestoreArrayF90 with "u". >> >> But if I call DMDAVecGetArrayF90 and >> DMDAVecRestoreArrayF90 with "u", "v" and "w", error >> starts to happen. I wonder why... >> >> Also, supposed I call: >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> u_array .... >> >> v_array .... etc >> >> Now to restore the array, does it matter the sequence >> they are restored? >> >> No it should not matter. If it matters that is a sign >> that memory has been written to incorrectly earlier in >> the code. >> >> Hi, >> >> Hmm, I have been getting different results on different intel >> compilers. I'm not sure if MPI played a part but I'm only >> using a single processor. In the debug mode, things run >> without problem. In optimized mode, in some cases, the code >> aborts even doing simple initialization: >> >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >> >> u_array = 0.d0 >> >> v_array = 0.d0 >> >> w_array = 0.d0 >> >> p_array = 0.d0 >> >> >> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >> >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> The code aborts at call >> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving >> segmentation error. But other version of intel compiler >> passes thru this part w/o error. Since the response is >> different among different compilers, is this PETSc or intel >> 's bug? Or mvapich or openmpi? >> >> >> We do this is a bunch of examples. 
Can you reproduce this >> different behavior in src/dm/examples/tutorials/ex11f90.F? > > Hi Matt, > > Do you mean putting the above lines into ex11f90.F and test? > > > It already has DMDAVecGetArray(). Just run it. Hi, It worked. The differences between mine and the code is the way the fortran modules are defined, and the ex11f90 only uses global vectors. Does it make a difference whether global or local vectors are used? Because the way it accesses x1 only touches the local region. Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must be used 1st, is that so? I can't find the equivalent for local vector though. Thanks. > > Matt > > Thanks > > Regards. >> >> Matt >> >> As in w, then v and u? >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> thanks >> >> Note also that the beginning and end >> indices of the u,v,w, are different for >> each process see for example >> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F >> (and they do not start at 1). This is >> how to get the loop bounds. >> >> Hi, >> >> In my case, I fixed the u,v,w such that their >> indices are the same. I also checked using >> DMDAGetCorners and DMDAGetGhostCorners. Now >> the problem lies in my subroutine treating it >> as a ?plain old Fortran array?. >> >> If I declare them as pointers, their indices >> follow the C 0 start convention, is that so? >> >> Not really. It is that in each process you >> need to access them from the indices indicated by >> DMDAGetCorners() for global vectors and >> DMDAGetGhostCorners() for local vectors. So >> really C or Fortran doesn?t make any difference. >> >> >> So my problem now is that in my old MPI code, >> the u(i,j,k) follow the Fortran 1 start >> convention. Is there some way to manipulate >> such that I do not have to change my u(i,j,k) >> to u(i-1,j-1,k-1)? >> >> If you code wishes to access them with indices >> plus one from the values returned by >> DMDAGetCorners() for global vectors and >> DMDAGetGhostCorners() for local vectors then you >> need to manually subtract off the 1. >> >> Barry >> >> Thanks. >> >> Barry >> >> On Apr 18, 2014, at 10:58 AM, TAY >> wee-beng > > wrote: >> >> Hi, >> >> I tried to pinpoint the problem. I >> reduced my job size and hence I can >> run on 1 processor. Tried using >> valgrind but perhaps I'm using the >> optimized version, it didn't catch >> the error, besides saying >> "Segmentation fault (core dumped)" >> >> However, by re-writing my code, I >> found out a few things: >> >> 1. if I write my code this way: >> >> call >> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call >> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call >> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> u_array = .... >> >> v_array = .... >> >> w_array = .... >> >> call >> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call >> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call >> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> The code runs fine. >> >> 2. if I write my code this way: >> >> call >> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call >> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call >> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> call >> uvw_array_change(u_array,v_array,w_array) >> -> this subroutine does the same >> modification as the above. 
>> >> call >> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call >> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call >> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> -> error >> >> where the subroutine is: >> >> subroutine uvw_array_change(u,v,w) >> >> real(8), intent(inout) :: >> u(:,:,:),v(:,:,:),w(:,:,:) >> >> u ... >> v... >> w ... >> >> end subroutine uvw_array_change. >> >> The above will give an error at : >> >> call >> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> 3. Same as above, except I change the >> order of the last 3 lines to: >> >> call >> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> call >> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call >> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> So they are now in reversed order. >> Now it works. >> >> 4. Same as 2 or 3, except the >> subroutine is changed to : >> >> subroutine uvw_array_change(u,v,w) >> >> real(8), intent(inout) :: >> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >> >> real(8), intent(inout) :: >> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >> >> real(8), intent(inout) :: >> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >> >> u ... >> v... >> w ... >> >> end subroutine uvw_array_change. >> >> The start_indices and end_indices are >> simply to shift the 0 indices of C >> convention to that of the 1 indices >> of the Fortran convention. This is >> necessary in my case because most of >> my codes start array counting at 1, >> hence the "trick". >> >> However, now no matter which order of >> the DMDAVecRestoreArrayF90 (as in 2 >> or 3), error will occur at "call >> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> " >> >> So did I violate and cause memory >> corruption due to the trick above? >> But I can't think of any way other >> than the "trick" to continue using >> the 1 indices convention. >> >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 15/4/2014 8:00 PM, Barry Smith wrote: >> >> Try running under valgrind >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> >> >> On Apr 14, 2014, at 9:47 PM, TAY >> wee-beng > > wrote: >> >> Hi Barry, >> >> As I mentioned earlier, the >> code works fine in PETSc >> debug mode but fails in >> non-debug mode. >> >> I have attached my code. >> >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 15/4/2014 2:26 AM, Barry >> Smith wrote: >> >> Please send the code >> that creates da_w and the >> declarations of w_array >> >> Barry >> >> On Apr 14, 2014, at 9:40 >> AM, TAY wee-beng >> > > >> wrote: >> >> >> Hi Barry, >> >> I'm not too sure how >> to do it. I'm running >> mpi. So I run: >> >> mpirun -n 4 ./a.out >> -start_in_debugger >> >> I got the msg below. >> Before the gdb >> windows appear (thru >> x11), the program aborts. >> >> Also I tried running >> in another cluster >> and it worked. Also >> tried in the current >> cluster in debug mode >> and it worked too. >> >> mpirun -n 4 ./a.out >> -start_in_debugger >> -------------------------------------------------------------------------- >> An MPI process has >> executed an operation >> involving a call to the >> "fork()" system call >> to create a child >> process. 
Open MPI is >> currently >> operating in a >> condition that could >> result in memory >> corruption or >> other system errors; >> your MPI job may >> hang, crash, or >> produce silent >> data corruption. The >> use of fork() (or >> system() or other >> calls that >> create child >> processes) is >> strongly discouraged. >> >> The process that >> invoked fork was: >> >> Local host: >> n12-76 (PID 20235) >> MPI_COMM_WORLD rank: 2 >> >> If you are >> *absolutely sure* >> that your application >> will successfully >> and correctly survive >> a call to fork(), you >> may disable this warning >> by setting the >> mpi_warn_on_fork MCA >> parameter to 0. >> -------------------------------------------------------------------------- >> [2]PETSC ERROR: >> PETSC: Attaching gdb >> to ./a.out of pid >> 20235 on display >> localhost:50.0 on >> machine n12-76 >> [0]PETSC ERROR: >> PETSC: Attaching gdb >> to ./a.out of pid >> 20233 on display >> localhost:50.0 on >> machine n12-76 >> [1]PETSC ERROR: >> PETSC: Attaching gdb >> to ./a.out of pid >> 20234 on display >> localhost:50.0 on >> machine n12-76 >> [3]PETSC ERROR: >> PETSC: Attaching gdb >> to ./a.out of pid >> 20236 on display >> localhost:50.0 on >> machine n12-76 >> [n12-76:20232] 3 more >> processes have sent >> help message >> help-mpi-runtime.txt >> / mpi_init:warn-fork >> [n12-76:20232] Set >> MCA parameter >> "orte_base_help_aggregate" >> to 0 to see all help >> / error messages >> >> .... >> >> 1 >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: >> Caught signal number >> 11 SEGV: Segmentation >> Violation, probably >> memory access out of >> range >> [1]PETSC ERROR: Try >> option >> -start_in_debugger or >> -on_error_attach_debugger >> [1]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC >> ERROR: or try >> http://valgrind.org >> on GNU/linux and >> Apple Mac OS X to >> find memory >> corruption errors >> [1]PETSC ERROR: >> configure using >> --with-debugging=yes, >> recompile, link, and run >> [1]PETSC ERROR: to >> get more information >> on the crash. >> [1]PETSC ERROR: User >> provided function() >> line 0 in unknown >> directory unknown >> file (null) >> [3]PETSC ERROR: >> ------------------------------------------------------------------------ >> [3]PETSC ERROR: >> Caught signal number >> 11 SEGV: Segmentation >> Violation, probably >> memory access out of >> range >> [3]PETSC ERROR: Try >> option >> -start_in_debugger or >> -on_error_attach_debugger >> [3]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC >> ERROR: or try >> http://valgrind.org >> on GNU/linux and >> Apple Mac OS X to >> find memory >> corruption errors >> [3]PETSC ERROR: >> configure using >> --with-debugging=yes, >> recompile, link, and run >> [3]PETSC ERROR: to >> get more information >> on the crash. >> [3]PETSC ERROR: User >> provided function() >> line 0 in unknown >> directory unknown >> file (null) >> >> ... >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 14/4/2014 9:05 PM, >> Barry Smith wrote: >> >> Because IO >> doesn?t always >> get flushed >> immediately it >> may not be >> hanging at this >> point. It is >> better to use the >> option >> -start_in_debugger then >> type cont in each >> debugger window >> and then when you >> think it is >> ?hanging? 
do a >> control C in each >> debugger window >> and type where to >> see where each >> process is you >> can also look >> around in the >> debugger at >> variables to see >> why it is >> ?hanging? at that >> point. >> >> Barry >> >> This routines >> don?t have any >> parallel >> communication in >> them so are >> unlikely to hang. >> >> On Apr 14, 2014, >> at 6:52 AM, TAY >> wee-beng >> >> > > >> >> wrote: >> >> >> >> Hi, >> >> My code hangs >> and I added >> in >> mpi_barrier >> and print to >> catch the >> bug. I found >> that it hangs >> after >> printing "7". >> Is it because >> I'm doing >> something >> wrong? I need >> to access the >> u,v,w array >> so I use >> DMDAVecGetArrayF90. >> After access, >> I use >> DMDAVecRestoreArrayF90. >> >> call >> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> call >> MPI_Barrier(MPI_COMM_WORLD,ierr); >> if (myid==0) >> print *,"3" >> call >> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> call >> MPI_Barrier(MPI_COMM_WORLD,ierr); >> if (myid==0) >> print *,"4" >> call >> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> call >> MPI_Barrier(MPI_COMM_WORLD,ierr); >> if (myid==0) >> print *,"5" >> call >> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >> call >> MPI_Barrier(MPI_COMM_WORLD,ierr); >> if (myid==0) >> print *,"6" >> call >> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> !must be in >> reverse order >> call >> MPI_Barrier(MPI_COMM_WORLD,ierr); >> if (myid==0) >> print *,"7" >> call >> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> call >> MPI_Barrier(MPI_COMM_WORLD,ierr); >> if (myid==0) >> print *,"8" >> call >> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> -- >> Thank you. >> >> Yours sincerely, >> >> TAY wee-beng >> >> >> >> >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
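For comparison with ex11f90, which only exercises the global-vector path, a minimal free-form sketch of that pattern (work vector requested from the DM, loop bounds from DMDAGetCorners) could look like the following. da_u is the DM name used in the excerpts above; the routine name, the zero assignment, and the assumption of the usual PETSc 3.4 finclude headers are mine.

    subroutine zero_global(da_u)
      implicit none
#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscdm.h>
#include <finclude/petscdmda.h>
#include <finclude/petscdmda.h90>
      DM da_u
      Vec u_global
      PetscScalar, pointer :: u_array(:,:,:)
      PetscInt xs, ys, zs, xm, ym, zm, i, j, k
      PetscErrorCode ierr

      ! global work vector managed by the DM (no ghost points)
      call DMGetGlobalVector(da_u, u_global, ierr)
      call DMDAGetCorners(da_u, xs, ys, zs, xm, ym, zm, ierr)
      call DMDAVecGetArrayF90(da_u, u_global, u_array, ierr)
      do k = zs, zs + zm - 1
        do j = ys, ys + ym - 1
          do i = xs, xs + xm - 1
            u_array(i,j,k) = 0.d0      ! indices start at xs/ys/zs, not 1
          end do
        end do
      end do
      call DMDAVecRestoreArrayF90(da_u, u_global, u_array, ierr)
      call DMRestoreGlobalVector(da_u, u_global, ierr)
    end subroutine zero_global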
URL: From knepley at gmail.com Sat Apr 19 10:39:46 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 19 Apr 2014 10:39:46 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <5352934C.1010306@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> Message-ID: On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng wrote: > On 19/4/2014 10:55 PM, Matthew Knepley wrote: > > On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng wrote: > >> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >> >> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng wrote: >> >>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>> >>>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: >>>> >>>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>>> >>>>>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>>>>> >>>>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>>>> >>>>>>>> Hmm, >>>>>>>> >>>>>>>> Interface DMDAVecGetArrayF90 >>>>>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>>>>> USE_DM_HIDE >>>>>>>> DM_HIDE da1 >>>>>>>> VEC_HIDE v >>>>>>>> PetscScalar,pointer :: d1(:,:,:) >>>>>>>> PetscErrorCode ierr >>>>>>>> End Subroutine >>>>>>>> >>>>>>>> So the d1 is a F90 POINTER. But your subroutine seems to be >>>>>>>> treating it as a ?plain old Fortran array?? >>>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>>> >>>>>>> Hi, >>>>> >>>>> So d1 is a pointer, and it's different if I declare it as "plain old >>>>> Fortran array"? Because I declare it as a Fortran array and it works w/o >>>>> any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 >>>>> with "u". >>>>> >>>>> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", >>>>> "v" and "w", error starts to happen. I wonder why... >>>>> >>>>> Also, supposed I call: >>>>> >>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> u_array .... >>>>> >>>>> v_array .... etc >>>>> >>>>> Now to restore the array, does it matter the sequence they are >>>>> restored? >>>>> >>>> No it should not matter. If it matters that is a sign that memory >>>> has been written to incorrectly earlier in the code. >>>> >>>> Hi, >>> >>> Hmm, I have been getting different results on different intel compilers. >>> I'm not sure if MPI played a part but I'm only using a single processor. In >>> the debug mode, things run without problem. 
In optimized mode, in some >>> cases, the code aborts even doing simple initialization: >>> >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>> >>> u_array = 0.d0 >>> >>> v_array = 0.d0 >>> >>> w_array = 0.d0 >>> >>> p_array = 0.d0 >>> >>> >>> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>> >>> >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> The code aborts at call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving segmentation >>> error. But other version of intel compiler passes thru this part w/o error. >>> Since the response is different among different compilers, is this PETSc or >>> intel 's bug? Or mvapich or openmpi? >> >> >> We do this is a bunch of examples. Can you reproduce this different >> behavior in src/dm/examples/tutorials/ex11f90.F? >> >> >> Hi Matt, >> >> Do you mean putting the above lines into ex11f90.F and test? >> > > It already has DMDAVecGetArray(). Just run it. > > > Hi, > > It worked. The differences between mine and the code is the way the > fortran modules are defined, and the ex11f90 only uses global vectors. Does > it make a difference whether global or local vectors are used? Because the > way it accesses x1 only touches the local region. > No the global/local difference should not matter. > Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must be used 1st, > is that so? I can't find the equivalent for local vector though. > DMGetLocalVector() Matt > Thanks. > > > Matt > > >> Thanks >> >> Regards. >> >> >> Matt >> >> >>> As in w, then v and u? >>>>> >>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> thanks >>>>> >>>>>> Note also that the beginning and end indices of the u,v,w, are >>>>>>>> different for each process see for example >>>>>>>> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> In my case, I fixed the u,v,w such that their indices are the same. >>>>>>> I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the >>>>>>> problem lies in my subroutine treating it as a ?plain old Fortran array?. >>>>>>> >>>>>>> If I declare them as pointers, their indices follow the C 0 start >>>>>>> convention, is that so? >>>>>>> >>>>>> Not really. It is that in each process you need to access them >>>>>> from the indices indicated by DMDAGetCorners() for global vectors and >>>>>> DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t >>>>>> make any difference. >>>>>> >>>>>> >>>>>> So my problem now is that in my old MPI code, the u(i,j,k) follow >>>>>>> the Fortran 1 start convention. Is there some way to manipulate such that I >>>>>>> do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? >>>>>>> >>>>>> If you code wishes to access them with indices plus one from the >>>>>> values returned by DMDAGetCorners() for global vectors and >>>>>> DMDAGetGhostCorners() for local vectors then you need to manually subtract >>>>>> off the 1. >>>>>> >>>>>> Barry >>>>>> >>>>>> Thanks. 
>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng >>>>>>>> wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I tried to pinpoint the problem. I reduced my job size and hence I >>>>>>>>> can run on 1 processor. Tried using valgrind but perhaps I'm using the >>>>>>>>> optimized version, it didn't catch the error, besides saying "Segmentation >>>>>>>>> fault (core dumped)" >>>>>>>>> >>>>>>>>> However, by re-writing my code, I found out a few things: >>>>>>>>> >>>>>>>>> 1. if I write my code this way: >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>> >>>>>>>>> u_array = .... >>>>>>>>> >>>>>>>>> v_array = .... >>>>>>>>> >>>>>>>>> w_array = .... >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> >>>>>>>>> The code runs fine. >>>>>>>>> >>>>>>>>> 2. if I write my code this way: >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>> >>>>>>>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine >>>>>>>>> does the same modification as the above. >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>>>>>>> >>>>>>>>> where the subroutine is: >>>>>>>>> >>>>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>>>> >>>>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>>>> >>>>>>>>> u ... >>>>>>>>> v... >>>>>>>>> w ... >>>>>>>>> >>>>>>>>> end subroutine uvw_array_change. >>>>>>>>> >>>>>>>>> The above will give an error at : >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> >>>>>>>>> 3. Same as above, except I change the order of the last 3 lines to: >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>> >>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>> >>>>>>>>> So they are now in reversed order. Now it works. >>>>>>>>> >>>>>>>>> 4. Same as 2 or 3, except the subroutine is changed to : >>>>>>>>> >>>>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>>>> >>>>>>>>> real(8), intent(inout) :: >>>>>>>>> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>>> >>>>>>>>> real(8), intent(inout) :: >>>>>>>>> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>>> >>>>>>>>> real(8), intent(inout) :: >>>>>>>>> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>>> >>>>>>>>> u ... >>>>>>>>> v... >>>>>>>>> w ... >>>>>>>>> >>>>>>>>> end subroutine uvw_array_change. >>>>>>>>> >>>>>>>>> The start_indices and end_indices are simply to shift the 0 >>>>>>>>> indices of C convention to that of the 1 indices of the Fortran convention. >>>>>>>>> This is necessary in my case because most of my codes start array counting >>>>>>>>> at 1, hence the "trick". 
>>>>>>>>> >>>>>>>>> However, now no matter which order of the DMDAVecRestoreArrayF90 >>>>>>>>> (as in 2 or 3), error will occur at "call >>>>>>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>>>>>>> >>>>>>>>> So did I violate and cause memory corruption due to the trick >>>>>>>>> above? But I can't think of any way other than the "trick" to continue >>>>>>>>> using the 1 indices convention. >>>>>>>>> >>>>>>>>> Thank you. >>>>>>>>> >>>>>>>>> Yours sincerely, >>>>>>>>> >>>>>>>>> TAY wee-beng >>>>>>>>> >>>>>>>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>>>>>> >>>>>>>>>> Try running under valgrind >>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> Hi Barry, >>>>>>>>>>> >>>>>>>>>>> As I mentioned earlier, the code works fine in PETSc debug mode >>>>>>>>>>> but fails in non-debug mode. >>>>>>>>>>> >>>>>>>>>>> I have attached my code. >>>>>>>>>>> >>>>>>>>>>> Thank you >>>>>>>>>>> >>>>>>>>>>> Yours sincerely, >>>>>>>>>>> >>>>>>>>>>> TAY wee-beng >>>>>>>>>>> >>>>>>>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>>>>>>> >>>>>>>>>>>> Please send the code that creates da_w and the declarations >>>>>>>>>>>> of w_array >>>>>>>>>>>> >>>>>>>>>>>> Barry >>>>>>>>>>>> >>>>>>>>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>>>>>>>> >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Hi Barry, >>>>>>>>>>>>> >>>>>>>>>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>>>>>>>>> >>>>>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>>>>> >>>>>>>>>>>>> I got the msg below. Before the gdb windows appear (thru x11), >>>>>>>>>>>>> the program aborts. >>>>>>>>>>>>> >>>>>>>>>>>>> Also I tried running in another cluster and it worked. Also >>>>>>>>>>>>> tried in the current cluster in debug mode and it worked too. >>>>>>>>>>>>> >>>>>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>>>>> >>>>>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>>>>> An MPI process has executed an operation involving a call to >>>>>>>>>>>>> the >>>>>>>>>>>>> "fork()" system call to create a child process. Open MPI is >>>>>>>>>>>>> currently >>>>>>>>>>>>> operating in a condition that could result in memory >>>>>>>>>>>>> corruption or >>>>>>>>>>>>> other system errors; your MPI job may hang, crash, or produce >>>>>>>>>>>>> silent >>>>>>>>>>>>> data corruption. The use of fork() (or system() or other >>>>>>>>>>>>> calls that >>>>>>>>>>>>> create child processes) is strongly discouraged. >>>>>>>>>>>>> >>>>>>>>>>>>> The process that invoked fork was: >>>>>>>>>>>>> >>>>>>>>>>>>> Local host: n12-76 (PID 20235) >>>>>>>>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>>>>>>>> >>>>>>>>>>>>> If you are *absolutely sure* that your application will >>>>>>>>>>>>> successfully >>>>>>>>>>>>> and correctly survive a call to fork(), you may disable this >>>>>>>>>>>>> warning >>>>>>>>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. 
>>>>>>>>>>>>> >>>>>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 >>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 >>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 >>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 >>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>> [n12-76:20232] 3 more processes have sent help message >>>>>>>>>>>>> help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>>>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to >>>>>>>>>>>>> 0 to see all help / error messages >>>>>>>>>>>>> >>>>>>>>>>>>> .... >>>>>>>>>>>>> >>>>>>>>>>>>> 1 >>>>>>>>>>>>> [1]PETSC ERROR: >>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>>>>>>>> Violation, probably memory access out of range >>>>>>>>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>> [1]PETSC ERROR: or see >>>>>>>>>>>>> >>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try >>>>>>>>>>>>> http://valgrind.org >>>>>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption >>>>>>>>>>>>> errors >>>>>>>>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, >>>>>>>>>>>>> recompile, link, and run >>>>>>>>>>>>> [1]PETSC ERROR: to get more information on the crash. >>>>>>>>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown >>>>>>>>>>>>> directory unknown file (null) >>>>>>>>>>>>> [3]PETSC ERROR: >>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>>>>>>>> Violation, probably memory access out of range >>>>>>>>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>> [3]PETSC ERROR: or see >>>>>>>>>>>>> >>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSCERROR: or try >>>>>>>>>>>>> http://valgrind.org >>>>>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption >>>>>>>>>>>>> errors >>>>>>>>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, >>>>>>>>>>>>> recompile, link, and run >>>>>>>>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>>>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown >>>>>>>>>>>>> directory unknown file (null) >>>>>>>>>>>>> >>>>>>>>>>>>> ... >>>>>>>>>>>>> Thank you. >>>>>>>>>>>>> >>>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>>> >>>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>>> >>>>>>>>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Because IO doesn?t always get flushed immediately it may >>>>>>>>>>>>>> not be hanging at this point. It is better to use the option >>>>>>>>>>>>>> -start_in_debugger then type cont in each debugger window and then when you >>>>>>>>>>>>>> think it is ?hanging? do a control C in each debugger window and type where >>>>>>>>>>>>>> to see where each process is you can also look around in the debugger at >>>>>>>>>>>>>> variables to see why it is ?hanging? at that point. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Barry >>>>>>>>>>>>>> >>>>>>>>>>>>>> This routines don?t have any parallel communication in >>>>>>>>>>>>>> them so are unlikely to hang. >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> My code hangs and I added in mpi_barrier and print to catch >>>>>>>>>>>>>>> the bug. I found that it hangs after printing "7". Is it because I'm doing >>>>>>>>>>>>>>> something wrong? I need to access the u,v,w array so I use >>>>>>>>>>>>>>> DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>> (myid==0) print *,"3" >>>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>> (myid==0) print *,"4" >>>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>> (myid==0) print *,"5" >>>>>>>>>>>>>>> call >>>>>>>>>>>>>>> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>> (myid==0) print *,"6" >>>>>>>>>>>>>>> call >>>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>> (myid==0) print *,"7" >>>>>>>>>>>>>>> call >>>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>> (myid==0) print *,"8" >>>>>>>>>>>>>>> call >>>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> Thank you. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
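For concreteness, the access pattern discussed in this thread can be written as a short sketch. This is illustrative only and not code from the thread: the names da_u and u_global are placeholders, the C interface is shown (the F90 variants DMGetLocalVector, DMDAVecGetArrayF90, DMDAGetCorners and DMDAVecRestoreArrayF90 follow the same sequence), and the loop bounds come from DMDAGetCorners() rather than from any 1-based convention.

#include <petscdmda.h>

/* Zero the locally owned part of a 3-D DMDA vector, using the bounds
   returned by DMDAGetCorners() instead of assuming indices start at 1.
   Names (da_u, u_global) mirror the thread; illustrative sketch only. */
PetscErrorCode ZeroOwnedPart(DM da_u, Vec u_global)
{
  Vec            u_local;
  PetscScalar ***u_array;
  PetscInt       xs, ys, zs, xm, ym, zm, i, j, k;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* the local (ghosted) vector comes from the DM */
  ierr = DMGetLocalVector(da_u, &u_local);CHKERRQ(ierr);
  ierr = DMGlobalToLocalBegin(da_u, u_global, INSERT_VALUES, u_local);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da_u, u_global, INSERT_VALUES, u_local);CHKERRQ(ierr);

  ierr = DMDAVecGetArray(da_u, u_local, &u_array);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da_u, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);
  for (k = zs; k < zs + zm; ++k)
    for (j = ys; j < ys + ym; ++j)
      for (i = xs; i < xs + xm; ++i)
        u_array[k][j][i] = 0.0;   /* global indices from the corners, not 1-based */
  ierr = DMDAVecRestoreArray(da_u, u_local, &u_array);CHKERRQ(ierr);

  ierr = DMLocalToGlobalBegin(da_u, u_local, INSERT_VALUES, u_global);CHKERRQ(ierr);
  ierr = DMLocalToGlobalEnd(da_u, u_local, INSERT_VALUES, u_global);CHKERRQ(ierr);
  ierr = DMRestoreLocalVector(da_u, &u_local);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Get and restore pairs can be closed in any order; if the order changes whether the code crashes, that points to memory being overwritten earlier, as noted above in the thread.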
URL: From zonexo at gmail.com Sat Apr 19 10:49:29 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 19 Apr 2014 23:49:29 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> Message-ID: <53529B09.8040009@gmail.com> On 19/4/2014 11:39 PM, Matthew Knepley wrote: > On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng > wrote: > > On 19/4/2014 10:55 PM, Matthew Knepley wrote: >> On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng > > wrote: >> >> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >>> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng >>> > wrote: >>> >>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>> >>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng >>> > wrote: >>> >>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>> >>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng >>> > >>> wrote: >>> >>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>> >>> Hmm, >>> >>> Interface DMDAVecGetArrayF90 >>> Subroutine >>> DMDAVecGetArrayF903(da1, v,d1,ierr) >>> USE_DM_HIDE >>> DM_HIDE da1 >>> VEC_HIDE v >>> PetscScalar,pointer :: d1(:,:,:) >>> PetscErrorCode ierr >>> End Subroutine >>> >>> So the d1 is a F90 POINTER. But >>> your subroutine seems to be treating >>> it as a ?plain old Fortran array?? >>> real(8), intent(inout) :: >>> u(:,:,:),v(:,:,:),w(:,:,:) >>> >>> Hi, >>> >>> So d1 is a pointer, and it's different if I >>> declare it as "plain old Fortran array"? Because >>> I declare it as a Fortran array and it works w/o >>> any problem if I only call DMDAVecGetArrayF90 >>> and DMDAVecRestoreArrayF90 with "u". >>> >>> But if I call DMDAVecGetArrayF90 and >>> DMDAVecRestoreArrayF90 with "u", "v" and "w", >>> error starts to happen. I wonder why... >>> >>> Also, supposed I call: >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call >>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> u_array .... >>> >>> v_array .... etc >>> >>> Now to restore the array, does it matter the >>> sequence they are restored? >>> >>> No it should not matter. If it matters that is a >>> sign that memory has been written to incorrectly >>> earlier in the code. >>> >>> Hi, >>> >>> Hmm, I have been getting different results on different >>> intel compilers. I'm not sure if MPI played a part but >>> I'm only using a single processor. In the debug mode, >>> things run without problem. In optimized mode, in some >>> cases, the code aborts even doing simple initialization: >>> >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>> >>> u_array = 0.d0 >>> >>> v_array = 0.d0 >>> >>> w_array = 0.d0 >>> >>> p_array = 0.d0 >>> >>> >>> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>> >>> >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> The code aborts at call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), >>> giving segmentation error. 
But other version of intel >>> compiler passes thru this part w/o error. Since the >>> response is different among different compilers, is this >>> PETSc or intel 's bug? Or mvapich or openmpi? >>> >>> >>> We do this is a bunch of examples. Can you reproduce this >>> different behavior in src/dm/examples/tutorials/ex11f90.F? >> >> Hi Matt, >> >> Do you mean putting the above lines into ex11f90.F and test? >> >> >> It already has DMDAVecGetArray(). Just run it. > > Hi, > > It worked. The differences between mine and the code is the way > the fortran modules are defined, and the ex11f90 only uses global > vectors. Does it make a difference whether global or local vectors > are used? Because the way it accesses x1 only touches the local > region. > > > No the global/local difference should not matter. > > Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must be > used 1st, is that so? I can't find the equivalent for local vector > though. > > > DMGetLocalVector() Ops, I do not have DMGetLocalVector and DMRestoreLocalVector in my code. Does it matter? If so, when should I call them? Thanks. > > Matt > > Thanks. >> >> Matt >> >> Thanks >> >> Regards. >>> >>> Matt >>> >>> As in w, then v and u? >>> >>> call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> call >>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> call >>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> thanks >>> >>> Note also that the beginning and >>> end indices of the u,v,w, are >>> different for each process see for >>> example >>> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F >>> (and they do not start at 1). This >>> is how to get the loop bounds. >>> >>> Hi, >>> >>> In my case, I fixed the u,v,w such that >>> their indices are the same. I also >>> checked using DMDAGetCorners and >>> DMDAGetGhostCorners. Now the problem >>> lies in my subroutine treating it as a >>> ?plain old Fortran array?. >>> >>> If I declare them as pointers, their >>> indices follow the C 0 start convention, >>> is that so? >>> >>> Not really. It is that in each process >>> you need to access them from the indices >>> indicated by DMDAGetCorners() for global >>> vectors and DMDAGetGhostCorners() for local >>> vectors. So really C or Fortran doesn?t >>> make any difference. >>> >>> >>> So my problem now is that in my old MPI >>> code, the u(i,j,k) follow the Fortran 1 >>> start convention. Is there some way to >>> manipulate such that I do not have to >>> change my u(i,j,k) to u(i-1,j-1,k-1)? >>> >>> If you code wishes to access them with >>> indices plus one from the values returned by >>> DMDAGetCorners() for global vectors and >>> DMDAGetGhostCorners() for local vectors then >>> you need to manually subtract off the 1. >>> >>> Barry >>> >>> Thanks. >>> >>> Barry >>> >>> On Apr 18, 2014, at 10:58 AM, TAY >>> wee-beng >> > wrote: >>> >>> Hi, >>> >>> I tried to pinpoint the problem. >>> I reduced my job size and hence >>> I can run on 1 processor. Tried >>> using valgrind but perhaps I'm >>> using the optimized version, it >>> didn't catch the error, besides >>> saying "Segmentation fault (core >>> dumped)" >>> >>> However, by re-writing my code, >>> I found out a few things: >>> >>> 1. if I write my code this way: >>> >>> call >>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call >>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> u_array = .... >>> >>> v_array = .... >>> >>> w_array = .... 
>>> >>> call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call >>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> The code runs fine. >>> >>> 2. if I write my code this way: >>> >>> call >>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call >>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> call >>> uvw_array_change(u_array,v_array,w_array) >>> -> this subroutine does the same >>> modification as the above. >>> >>> call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call >>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> -> error >>> >>> where the subroutine is: >>> >>> subroutine uvw_array_change(u,v,w) >>> >>> real(8), intent(inout) :: >>> u(:,:,:),v(:,:,:),w(:,:,:) >>> >>> u ... >>> v... >>> w ... >>> >>> end subroutine uvw_array_change. >>> >>> The above will give an error at : >>> >>> call >>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> 3. Same as above, except I >>> change the order of the last 3 >>> lines to: >>> >>> call >>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> call >>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> So they are now in reversed >>> order. Now it works. >>> >>> 4. Same as 2 or 3, except the >>> subroutine is changed to : >>> >>> subroutine uvw_array_change(u,v,w) >>> >>> real(8), intent(inout) :: >>> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>> >>> real(8), intent(inout) :: >>> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>> >>> real(8), intent(inout) :: >>> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>> >>> u ... >>> v... >>> w ... >>> >>> end subroutine uvw_array_change. >>> >>> The start_indices and >>> end_indices are simply to shift >>> the 0 indices of C convention to >>> that of the 1 indices of the >>> Fortran convention. This is >>> necessary in my case because >>> most of my codes start array >>> counting at 1, hence the "trick". >>> >>> However, now no matter which >>> order of the >>> DMDAVecRestoreArrayF90 (as in 2 >>> or 3), error will occur at "call >>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> " >>> >>> So did I violate and cause >>> memory corruption due to the >>> trick above? But I can't think >>> of any way other than the >>> "trick" to continue using the 1 >>> indices convention. >>> >>> Thank you. >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 15/4/2014 8:00 PM, Barry >>> Smith wrote: >>> >>> Try running under >>> valgrind >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>> >>> >>> On Apr 14, 2014, at 9:47 PM, >>> TAY wee-beng >>> >> > >>> wrote: >>> >>> Hi Barry, >>> >>> As I mentioned earlier, >>> the code works fine in >>> PETSc debug mode but >>> fails in non-debug mode. >>> >>> I have attached my code. >>> >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 15/4/2014 2:26 AM, >>> Barry Smith wrote: >>> >>> Please send the >>> code that creates >>> da_w and the >>> declarations of w_array >>> >>> Barry >>> >>> On Apr 14, 2014, at >>> 9:40 AM, TAY wee-beng >>> >> > >>> wrote: >>> >>> >>> Hi Barry, >>> >>> I'm not too sure >>> how to do it. 
>>> I'm running mpi. >>> So I run: >>> >>> mpirun -n 4 >>> ./a.out >>> -start_in_debugger >>> >>> I got the msg >>> below. Before >>> the gdb windows >>> appear (thru >>> x11), the >>> program aborts. >>> >>> Also I tried >>> running in >>> another cluster >>> and it worked. >>> Also tried in >>> the current >>> cluster in debug >>> mode and it >>> worked too. >>> >>> mpirun -n 4 >>> ./a.out >>> -start_in_debugger >>> -------------------------------------------------------------------------- >>> An MPI process >>> has executed an >>> operation >>> involving a call >>> to the >>> "fork()" system >>> call to create a >>> child process. >>> Open MPI is >>> currently >>> operating in a >>> condition that >>> could result in >>> memory corruption or >>> other system >>> errors; your MPI >>> job may hang, >>> crash, or >>> produce silent >>> data corruption. >>> The use of >>> fork() (or >>> system() or >>> other calls that >>> create child >>> processes) is >>> strongly >>> discouraged. >>> >>> The process that >>> invoked fork was: >>> >>> Local host: >>> n12-76 (PID 20235) >>> MPI_COMM_WORLD >>> rank: 2 >>> >>> If you are >>> *absolutely >>> sure* that your >>> application will >>> successfully >>> and correctly >>> survive a call >>> to fork(), you >>> may disable this >>> warning >>> by setting the >>> mpi_warn_on_fork >>> MCA parameter to 0. >>> -------------------------------------------------------------------------- >>> [2]PETSC ERROR: >>> PETSC: Attaching >>> gdb to ./a.out >>> of pid 20235 on >>> display >>> localhost:50.0 >>> on machine n12-76 >>> [0]PETSC ERROR: >>> PETSC: Attaching >>> gdb to ./a.out >>> of pid 20233 on >>> display >>> localhost:50.0 >>> on machine n12-76 >>> [1]PETSC ERROR: >>> PETSC: Attaching >>> gdb to ./a.out >>> of pid 20234 on >>> display >>> localhost:50.0 >>> on machine n12-76 >>> [3]PETSC ERROR: >>> PETSC: Attaching >>> gdb to ./a.out >>> of pid 20236 on >>> display >>> localhost:50.0 >>> on machine n12-76 >>> [n12-76:20232] 3 >>> more processes >>> have sent help >>> message >>> help-mpi-runtime.txt >>> / mpi_init:warn-fork >>> [n12-76:20232] >>> Set MCA >>> parameter >>> "orte_base_help_aggregate" >>> to 0 to see all >>> help / error >>> messages >>> >>> .... >>> >>> 1 >>> [1]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [1]PETSC ERROR: >>> Caught signal >>> number 11 SEGV: >>> Segmentation >>> Violation, >>> probably memory >>> access out of range >>> [1]PETSC ERROR: >>> Try option >>> -start_in_debugger >>> or >>> -on_error_attach_debugger >>> [1]PETSC ERROR: >>> or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC >>> ERROR: or try >>> http://valgrind.org >>> on GNU/linux >>> and Apple Mac OS >>> X to find memory >>> corruption errors >>> [1]PETSC ERROR: >>> configure using >>> --with-debugging=yes, >>> recompile, link, >>> and run >>> [1]PETSC ERROR: >>> to get more >>> information on >>> the crash. 
>>> [1]PETSC ERROR: >>> User provided >>> function() line >>> 0 in unknown >>> directory >>> unknown file (null) >>> [3]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [3]PETSC ERROR: >>> Caught signal >>> number 11 SEGV: >>> Segmentation >>> Violation, >>> probably memory >>> access out of range >>> [3]PETSC ERROR: >>> Try option >>> -start_in_debugger >>> or >>> -on_error_attach_debugger >>> [3]PETSC ERROR: >>> or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC >>> ERROR: or try >>> http://valgrind.org >>> on GNU/linux >>> and Apple Mac OS >>> X to find memory >>> corruption errors >>> [3]PETSC ERROR: >>> configure using >>> --with-debugging=yes, >>> recompile, link, >>> and run >>> [3]PETSC ERROR: >>> to get more >>> information on >>> the crash. >>> [3]PETSC ERROR: >>> User provided >>> function() line >>> 0 in unknown >>> directory >>> unknown file (null) >>> >>> ... >>> Thank you. >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 14/4/2014 >>> 9:05 PM, Barry >>> Smith wrote: >>> >>> Because >>> IO doesn?t >>> always get >>> flushed >>> immediately >>> it may not >>> be hanging >>> at this >>> point. It >>> is better to >>> use the >>> option >>> -start_in_debugger >>> then type >>> cont in each >>> debugger >>> window and >>> then when >>> you think it >>> is ?hanging? >>> do a control >>> C in each >>> debugger >>> window and >>> type where >>> to see where >>> each process >>> is you can >>> also look >>> around in >>> the debugger >>> at variables >>> to see why >>> it is >>> ?hanging? at >>> that point. >>> >>> Barry >>> >>> This >>> routines >>> don?t have >>> any parallel >>> communication in >>> them so are >>> unlikely to >>> hang. >>> >>> On Apr 14, >>> 2014, at >>> 6:52 AM, TAY >>> wee-beng >>> >>> >> > >>> >>> wrote: >>> >>> >>> >>> Hi, >>> >>> My code >>> hangs >>> and I >>> added in >>> mpi_barrier >>> and >>> print to >>> catch >>> the bug. >>> I found >>> that it >>> hangs >>> after >>> printing >>> "7". Is >>> it >>> because >>> I'm >>> doing >>> something wrong? >>> I need >>> to >>> access >>> the >>> u,v,w >>> array so >>> I use >>> DMDAVecGetArrayF90. >>> After >>> access, >>> I use >>> DMDAVecRestoreArrayF90. >>> >>> >>> call >>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call >>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>> if >>> (myid==0) print >>> *,"3" >>> >>> call >>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>> if >>> (myid==0) print >>> *,"4" >>> >>> call >>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> call >>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>> if >>> (myid==0) print >>> *,"5" >>> >>> call >>> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>> >>> call >>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>> if >>> (myid==0) print >>> *,"6" >>> >>> call >>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> !must >>> be in >>> reverse >>> order >>> >>> call >>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>> if >>> (myid==0) print >>> *,"7" >>> >>> call >>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call >>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>> if >>> (myid==0) print >>> *,"8" >>> >>> call >>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> -- >>> Thank you. 
>>> >>> Yours >>> sincerely, >>> >>> TAY wee-beng >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> -- Norbert Wiener >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 19 11:35:20 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 19 Apr 2014 11:35:20 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Hello everybody. > > First, I am taking this example from the petsc-dev version, I am not sure > if I should have posted this in another mail-list, if so, my apologies. > > In this example, for the elasticity case, function g3 is built as: > > void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const > PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], > PetscScalar g3[]) > { > const PetscInt dim = spatialDim; > const PetscInt Ncomp = spatialDim; > PetscInt compI, d; > > for (compI = 0; compI < Ncomp; ++compI) { > for (d = 0; d < dim; ++d) { > g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; > } > } > } > > Therefore, a fourth-order tensor is represented as a vector. I was > checking the indices for different iterator values, and they do not seem to > match the vectorization that I have in mind. For a two dimensional case, > the indices for which the value is set as 1 are: > > compI = 0 , d = 0 -----> index = 0 > compI = 0 , d = 1 -----> index = 3 > compI = 1 , d = 0 -----> index = 12 > compI = 1 , d = 1 -----> index = 15 > > The values for the first and last seem correct to me, but they other two > are confusing me. I see that this elasticity tensor (which is the > derivative of the gradient by itself in this case) would be a four by four > identity matrix in its matrix representation, so the indices in between > would be 5 and 10 instead of 3 and 12, if we put one column on top of each > other. > I have read this a few times, but I cannot understand that you are asking. The simplest thing I can respond is that we are indexing a row-major array, using the indices: g3[ic, id, jc, jd] where ic indexes the components of the trial field, id indexes the derivative components, jc indexes the basis field components, and jd its derivative components. Matt > I guess my question is then, how did you vectorize the fourth order > tensor? > > Thanks in advance > Miguel > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
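To make the flattening in the g3_elas snippet above concrete: a small standalone sketch (not from the thread) that reproduces the indices 0, 3, 12 and 15 for dim = Ncomp = 2. The generalized index formula below is only inferred from the posted loop, so treat it as an assumption rather than a statement of the official convention.

#include <stdio.h>

/* Flatten (ic, jc, id, jd) the same way the posted g3_elas loop does:
   the two field-component indices come first, then the two derivative
   directions.  Inferred from the loop index
   ((compI*Ncomp + compI)*dim + d)*dim + d; illustration only. */
static int flatten(int ic, int jc, int id, int jd, int Ncomp, int dim)
{
  return ((ic * Ncomp + jc) * dim + id) * dim + jd;
}

int main(void)
{
  const int dim = 2, Ncomp = 2;
  int       compI, d;

  for (compI = 0; compI < Ncomp; ++compI)
    for (d = 0; d < dim; ++d)
      printf("compI = %d, d = %d -> index = %d\n",
             compI, d, flatten(compI, compI, d, d, Ncomp, dim));
  /* prints 0, 3, 12, 15; the values 5 and 10 would only appear if a
     derivative index were interleaved between the two component indices,
     i.e. a ((ic*dim + id)*Ncomp + jc)*dim + jd ordering. */
  return 0;
}

In other words, under this flattening the two field-component indices are grouped before the two derivative directions, which is why the column-stacked positions 5 and 10 do not show up in the loop above.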
URL: From knepley at gmail.com Sat Apr 19 12:02:04 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 19 Apr 2014 12:02:04 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <53529B09.8040009@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> <53529B09.8040009@gmail.com> Message-ID: On Sat, Apr 19, 2014 at 10:49 AM, TAY wee-beng wrote: > On 19/4/2014 11:39 PM, Matthew Knepley wrote: > > On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng wrote: > >> On 19/4/2014 10:55 PM, Matthew Knepley wrote: >> >> On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng wrote: >> >>> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >>> >>> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng wrote: >>> >>>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>>> >>>>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: >>>>> >>>>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>>>> >>>>>>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>>>>>> >>>>>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>>>>> >>>>>>>>> Hmm, >>>>>>>>> >>>>>>>>> Interface DMDAVecGetArrayF90 >>>>>>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>>>>>> USE_DM_HIDE >>>>>>>>> DM_HIDE da1 >>>>>>>>> VEC_HIDE v >>>>>>>>> PetscScalar,pointer :: d1(:,:,:) >>>>>>>>> PetscErrorCode ierr >>>>>>>>> End Subroutine >>>>>>>>> >>>>>>>>> So the d1 is a F90 POINTER. But your subroutine seems to be >>>>>>>>> treating it as a ?plain old Fortran array?? >>>>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>>>> >>>>>>>> Hi, >>>>>> >>>>>> So d1 is a pointer, and it's different if I declare it as "plain old >>>>>> Fortran array"? Because I declare it as a Fortran array and it works w/o >>>>>> any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 >>>>>> with "u". >>>>>> >>>>>> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", >>>>>> "v" and "w", error starts to happen. I wonder why... >>>>>> >>>>>> Also, supposed I call: >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> u_array .... >>>>>> >>>>>> v_array .... etc >>>>>> >>>>>> Now to restore the array, does it matter the sequence they are >>>>>> restored? >>>>>> >>>>> No it should not matter. If it matters that is a sign that memory >>>>> has been written to incorrectly earlier in the code. >>>>> >>>>> Hi, >>>> >>>> Hmm, I have been getting different results on different intel >>>> compilers. I'm not sure if MPI played a part but I'm only using a single >>>> processor. In the debug mode, things run without problem. 
In optimized >>>> mode, in some cases, the code aborts even doing simple initialization: >>>> >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>>> >>>> u_array = 0.d0 >>>> >>>> v_array = 0.d0 >>>> >>>> w_array = 0.d0 >>>> >>>> p_array = 0.d0 >>>> >>>> >>>> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>>> >>>> >>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> The code aborts at call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving segmentation >>>> error. But other version of intel compiler passes thru this part w/o error. >>>> Since the response is different among different compilers, is this PETSc or >>>> intel 's bug? Or mvapich or openmpi? >>> >>> >>> We do this is a bunch of examples. Can you reproduce this different >>> behavior in src/dm/examples/tutorials/ex11f90.F? >>> >>> >>> Hi Matt, >>> >>> Do you mean putting the above lines into ex11f90.F and test? >>> >> >> It already has DMDAVecGetArray(). Just run it. >> >> >> Hi, >> >> It worked. The differences between mine and the code is the way the >> fortran modules are defined, and the ex11f90 only uses global vectors. Does >> it make a difference whether global or local vectors are used? Because the >> way it accesses x1 only touches the local region. >> > > No the global/local difference should not matter. > > >> Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must be used >> 1st, is that so? I can't find the equivalent for local vector though. >> > > DMGetLocalVector() > > > Ops, I do not have DMGetLocalVector and DMRestoreLocalVector in my code. > Does it matter? > > If so, when should I call them? > You just need a local vector from somewhere. Matt > Thanks. > > > Matt > > >> Thanks. >> >> >> Matt >> >> >>> Thanks >>> >>> Regards. >>> >>> >>> Matt >>> >>> >>>> As in w, then v and u? >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> thanks >>>>>> >>>>>>> Note also that the beginning and end indices of the u,v,w, are >>>>>>>>> different for each process see for example >>>>>>>>> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> In my case, I fixed the u,v,w such that their indices are the same. >>>>>>>> I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the >>>>>>>> problem lies in my subroutine treating it as a ?plain old Fortran array?. >>>>>>>> >>>>>>>> If I declare them as pointers, their indices follow the C 0 start >>>>>>>> convention, is that so? >>>>>>>> >>>>>>> Not really. It is that in each process you need to access them >>>>>>> from the indices indicated by DMDAGetCorners() for global vectors and >>>>>>> DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t >>>>>>> make any difference. >>>>>>> >>>>>>> >>>>>>> So my problem now is that in my old MPI code, the u(i,j,k) follow >>>>>>>> the Fortran 1 start convention. Is there some way to manipulate such that I >>>>>>>> do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? 
>>>>>>>> >>>>>>> If you code wishes to access them with indices plus one from the >>>>>>> values returned by DMDAGetCorners() for global vectors and >>>>>>> DMDAGetGhostCorners() for local vectors then you need to manually subtract >>>>>>> off the 1. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> Thanks. >>>>>>>> >>>>>>>>> Barry >>>>>>>>> >>>>>>>>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> I tried to pinpoint the problem. I reduced my job size and hence >>>>>>>>>> I can run on 1 processor. Tried using valgrind but perhaps I'm using the >>>>>>>>>> optimized version, it didn't catch the error, besides saying "Segmentation >>>>>>>>>> fault (core dumped)" >>>>>>>>>> >>>>>>>>>> However, by re-writing my code, I found out a few things: >>>>>>>>>> >>>>>>>>>> 1. if I write my code this way: >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>> >>>>>>>>>> u_array = .... >>>>>>>>>> >>>>>>>>>> v_array = .... >>>>>>>>>> >>>>>>>>>> w_array = .... >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> >>>>>>>>>> The code runs fine. >>>>>>>>>> >>>>>>>>>> 2. if I write my code this way: >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>> >>>>>>>>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine >>>>>>>>>> does the same modification as the above. >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>>>>>>>> >>>>>>>>>> where the subroutine is: >>>>>>>>>> >>>>>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>>>>> >>>>>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>>>>> >>>>>>>>>> u ... >>>>>>>>>> v... >>>>>>>>>> w ... >>>>>>>>>> >>>>>>>>>> end subroutine uvw_array_change. >>>>>>>>>> >>>>>>>>>> The above will give an error at : >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> >>>>>>>>>> 3. Same as above, except I change the order of the last 3 lines >>>>>>>>>> to: >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>> >>>>>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>> >>>>>>>>>> So they are now in reversed order. Now it works. >>>>>>>>>> >>>>>>>>>> 4. 
Same as 2 or 3, except the subroutine is changed to : >>>>>>>>>> >>>>>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>>>>> >>>>>>>>>> real(8), intent(inout) :: >>>>>>>>>> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>>>> >>>>>>>>>> real(8), intent(inout) :: >>>>>>>>>> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>>>> >>>>>>>>>> real(8), intent(inout) :: >>>>>>>>>> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>>>>> >>>>>>>>>> u ... >>>>>>>>>> v... >>>>>>>>>> w ... >>>>>>>>>> >>>>>>>>>> end subroutine uvw_array_change. >>>>>>>>>> >>>>>>>>>> The start_indices and end_indices are simply to shift the 0 >>>>>>>>>> indices of C convention to that of the 1 indices of the Fortran convention. >>>>>>>>>> This is necessary in my case because most of my codes start array counting >>>>>>>>>> at 1, hence the "trick". >>>>>>>>>> >>>>>>>>>> However, now no matter which order of the DMDAVecRestoreArrayF90 >>>>>>>>>> (as in 2 or 3), error will occur at "call >>>>>>>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>>>>>>>> >>>>>>>>>> So did I violate and cause memory corruption due to the trick >>>>>>>>>> above? But I can't think of any way other than the "trick" to continue >>>>>>>>>> using the 1 indices convention. >>>>>>>>>> >>>>>>>>>> Thank you. >>>>>>>>>> >>>>>>>>>> Yours sincerely, >>>>>>>>>> >>>>>>>>>> TAY wee-beng >>>>>>>>>> >>>>>>>>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>>>>>>> >>>>>>>>>>> Try running under valgrind >>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Barry, >>>>>>>>>>>> >>>>>>>>>>>> As I mentioned earlier, the code works fine in PETSc debug mode >>>>>>>>>>>> but fails in non-debug mode. >>>>>>>>>>>> >>>>>>>>>>>> I have attached my code. >>>>>>>>>>>> >>>>>>>>>>>> Thank you >>>>>>>>>>>> >>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>> >>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>> >>>>>>>>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Please send the code that creates da_w and the declarations >>>>>>>>>>>>> of w_array >>>>>>>>>>>>> >>>>>>>>>>>>> Barry >>>>>>>>>>>>> >>>>>>>>>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>>>>>>>>> >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Hi Barry, >>>>>>>>>>>>>> >>>>>>>>>>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>>>>>>>>>> >>>>>>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>>>>>> >>>>>>>>>>>>>> I got the msg below. Before the gdb windows appear (thru >>>>>>>>>>>>>> x11), the program aborts. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Also I tried running in another cluster and it worked. Also >>>>>>>>>>>>>> tried in the current cluster in debug mode and it worked too. >>>>>>>>>>>>>> >>>>>>>>>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>>>>>>>>> >>>>>>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>>>>>> An MPI process has executed an operation involving a call to >>>>>>>>>>>>>> the >>>>>>>>>>>>>> "fork()" system call to create a child process. Open MPI is >>>>>>>>>>>>>> currently >>>>>>>>>>>>>> operating in a condition that could result in memory >>>>>>>>>>>>>> corruption or >>>>>>>>>>>>>> other system errors; your MPI job may hang, crash, or produce >>>>>>>>>>>>>> silent >>>>>>>>>>>>>> data corruption. 
The use of fork() (or system() or other >>>>>>>>>>>>>> calls that >>>>>>>>>>>>>> create child processes) is strongly discouraged. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The process that invoked fork was: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Local host: n12-76 (PID 20235) >>>>>>>>>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>>>>>>>>> >>>>>>>>>>>>>> If you are *absolutely sure* that your application will >>>>>>>>>>>>>> successfully >>>>>>>>>>>>>> and correctly survive a call to fork(), you may disable this >>>>>>>>>>>>>> warning >>>>>>>>>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>>>>>>>>>>> >>>>>>>>>>>>>> -------------------------------------------------------------------------- >>>>>>>>>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 >>>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 >>>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 >>>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 >>>>>>>>>>>>>> on display localhost:50.0 on machine n12-76 >>>>>>>>>>>>>> [n12-76:20232] 3 more processes have sent help message >>>>>>>>>>>>>> help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>>>>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" >>>>>>>>>>>>>> to 0 to see all help / error messages >>>>>>>>>>>>>> >>>>>>>>>>>>>> .... >>>>>>>>>>>>>> >>>>>>>>>>>>>> 1 >>>>>>>>>>>>>> [1]PETSC ERROR: >>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>>>>>>>>> Violation, probably memory access out of range >>>>>>>>>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>>> [1]PETSC ERROR: or see >>>>>>>>>>>>>> >>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try >>>>>>>>>>>>>> http://valgrind.org >>>>>>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption >>>>>>>>>>>>>> errors >>>>>>>>>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, >>>>>>>>>>>>>> recompile, link, and run >>>>>>>>>>>>>> [1]PETSC ERROR: to get more information on the crash. >>>>>>>>>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown >>>>>>>>>>>>>> directory unknown file (null) >>>>>>>>>>>>>> [3]PETSC ERROR: >>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>>>>>>>>> Violation, probably memory access out of range >>>>>>>>>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>>> [3]PETSC ERROR: or see >>>>>>>>>>>>>> >>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSCERROR: or try >>>>>>>>>>>>>> http://valgrind.org >>>>>>>>>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption >>>>>>>>>>>>>> errors >>>>>>>>>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, >>>>>>>>>>>>>> recompile, link, and run >>>>>>>>>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>>>>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown >>>>>>>>>>>>>> directory unknown file (null) >>>>>>>>>>>>>> >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> Thank you. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>>>> >>>>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Because IO doesn?t always get flushed immediately it may >>>>>>>>>>>>>>> not be hanging at this point. It is better to use the option >>>>>>>>>>>>>>> -start_in_debugger then type cont in each debugger window and then when you >>>>>>>>>>>>>>> think it is ?hanging? do a control C in each debugger window and type where >>>>>>>>>>>>>>> to see where each process is you can also look around in the debugger at >>>>>>>>>>>>>>> variables to see why it is ?hanging? at that point. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Barry >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This routines don?t have any parallel communication in >>>>>>>>>>>>>>> them so are unlikely to hang. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> My code hangs and I added in mpi_barrier and print to catch >>>>>>>>>>>>>>>> the bug. I found that it hangs after printing "7". Is it because I'm doing >>>>>>>>>>>>>>>> something wrong? I need to access the u,v,w array so I use >>>>>>>>>>>>>>>> DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>>> (myid==0) print *,"3" >>>>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>>> (myid==0) print *,"4" >>>>>>>>>>>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>>> (myid==0) print *,"5" >>>>>>>>>>>>>>>> call >>>>>>>>>>>>>>>> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>>> (myid==0) print *,"6" >>>>>>>>>>>>>>>> call >>>>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>>> (myid==0) print *,"7" >>>>>>>>>>>>>>>> call >>>>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>>>>>>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if >>>>>>>>>>>>>>>> (myid==0) print *,"8" >>>>>>>>>>>>>>>> call >>>>>>>>>>>>>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> Thank you. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Yours sincerely, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> TAY wee-beng >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Sat Apr 19 14:13:13 2014 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sat, 19 Apr 2014 21:13:13 +0200 Subject: [petsc-users] an ambiguity on a Lanczos solver for a symmetric system -- advice needed Message-ID: <5352CAC9.9010505@tudelft.nl> Dear all, I am experiencing lately some issues with a symmetric Lanczos eigensolver in FORTRAN. Basically, I have test code in MATLAB where I am using HSL_MA97(MATLAB interface) at the moment When I program Lanczos iterations in blocks in MATLAB by using HSL_MA97, as expected my overall solution time decreases meaning that block solution improves the solution efficiency. Then, to apply the same algorithm on problems on the orders of millions, I am transferring the same algorithm to a FORTRAN code but this time with MUMPS as the solver then I was expecting the solution time to decrease as well, but my overall solution times are increasing when I increase the block size. For a check with MUMPS, I only tried the block solution phase and compared 120 single solutions to 60 solutions by blocks of 2 30 solutions by blocks of 4 20 solutions by blocks of 6 15 solutions by blocks of 8 and saw that the total solution time in comparison to single solves are decreasing so I am thinking this is not the source of the problem, I believe. What I am doing is that I am performing a full reorthogonalization in the Lanczos loop, which includes some dgemm calls and moreover there are some other calls for sparse symmetric matrix vector multiplications from Intel MKL. I could not really understand why the overall solution time is increasing with the increase of the block sizes in FORTRAN whereas I was expecting even an improvement over my MATLAB code. Any ideas on what could be going wrong. Best regards and thanks in advance, Umut From salazardetroya at gmail.com Sat Apr 19 17:25:00 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Sat, 19 Apr 2014 17:25:00 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: Thanks for your response. Now I understand a bit better your implementation, but I am still confused. Is the g3 function equivalent to the f_{1,1} function in the matrix in equation (3) in this paper: http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have terms that depend on the trial fields? I think this is what is confusing me. On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: > On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Hello everybody. >> >> First, I am taking this example from the petsc-dev version, I am not sure >> if I should have posted this in another mail-list, if so, my apologies. 
>> >> In this example, for the elasticity case, function g3 is built as: >> >> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const >> PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >> PetscScalar g3[]) >> { >> const PetscInt dim = spatialDim; >> const PetscInt Ncomp = spatialDim; >> PetscInt compI, d; >> >> for (compI = 0; compI < Ncomp; ++compI) { >> for (d = 0; d < dim; ++d) { >> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >> } >> } >> } >> >> Therefore, a fourth-order tensor is represented as a vector. I was >> checking the indices for different iterator values, and they do not seem to >> match the vectorization that I have in mind. For a two dimensional case, >> the indices for which the value is set as 1 are: >> >> compI = 0 , d = 0 -----> index = 0 >> compI = 0 , d = 1 -----> index = 3 >> compI = 1 , d = 0 -----> index = 12 >> compI = 1 , d = 1 -----> index = 15 >> >> The values for the first and last seem correct to me, but they other two >> are confusing me. I see that this elasticity tensor (which is the >> derivative of the gradient by itself in this case) would be a four by four >> identity matrix in its matrix representation, so the indices in between >> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >> other. >> > > I have read this a few times, but I cannot understand that you are asking. > The simplest thing I can > respond is that we are indexing a row-major array, using the indices: > > g3[ic, id, jc, jd] > > where ic indexes the components of the trial field, id indexes the > derivative components, > jc indexes the basis field components, and jd its derivative components. > > Matt > > >> I guess my question is then, how did you vectorize the fourth order >> tensor? >> >> Thanks in advance >> Miguel >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 19 18:19:10 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 19 Apr 2014 18:19:10 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: On Sat, Apr 19, 2014 at 5:25 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Thanks for your response. Now I understand a bit better your > implementation, but I am still confused. Is the g3 function equivalent to > the f_{1,1} function in the matrix in equation (3) in this paper: > http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have terms > that depend on the trial fields? I think this is what is confusing me. > Yes, it is. It has no terms that depend on the trial fields. It is just indexed by the number of components in that field. It is a continuum thing, which has nothing to do with the discretization. That is why it acts pointwise. 
Matt > On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: > >> On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Hello everybody. >>> >>> First, I am taking this example from the petsc-dev version, I am not >>> sure if I should have posted this in another mail-list, if so, my >>> apologies. >>> >>> In this example, for the elasticity case, function g3 is built as: >>> >>> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const >>> PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >>> PetscScalar g3[]) >>> { >>> const PetscInt dim = spatialDim; >>> const PetscInt Ncomp = spatialDim; >>> PetscInt compI, d; >>> >>> for (compI = 0; compI < Ncomp; ++compI) { >>> for (d = 0; d < dim; ++d) { >>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>> } >>> } >>> } >>> >>> Therefore, a fourth-order tensor is represented as a vector. I was >>> checking the indices for different iterator values, and they do not seem to >>> match the vectorization that I have in mind. For a two dimensional case, >>> the indices for which the value is set as 1 are: >>> >>> compI = 0 , d = 0 -----> index = 0 >>> compI = 0 , d = 1 -----> index = 3 >>> compI = 1 , d = 0 -----> index = 12 >>> compI = 1 , d = 1 -----> index = 15 >>> >>> The values for the first and last seem correct to me, but they other two >>> are confusing me. I see that this elasticity tensor (which is the >>> derivative of the gradient by itself in this case) would be a four by four >>> identity matrix in its matrix representation, so the indices in between >>> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >>> other. >>> >> >> I have read this a few times, but I cannot understand that you are >> asking. The simplest thing I can >> respond is that we are indexing a row-major array, using the indices: >> >> g3[ic, id, jc, jd] >> >> where ic indexes the components of the trial field, id indexes the >> derivative components, >> jc indexes the basis field components, and jd its derivative components. >> >> Matt >> >> >>> I guess my question is then, how did you vectorize the fourth order >>> tensor? >>> >>> Thanks in advance >>> Miguel >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
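Coming back to the block-Lanczos question earlier in this digest (the MUMPS/MKL mail): with full reorthogonalization, the gain from a larger block is that the projection W <- W - Q*(Q^T*W) becomes two GEMM calls instead of many GEMV calls. Below is a minimal sketch of that single step, assuming column-major storage and a CBLAS-style dgemm (MKL exposes the same interface); the function name and layout choices are assumptions, not code from that mail.

#include <stdlib.h>
#include <cblas.h>

/* One full block reorthogonalization step of a block Lanczos iteration:
   W <- W - Q * (Q^T * W), with Q an n x k orthonormal basis and W the
   new n x b block, both stored column-major.  Illustrative sketch only. */
static void block_reorthogonalize(int n, int k, int b,
                                  const double *Q, double *W)
{
  double *C = malloc((size_t)k * b * sizeof(*C));  /* C = Q^T * W, size k x b */
  if (!C) return;

  cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
              k, b, n, 1.0, Q, n, W, n, 0.0, C, k);
  /* W <- W - Q * C */
  cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
              n, b, k, -1.0, Q, n, C, k, 1.0, W, n);
  free(C);
}

Timing just these two calls for the block sizes in question is a quick way to tell whether the reorthogonalization or the remaining pieces of the loop (for example the sparse matrix-vector products) account for the growth in total time.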
URL: From zonexo at gmail.com Sat Apr 19 19:39:25 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 20 Apr 2014 08:39:25 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> <53529B09.8040009@gmail.com> Message-ID: <5353173D.60609@gmail.com> On 20/4/2014 1:02 AM, Matthew Knepley wrote: > On Sat, Apr 19, 2014 at 10:49 AM, TAY wee-beng > wrote: > > On 19/4/2014 11:39 PM, Matthew Knepley wrote: >> On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng > > wrote: >> >> On 19/4/2014 10:55 PM, Matthew Knepley wrote: >>> On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng >>> > wrote: >>> >>> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >>>> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng >>>> > wrote: >>>> >>>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>>> >>>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng >>>> > wrote: >>>> >>>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>> >>>> On Apr 18, 2014, at 9:57 PM, TAY >>>> wee-beng >>> > wrote: >>>> >>>> On 19/4/2014 3:53 AM, Barry Smith >>>> wrote: >>>> >>>> Hmm, >>>> >>>> Interface DMDAVecGetArrayF90 >>>> Subroutine >>>> DMDAVecGetArrayF903(da1, v,d1,ierr) >>>> USE_DM_HIDE >>>> DM_HIDE da1 >>>> VEC_HIDE v >>>> PetscScalar,pointer :: d1(:,:,:) >>>> PetscErrorCode ierr >>>> End Subroutine >>>> >>>> So the d1 is a F90 POINTER. >>>> But your subroutine seems to be >>>> treating it as a ?plain old >>>> Fortran array?? >>>> real(8), intent(inout) :: >>>> u(:,:,:),v(:,:,:),w(:,:,:) >>>> >>>> Hi, >>>> >>>> So d1 is a pointer, and it's different if I >>>> declare it as "plain old Fortran array"? >>>> Because I declare it as a Fortran array and >>>> it works w/o any problem if I only call >>>> DMDAVecGetArrayF90 and >>>> DMDAVecRestoreArrayF90 with "u". >>>> >>>> But if I call DMDAVecGetArrayF90 and >>>> DMDAVecRestoreArrayF90 with "u", "v" and >>>> "w", error starts to happen. I wonder why... >>>> >>>> Also, supposed I call: >>>> >>>> call >>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call >>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> u_array .... >>>> >>>> v_array .... etc >>>> >>>> Now to restore the array, does it matter >>>> the sequence they are restored? >>>> >>>> No it should not matter. If it matters that >>>> is a sign that memory has been written to >>>> incorrectly earlier in the code. >>>> >>>> Hi, >>>> >>>> Hmm, I have been getting different results on >>>> different intel compilers. I'm not sure if MPI >>>> played a part but I'm only using a single >>>> processor. In the debug mode, things run without >>>> problem. 
In optimized mode, in some cases, the code >>>> aborts even doing simple initialization: >>>> >>>> >>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>>> >>>> u_array = 0.d0 >>>> >>>> v_array = 0.d0 >>>> >>>> w_array = 0.d0 >>>> >>>> p_array = 0.d0 >>>> >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>>> >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> The code aborts at call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), >>>> giving segmentation error. But other version of >>>> intel compiler passes thru this part w/o error. >>>> Since the response is different among different >>>> compilers, is this PETSc or intel 's bug? Or >>>> mvapich or openmpi? >>>> >>>> >>>> We do this is a bunch of examples. Can you reproduce >>>> this different behavior in >>>> src/dm/examples/tutorials/ex11f90.F? >>> >>> Hi Matt, >>> >>> Do you mean putting the above lines into ex11f90.F and test? >>> >>> >>> It already has DMDAVecGetArray(). Just run it. >> >> Hi, >> >> It worked. The differences between mine and the code is the >> way the fortran modules are defined, and the ex11f90 only >> uses global vectors. Does it make a difference whether global >> or local vectors are used? Because the way it accesses x1 >> only touches the local region. >> >> >> No the global/local difference should not matter. >> >> Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must >> be used 1st, is that so? I can't find the equivalent for >> local vector though. >> >> >> DMGetLocalVector() > > Ops, I do not have DMGetLocalVector and DMRestoreLocalVector in my > code. Does it matter? > > If so, when should I call them? > > > You just need a local vector from somewhere. Hi, I insert part of my error region code into ex11f90: call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) u_array = 0.d0 v_array = 0.d0 w_array = 0.d0 p_array = 0.d0 call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) It worked w/o error. I'm going to change the way the modules are defined in my code. My code contains a main program and a number of modules files, with subroutines inside e.g. module solve <- add include file? subroutine RRK <- add include file? end subroutine RRK end module solve So where should the include files (#include ) be placed? After the module or inside the subroutine? Thanks. > > Matt > > Thanks. >> >> Matt >> >> Thanks. >>> >>> Matt >>> >>> Thanks >>> >>> Regards. >>>> >>>> Matt >>>> >>>> As in w, then v and u? 
>>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> thanks >>>> >>>> Note also that the >>>> beginning and end indices of >>>> the u,v,w, are different for >>>> each process see for example >>>> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F >>>> (and they do not start at 1). >>>> This is how to get the loop bounds. >>>> >>>> Hi, >>>> >>>> In my case, I fixed the u,v,w such >>>> that their indices are the same. I >>>> also checked using DMDAGetCorners >>>> and DMDAGetGhostCorners. Now the >>>> problem lies in my subroutine >>>> treating it as a ?plain old Fortran >>>> array?. >>>> >>>> If I declare them as pointers, >>>> their indices follow the C 0 start >>>> convention, is that so? >>>> >>>> Not really. It is that in each >>>> process you need to access them from >>>> the indices indicated by >>>> DMDAGetCorners() for global vectors and >>>> DMDAGetGhostCorners() for local >>>> vectors. So really C or Fortran >>>> doesn?t make any difference. >>>> >>>> >>>> So my problem now is that in my old >>>> MPI code, the u(i,j,k) follow the >>>> Fortran 1 start convention. Is >>>> there some way to manipulate such >>>> that I do not have to change my >>>> u(i,j,k) to u(i-1,j-1,k-1)? >>>> >>>> If you code wishes to access them >>>> with indices plus one from the values >>>> returned by DMDAGetCorners() for global >>>> vectors and DMDAGetGhostCorners() for >>>> local vectors then you need to manually >>>> subtract off the 1. >>>> >>>> Barry >>>> >>>> Thanks. >>>> >>>> Barry >>>> >>>> On Apr 18, 2014, at 10:58 AM, >>>> TAY wee-beng >>> > wrote: >>>> >>>> Hi, >>>> >>>> I tried to pinpoint the >>>> problem. I reduced my job >>>> size and hence I can run on >>>> 1 processor. Tried using >>>> valgrind but perhaps I'm >>>> using the optimized >>>> version, it didn't catch >>>> the error, besides saying >>>> "Segmentation fault (core >>>> dumped)" >>>> >>>> However, by re-writing my >>>> code, I found out a few things: >>>> >>>> 1. if I write my code this way: >>>> >>>> call >>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call >>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> u_array = .... >>>> >>>> v_array = .... >>>> >>>> w_array = .... >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> The code runs fine. >>>> >>>> 2. if I write my code this way: >>>> >>>> call >>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call >>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call >>>> uvw_array_change(u_array,v_array,w_array) >>>> -> this subroutine does the >>>> same modification as the above. >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> -> error >>>> >>>> where the subroutine is: >>>> >>>> subroutine >>>> uvw_array_change(u,v,w) >>>> >>>> real(8), intent(inout) :: >>>> u(:,:,:),v(:,:,:),w(:,:,:) >>>> >>>> u ... >>>> v... >>>> w ... >>>> >>>> end subroutine >>>> uvw_array_change. 
>>>> >>>> The above will give an >>>> error at : >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> 3. Same as above, except I >>>> change the order of the >>>> last 3 lines to: >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> So they are now in reversed >>>> order. Now it works. >>>> >>>> 4. Same as 2 or 3, except >>>> the subroutine is changed to : >>>> >>>> subroutine >>>> uvw_array_change(u,v,w) >>>> >>>> real(8), intent(inout) :: >>>> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>> >>>> real(8), intent(inout) :: >>>> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>> >>>> real(8), intent(inout) :: >>>> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>> >>>> u ... >>>> v... >>>> w ... >>>> >>>> end subroutine >>>> uvw_array_change. >>>> >>>> The start_indices and >>>> end_indices are simply to >>>> shift the 0 indices of C >>>> convention to that of the 1 >>>> indices of the Fortran >>>> convention. This is >>>> necessary in my case >>>> because most of my codes >>>> start array counting at 1, >>>> hence the "trick". >>>> >>>> However, now no matter >>>> which order of the >>>> DMDAVecRestoreArrayF90 (as >>>> in 2 or 3), error will >>>> occur at "call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> " >>>> >>>> So did I violate and cause >>>> memory corruption due to >>>> the trick above? But I >>>> can't think of any way >>>> other than the "trick" to >>>> continue using the 1 >>>> indices convention. >>>> >>>> Thank you. >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> On 15/4/2014 8:00 PM, Barry >>>> Smith wrote: >>>> >>>> Try running under >>>> valgrind >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>> >>>> >>>> On Apr 14, 2014, at >>>> 9:47 PM, TAY wee-beng >>>> >>> > >>>> wrote: >>>> >>>> Hi Barry, >>>> >>>> As I mentioned >>>> earlier, the code >>>> works fine in PETSc >>>> debug mode but >>>> fails in non-debug >>>> mode. >>>> >>>> I have attached my >>>> code. >>>> >>>> Thank you >>>> >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> On 15/4/2014 2:26 >>>> AM, Barry Smith wrote: >>>> >>>> Please send >>>> the code that >>>> creates da_w >>>> and the >>>> declarations of >>>> w_array >>>> >>>> Barry >>>> >>>> On Apr 14, >>>> 2014, at 9:40 >>>> AM, TAY wee-beng >>>> >>> > >>>> wrote: >>>> >>>> >>>> Hi Barry, >>>> >>>> I'm not too >>>> sure how to >>>> do it. I'm >>>> running >>>> mpi. So I run: >>>> >>>> mpirun -n >>>> 4 ./a.out >>>> -start_in_debugger >>>> >>>> I got the >>>> msg below. >>>> Before the >>>> gdb windows >>>> appear >>>> (thru x11), >>>> the program >>>> aborts. >>>> >>>> Also I >>>> tried >>>> running in >>>> another >>>> cluster and >>>> it worked. >>>> Also tried >>>> in the >>>> current >>>> cluster in >>>> debug mode >>>> and it >>>> worked too. >>>> >>>> mpirun -n 4 >>>> ./a.out >>>> -start_in_debugger >>>> -------------------------------------------------------------------------- >>>> An MPI >>>> process has >>>> executed an >>>> operation >>>> involving a >>>> call to the >>>> "fork()" >>>> system call >>>> to create a >>>> child >>>> process. 
>>>> Open MPI >>>> is currently >>>> operating >>>> in a >>>> condition >>>> that could >>>> result in >>>> memory >>>> corruption or >>>> other >>>> system >>>> errors; >>>> your MPI >>>> job may >>>> hang, >>>> crash, or >>>> produce silent >>>> data >>>> corruption. >>>> The use of >>>> fork() (or >>>> system() or >>>> other calls >>>> that >>>> create >>>> child >>>> processes) >>>> is strongly >>>> discouraged. >>>> >>>> The process >>>> that >>>> invoked >>>> fork was: >>>> >>>> Local >>>> host: >>>> n12-76 >>>> (PID 20235) >>>> MPI_COMM_WORLD >>>> rank: 2 >>>> >>>> If you are >>>> *absolutely >>>> sure* that >>>> your >>>> application >>>> will >>>> successfully >>>> and >>>> correctly >>>> survive a >>>> call to >>>> fork(), you >>>> may disable >>>> this warning >>>> by setting >>>> the >>>> mpi_warn_on_fork >>>> MCA >>>> parameter to 0. >>>> -------------------------------------------------------------------------- >>>> [2]PETSC >>>> ERROR: >>>> PETSC: >>>> Attaching >>>> gdb to >>>> ./a.out of >>>> pid 20235 >>>> on display >>>> localhost:50.0 >>>> on machine >>>> n12-76 >>>> [0]PETSC >>>> ERROR: >>>> PETSC: >>>> Attaching >>>> gdb to >>>> ./a.out of >>>> pid 20233 >>>> on display >>>> localhost:50.0 >>>> on machine >>>> n12-76 >>>> [1]PETSC >>>> ERROR: >>>> PETSC: >>>> Attaching >>>> gdb to >>>> ./a.out of >>>> pid 20234 >>>> on display >>>> localhost:50.0 >>>> on machine >>>> n12-76 >>>> [3]PETSC >>>> ERROR: >>>> PETSC: >>>> Attaching >>>> gdb to >>>> ./a.out of >>>> pid 20236 >>>> on display >>>> localhost:50.0 >>>> on machine >>>> n12-76 >>>> [n12-76:20232] >>>> 3 more >>>> processes >>>> have sent >>>> help >>>> message >>>> help-mpi-runtime.txt >>>> / >>>> mpi_init:warn-fork >>>> [n12-76:20232] >>>> Set MCA >>>> parameter >>>> "orte_base_help_aggregate" >>>> to 0 to see >>>> all help / >>>> error messages >>>> >>>> .... >>>> >>>> 1 >>>> [1]PETSC >>>> ERROR: >>>> ------------------------------------------------------------------------ >>>> [1]PETSC >>>> ERROR: >>>> Caught >>>> signal >>>> number 11 >>>> SEGV: >>>> Segmentation Violation, >>>> probably >>>> memory >>>> access out >>>> of range >>>> [1]PETSC >>>> ERROR: Try >>>> option >>>> -start_in_debugger >>>> or >>>> -on_error_attach_debugger >>>> [1]PETSC >>>> ERROR: or see >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC >>>> ERROR: or >>>> try >>>> http://valgrind.org >>>> on >>>> GNU/linux >>>> and Apple >>>> Mac OS X to >>>> find memory >>>> corruption >>>> errors >>>> [1]PETSC >>>> ERROR: >>>> configure >>>> using >>>> --with-debugging=yes, >>>> recompile, >>>> link, and run >>>> [1]PETSC >>>> ERROR: to >>>> get more >>>> information >>>> on the crash. 
>>>> [1]PETSC >>>> ERROR: User >>>> provided >>>> function() >>>> line 0 in >>>> unknown >>>> directory >>>> unknown >>>> file (null) >>>> [3]PETSC >>>> ERROR: >>>> ------------------------------------------------------------------------ >>>> [3]PETSC >>>> ERROR: >>>> Caught >>>> signal >>>> number 11 >>>> SEGV: >>>> Segmentation Violation, >>>> probably >>>> memory >>>> access out >>>> of range >>>> [3]PETSC >>>> ERROR: Try >>>> option >>>> -start_in_debugger >>>> or >>>> -on_error_attach_debugger >>>> [3]PETSC >>>> ERROR: or see >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC >>>> ERROR: or >>>> try >>>> http://valgrind.org >>>> on >>>> GNU/linux >>>> and Apple >>>> Mac OS X to >>>> find memory >>>> corruption >>>> errors >>>> [3]PETSC >>>> ERROR: >>>> configure >>>> using >>>> --with-debugging=yes, >>>> recompile, >>>> link, and run >>>> [3]PETSC >>>> ERROR: to >>>> get more >>>> information >>>> on the crash. >>>> [3]PETSC >>>> ERROR: User >>>> provided >>>> function() >>>> line 0 in >>>> unknown >>>> directory >>>> unknown >>>> file (null) >>>> >>>> ... >>>> Thank you. >>>> >>>> Yours >>>> sincerely, >>>> >>>> TAY wee-beng >>>> >>>> On >>>> 14/4/2014 >>>> 9:05 PM, >>>> Barry Smith >>>> wrote: >>>> >>>> >>>> Because IO >>>> doesn?t >>>> always >>>> get >>>> flushed >>>> immediately >>>> it may >>>> not be >>>> hanging >>>> at this >>>> point. >>>> It is >>>> better >>>> to use >>>> the >>>> option >>>> -start_in_debugger >>>> then >>>> type >>>> cont in >>>> each >>>> debugger window >>>> and >>>> then >>>> when >>>> you >>>> think >>>> it is >>>> ?hanging? >>>> do a >>>> control >>>> C in >>>> each >>>> debugger window >>>> and >>>> type >>>> where >>>> to see >>>> where >>>> each >>>> process >>>> is you >>>> can >>>> also >>>> look >>>> around >>>> in the >>>> debugger at >>>> variables >>>> to see >>>> why it >>>> is >>>> ?hanging? >>>> at that >>>> point. >>>> >>>> Barry >>>> >>>> This >>>> routines don?t >>>> have >>>> any >>>> parallel communication >>>> in them >>>> so are >>>> unlikely to >>>> hang. >>>> >>>> On Apr >>>> 14, >>>> 2014, >>>> at 6:52 >>>> AM, TAY >>>> wee-beng >>>> >>>> >>> > >>>> >>>> wrote: >>>> >>>> >>>> >>>> Hi, >>>> >>>> My >>>> code hangs >>>> and >>>> I >>>> added >>>> in >>>> mpi_barrier >>>> and >>>> print >>>> to >>>> catch >>>> the >>>> bug. I >>>> found >>>> that it >>>> hangs >>>> after >>>> printing >>>> "7". Is >>>> it >>>> because >>>> I'm >>>> doing >>>> something >>>> wrong? >>>> I >>>> need to >>>> access >>>> the >>>> u,v,w >>>> array >>>> so >>>> I >>>> use >>>> DMDAVecGetArrayF90. >>>> After >>>> access, >>>> I >>>> use >>>> DMDAVecRestoreArrayF90. 
>>>> >>>> >>>> >>>> call >>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>> >>>> >>>> call >>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>> if >>>> (myid==0) >>>> print >>>> *,"3" >>>> >>>> >>>> call >>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> >>>> call >>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>> if >>>> (myid==0) >>>> print >>>> *,"4" >>>> >>>> >>>> call >>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>> >>>> >>>> call >>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>> if >>>> (myid==0) >>>> print >>>> *,"5" >>>> >>>> >>>> call >>>> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>> >>>> >>>> call >>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>> if >>>> (myid==0) >>>> print >>>> *,"6" >>>> >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>> !must >>>> be >>>> in >>>> reverse >>>> order >>>> >>>> >>>> call >>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>> if >>>> (myid==0) >>>> print >>>> *,"7" >>>> >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>> >>>> >>>> call >>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>> if >>>> (myid==0) >>>> print >>>> *,"8" >>>> >>>> >>>> call >>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>> -- >>>> Thank >>>> you. >>>> >>>> Yours >>>> sincerely, >>>> >>>> TAY >>>> wee-beng >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they >>>> begin their experiments is infinitely more interesting >>>> than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> -- Norbert Wiener >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From romvegeta at yahoo.fr Sun Apr 20 15:34:51 2014 From: romvegeta at yahoo.fr (Veltz Romain) Date: Sun, 20 Apr 2014 22:34:51 +0200 Subject: [petsc-users] petsc4py, MatShell and SNES Message-ID: Hi Petsc users, I would like to know how the following code must be modified in order to work with the latest versions of Petsc4py. http://code.google.com/p/petsc4py/issues/attachmentText?id=11&aid=4044638157340430804&name=SNES_MatShell.py&token=e664eb9a46d94c453334a46c924d3e93 Thank you for your help, Bests From salazardetroya at gmail.com Sun Apr 20 16:17:46 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Sun, 20 Apr 2014 16:17:46 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: I understand. So if we had a linear elastic material, the weak form would be something like phi_{i,j} C_{ijkl} u_{k,l} where the term C_{ijkl} u_{k,l} would correspond to the term f_1(U, gradient_U) in equation (1) in your paper I mentioned above, and g_3 would be C_{ijkl}. Therefore the indices {i,j} would be equivalent to the indices "ic, id" you mentioned before and "jc" and "jd" would be {k,l}? 
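Writing that identification out explicitly (my notation, only a sketch of the reading above, not anything taken from the paper or from ex52):

    [f_1]_{ij}(u, \nabla u) = C_{ijkl} u_{k,l} ,        [g_3]_{ijkl} = \partial [f_1]_{ij} / \partial u_{k,l} = C_{ijkl}

so the remaining question is just which pair of the four indices of g3 plays the role of {i,j} and which plays {k,l}.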
For g3[ic, id, jc, jd], transforming the four dimensional array to linear memory would be like this: g3[((ic*Ncomp+id)*dim+jc)*dim+jd] = 1.0; where Ncomp and dim are equal to the problem's spatial dimension. However, in the code, there are only two loops, to exploit the symmetry of the fourth order identity tensor: for (compI = 0; compI < Ncomp; ++compI) { for (d = 0; d < dim; ++d) { g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; } } Therefore the tensor entries that are set to one are: g3[0,0,0,0] g3[1,1,0,0] g3[0,0,1,1] g3[1,1,1,1] This would be equivalent to the fourth order tensor \delta_{ij} \delta_{kl}, but I think the one we need is \delta_{ik} \delta_{jl}, because it is the derivative of a matrix with respect itself (or the derivative of a gradient with respect to itself). This is assuming the indices of g3 correspond to what I said. Thanks in advance. Miguel On Apr 19, 2014 6:19 PM, "Matthew Knepley" wrote: > On Sat, Apr 19, 2014 at 5:25 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Thanks for your response. Now I understand a bit better your >> implementation, but I am still confused. Is the g3 function equivalent to >> the f_{1,1} function in the matrix in equation (3) in this paper: >> http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have terms >> that depend on the trial fields? I think this is what is confusing me. >> > > Yes, it is. It has no terms that depend on the trial fields. It is just > indexed by the number of components in that field. It is > a continuum thing, which has nothing to do with the discretization. That > is why it acts pointwise. > > Matt > > >> On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: >> >>> On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Hello everybody. >>>> >>>> First, I am taking this example from the petsc-dev version, I am not >>>> sure if I should have posted this in another mail-list, if so, my >>>> apologies. >>>> >>>> In this example, for the elasticity case, function g3 is built as: >>>> >>>> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const >>>> PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >>>> PetscScalar g3[]) >>>> { >>>> const PetscInt dim = spatialDim; >>>> const PetscInt Ncomp = spatialDim; >>>> PetscInt compI, d; >>>> >>>> for (compI = 0; compI < Ncomp; ++compI) { >>>> for (d = 0; d < dim; ++d) { >>>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>>> } >>>> } >>>> } >>>> >>>> Therefore, a fourth-order tensor is represented as a vector. I was >>>> checking the indices for different iterator values, and they do not seem to >>>> match the vectorization that I have in mind. For a two dimensional case, >>>> the indices for which the value is set as 1 are: >>>> >>>> compI = 0 , d = 0 -----> index = 0 >>>> compI = 0 , d = 1 -----> index = 3 >>>> compI = 1 , d = 0 -----> index = 12 >>>> compI = 1 , d = 1 -----> index = 15 >>>> >>>> The values for the first and last seem correct to me, but they other >>>> two are confusing me. I see that this elasticity tensor (which is the >>>> derivative of the gradient by itself in this case) would be a four by four >>>> identity matrix in its matrix representation, so the indices in between >>>> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >>>> other. >>>> >>> >>> I have read this a few times, but I cannot understand that you are >>> asking. 
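For reference, the flat positions 0, 3, 12, 15 listed earlier in this message can be double-checked with a few lines of standalone C that simply replay the g3_elas loop for Ncomp = dim = 2. This is only a checking aid I put together, not code from ex52:

    #include <stdio.h>

    int main(void)
    {
      const int dim = 2, Ncomp = 2;            /* the 2-D case discussed above */
      double    g3[2*2*2*2] = {0.0};
      int       compI, d, idx;

      /* the loop from g3_elas() quoted above */
      for (compI = 0; compI < Ncomp; ++compI)
        for (d = 0; d < dim; ++d)
          g3[((compI*Ncomp + compI)*dim + d)*dim + d] = 1.0;

      /* prints g3[0], g3[3], g3[12], g3[15] */
      for (idx = 0; idx < Ncomp*Ncomp*dim*dim; ++idx)
        if (g3[idx] != 0.0) printf("g3[%d] = %g\n", idx, g3[idx]);
      return 0;
    }

Which of the two delta tensors those four entries represent then depends only on the index ordering assumed for g3, which is exactly what the row-major g3[ic, id, jc, jd] convention quoted below settles.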
The simplest thing I can >>> respond is that we are indexing a row-major array, using the indices: >>> >>> g3[ic, id, jc, jd] >>> >>> where ic indexes the components of the trial field, id indexes the >>> derivative components, >>> jc indexes the basis field components, and jd its derivative components. >>> >>> Matt >>> >>> >>>> I guess my question is then, how did you vectorize the fourth order >>>> tensor? >>>> >>>> Thanks in advance >>>> Miguel >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Sun Apr 20 19:49:07 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 21 Apr 2014 08:49:07 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <5353173D.60609@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> <53529B09.8040009@gmail.com> <5353173D.60609@gmail.com> Message-ID: <53546B03.1010407@gmail.com> On 20/4/2014 8:39 AM, TAY wee-beng wrote: > On 20/4/2014 1:02 AM, Matthew Knepley wrote: >> On Sat, Apr 19, 2014 at 10:49 AM, TAY wee-beng > > wrote: >> >> On 19/4/2014 11:39 PM, Matthew Knepley wrote: >>> On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng >> > wrote: >>> >>> On 19/4/2014 10:55 PM, Matthew Knepley wrote: >>>> On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng >>>> > wrote: >>>> >>>> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >>>>> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng >>>>> > wrote: >>>>> >>>>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>>>> >>>>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng >>>>> > >>>>> wrote: >>>>> >>>>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>>> >>>>> On Apr 18, 2014, at 9:57 PM, TAY >>>>> wee-beng >>>> > wrote: >>>>> >>>>> On 19/4/2014 3:53 AM, Barry Smith >>>>> wrote: >>>>> >>>>> Hmm, >>>>> >>>>> Interface DMDAVecGetArrayF90 >>>>> Subroutine >>>>> DMDAVecGetArrayF903(da1, >>>>> v,d1,ierr) >>>>> USE_DM_HIDE >>>>> DM_HIDE da1 >>>>> VEC_HIDE v >>>>> PetscScalar,pointer :: d1(:,:,:) >>>>> PetscErrorCode ierr >>>>> End Subroutine >>>>> >>>>> So the d1 is a F90 >>>>> POINTER. But your subroutine >>>>> seems to be treating it as a >>>>> ?plain old Fortran array?? >>>>> real(8), intent(inout) :: >>>>> u(:,:,:),v(:,:,:),w(:,:,:) >>>>> >>>>> Hi, >>>>> >>>>> So d1 is a pointer, and it's different if >>>>> I declare it as "plain old Fortran array"? 
>>>>> Because I declare it as a Fortran array >>>>> and it works w/o any problem if I only >>>>> call DMDAVecGetArrayF90 and >>>>> DMDAVecRestoreArrayF90 with "u". >>>>> >>>>> But if I call DMDAVecGetArrayF90 and >>>>> DMDAVecRestoreArrayF90 with "u", "v" and >>>>> "w", error starts to happen. I wonder why... >>>>> >>>>> Also, supposed I call: >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> u_array .... >>>>> >>>>> v_array .... etc >>>>> >>>>> Now to restore the array, does it matter >>>>> the sequence they are restored? >>>>> >>>>> No it should not matter. If it matters >>>>> that is a sign that memory has been written to >>>>> incorrectly earlier in the code. >>>>> >>>>> Hi, >>>>> >>>>> Hmm, I have been getting different results on >>>>> different intel compilers. I'm not sure if MPI >>>>> played a part but I'm only using a single >>>>> processor. In the debug mode, things run without >>>>> problem. In optimized mode, in some cases, the >>>>> code aborts even doing simple initialization: >>>>> >>>>> >>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>>>> >>>>> u_array = 0.d0 >>>>> >>>>> v_array = 0.d0 >>>>> >>>>> w_array = 0.d0 >>>>> >>>>> p_array = 0.d0 >>>>> >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>>>> >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> The code aborts at call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), >>>>> giving segmentation error. But other version of >>>>> intel compiler passes thru this part w/o error. >>>>> Since the response is different among different >>>>> compilers, is this PETSc or intel 's bug? Or >>>>> mvapich or openmpi? >>>>> >>>>> >>>>> We do this is a bunch of examples. Can you reproduce >>>>> this different behavior in >>>>> src/dm/examples/tutorials/ex11f90.F? >>>> >>>> Hi Matt, >>>> >>>> Do you mean putting the above lines into ex11f90.F and >>>> test? >>>> >>>> >>>> It already has DMDAVecGetArray(). Just run it. >>> >>> Hi, >>> >>> It worked. The differences between mine and the code is the >>> way the fortran modules are defined, and the ex11f90 only >>> uses global vectors. Does it make a difference whether >>> global or local vectors are used? Because the way it >>> accesses x1 only touches the local region. >>> >>> >>> No the global/local difference should not matter. >>> >>> Also, before using DMDAVecGetArrayF90, DMGetGlobalVector >>> must be used 1st, is that so? I can't find the equivalent >>> for local vector though. >>> >>> >>> DMGetLocalVector() >> >> Ops, I do not have DMGetLocalVector and DMRestoreLocalVector in >> my code. Does it matter? >> >> If so, when should I call them? >> >> >> You just need a local vector from somewhere. Hi, Anyone can help with the questions below? Still trying to find why my code doesn't work. Thanks. 
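In case it helps to see the two pieces together, here is a minimal sketch of the calling sequence being discussed: get a local vector first, then keep the corner-based bounds when the array is handed to a subroutine by declaring the dummy argument as a pointer. This is only an illustration I put together, not the actual code from this thread. It assumes a 3-D DMDA da_u and a global vector u_global created elsewhere, free-form source compiled as a .F90 file so the #include lines are preprocessed, and the finclude header names below should be checked against the installed PETSc version (the top of src/dm/examples/tutorials/ex11f90.F has the authoritative list).

    module uvw_work
      implicit none
    #include <finclude/petscsys.h>
    #include <finclude/petscvec.h>
    #include <finclude/petscdm.h>
    #include <finclude/petscdmda.h>
    #include <finclude/petscvec.h90>
    #include <finclude/petscdmda.h90>
    contains

      subroutine zero_u(da_u, u_global, ierr)
        DM da_u
        Vec u_global, u_local
        PetscScalar, pointer :: u_array(:,:,:)
        PetscErrorCode ierr

        ! a local vector to go with DMDAVecGetArrayF90, per the exchange above
        call DMGetLocalVector(da_u, u_local, ierr)
        call DMGlobalToLocalBegin(da_u, u_global, INSERT_VALUES, u_local, ierr)
        call DMGlobalToLocalEnd(da_u, u_global, INSERT_VALUES, u_local, ierr)

        call DMDAVecGetArrayF90(da_u, u_local, u_array, ierr)
        call touch_array(u_array)
        call DMDAVecRestoreArrayF90(da_u, u_local, u_array, ierr)

        call DMRestoreLocalVector(da_u, u_local, ierr)
      end subroutine zero_u

      subroutine touch_array(u)
        ! a pointer dummy keeps the lower bounds set by DMDAVecGetArrayF90
        ! (the DMDAGetGhostCorners starts); an assumed-shape dummy u(:,:,:)
        ! would silently rebase every dimension to start at 1
        PetscScalar, pointer :: u(:,:,:)
        PetscInt i, j, k
        do k = lbound(u,3), ubound(u,3)
          do j = lbound(u,2), ubound(u,2)
            do i = lbound(u,1), ubound(u,1)
              u(i,j,k) = 0.d0
            end do
          end do
        end do
      end subroutine touch_array

    end module uvw_work

Whether something along these lines also removes the optimized-build crash is a separate question; as Barry notes earlier in the thread, if the order of the restore calls matters at all, memory has already been written incorrectly somewhere before that point.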
> Hi, > > I insert part of my error region code into ex11f90: > > call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) > > call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) > > u_array = 0.d0 > > v_array = 0.d0 > > w_array = 0.d0 > > p_array = 0.d0 > > call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) > > call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) > > call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) > > call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) > > It worked w/o error. I'm going to change the way the modules are > defined in my code. > > My code contains a main program and a number of modules files, with > subroutines inside e.g. > > module solve > <- add include file? > subroutine RRK > <- add include file? > end subroutine RRK > > end module solve > > So where should the include files (#include ) > be placed? > > After the module or inside the subroutine? > > Thanks. >> >> Matt >> >> Thanks. >>> >>> Matt >>> >>> Thanks. >>>> >>>> Matt >>>> >>>> Thanks >>>> >>>> Regards. >>>>> >>>>> Matt >>>>> >>>>> As in w, then v and u? >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> thanks >>>>> >>>>> Note also that the >>>>> beginning and end indices of >>>>> the u,v,w, are different for >>>>> each process see for example >>>>> http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F >>>>> (and they do not start at 1). >>>>> This is how to get the loop >>>>> bounds. >>>>> >>>>> Hi, >>>>> >>>>> In my case, I fixed the u,v,w such >>>>> that their indices are the same. I >>>>> also checked using DMDAGetCorners >>>>> and DMDAGetGhostCorners. Now the >>>>> problem lies in my subroutine >>>>> treating it as a ?plain old >>>>> Fortran array?. >>>>> >>>>> If I declare them as pointers, >>>>> their indices follow the C 0 start >>>>> convention, is that so? >>>>> >>>>> Not really. It is that in each >>>>> process you need to access them from >>>>> the indices indicated by >>>>> DMDAGetCorners() for global vectors >>>>> and DMDAGetGhostCorners() for local >>>>> vectors. So really C or Fortran >>>>> doesn?t make any difference. >>>>> >>>>> >>>>> So my problem now is that in my >>>>> old MPI code, the u(i,j,k) follow >>>>> the Fortran 1 start convention. Is >>>>> there some way to manipulate such >>>>> that I do not have to change my >>>>> u(i,j,k) to u(i-1,j-1,k-1)? >>>>> >>>>> If you code wishes to access them >>>>> with indices plus one from the values >>>>> returned by DMDAGetCorners() for >>>>> global vectors and >>>>> DMDAGetGhostCorners() for local >>>>> vectors then you need to manually >>>>> subtract off the 1. >>>>> >>>>> Barry >>>>> >>>>> Thanks. >>>>> >>>>> Barry >>>>> >>>>> On Apr 18, 2014, at 10:58 AM, >>>>> TAY wee-beng >>>> > wrote: >>>>> >>>>> Hi, >>>>> >>>>> I tried to pinpoint the >>>>> problem. I reduced my job >>>>> size and hence I can run >>>>> on 1 processor. Tried >>>>> using valgrind but perhaps >>>>> I'm using the optimized >>>>> version, it didn't catch >>>>> the error, besides saying >>>>> "Segmentation fault (core >>>>> dumped)" >>>>> >>>>> However, by re-writing my >>>>> code, I found out a few >>>>> things: >>>>> >>>>> 1. 
if I write my code this >>>>> way: >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> u_array = .... >>>>> >>>>> v_array = .... >>>>> >>>>> w_array = .... >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> The code runs fine. >>>>> >>>>> 2. if I write my code this >>>>> way: >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> call >>>>> uvw_array_change(u_array,v_array,w_array) >>>>> -> this subroutine does >>>>> the same modification as >>>>> the above. >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> -> error >>>>> >>>>> where the subroutine is: >>>>> >>>>> subroutine >>>>> uvw_array_change(u,v,w) >>>>> >>>>> real(8), intent(inout) :: >>>>> u(:,:,:),v(:,:,:),w(:,:,:) >>>>> >>>>> u ... >>>>> v... >>>>> w ... >>>>> >>>>> end subroutine >>>>> uvw_array_change. >>>>> >>>>> The above will give an >>>>> error at : >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> 3. Same as above, except I >>>>> change the order of the >>>>> last 3 lines to: >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> So they are now in >>>>> reversed order. Now it works. >>>>> >>>>> 4. Same as 2 or 3, except >>>>> the subroutine is changed to : >>>>> >>>>> subroutine >>>>> uvw_array_change(u,v,w) >>>>> >>>>> real(8), intent(inout) :: >>>>> u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>> >>>>> real(8), intent(inout) :: >>>>> v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>> >>>>> real(8), intent(inout) :: >>>>> w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>> >>>>> u ... >>>>> v... >>>>> w ... >>>>> >>>>> end subroutine >>>>> uvw_array_change. >>>>> >>>>> The start_indices and >>>>> end_indices are simply to >>>>> shift the 0 indices of C >>>>> convention to that of the >>>>> 1 indices of the Fortran >>>>> convention. This is >>>>> necessary in my case >>>>> because most of my codes >>>>> start array counting at 1, >>>>> hence the "trick". >>>>> >>>>> However, now no matter >>>>> which order of the >>>>> DMDAVecRestoreArrayF90 (as >>>>> in 2 or 3), error will >>>>> occur at "call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> " >>>>> >>>>> So did I violate and cause >>>>> memory corruption due to >>>>> the trick above? But I >>>>> can't think of any way >>>>> other than the "trick" to >>>>> continue using the 1 >>>>> indices convention. >>>>> >>>>> Thank you. 
>>>>> >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> On 15/4/2014 8:00 PM, >>>>> Barry Smith wrote: >>>>> >>>>> Try running under >>>>> valgrind >>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>> >>>>> >>>>> On Apr 14, 2014, at >>>>> 9:47 PM, TAY wee-beng >>>>> >>>> > >>>>> wrote: >>>>> >>>>> Hi Barry, >>>>> >>>>> As I mentioned >>>>> earlier, the code >>>>> works fine in >>>>> PETSc debug mode >>>>> but fails in >>>>> non-debug mode. >>>>> >>>>> I have attached my >>>>> code. >>>>> >>>>> Thank you >>>>> >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> On 15/4/2014 2:26 >>>>> AM, Barry Smith wrote: >>>>> >>>>> Please send >>>>> the code that >>>>> creates da_w >>>>> and the >>>>> declarations >>>>> of w_array >>>>> >>>>> Barry >>>>> >>>>> On Apr 14, >>>>> 2014, at 9:40 >>>>> AM, TAY wee-beng >>>>> >>>> > >>>>> wrote: >>>>> >>>>> >>>>> Hi Barry, >>>>> >>>>> I'm not >>>>> too sure >>>>> how to do >>>>> it. I'm >>>>> running >>>>> mpi. So I run: >>>>> >>>>> mpirun >>>>> -n 4 >>>>> ./a.out >>>>> -start_in_debugger >>>>> >>>>> I got the >>>>> msg below. >>>>> Before the >>>>> gdb >>>>> windows >>>>> appear >>>>> (thru >>>>> x11), the >>>>> program >>>>> aborts. >>>>> >>>>> Also I >>>>> tried >>>>> running in >>>>> another >>>>> cluster >>>>> and it >>>>> worked. >>>>> Also tried >>>>> in the >>>>> current >>>>> cluster in >>>>> debug mode >>>>> and it >>>>> worked too. >>>>> >>>>> mpirun -n >>>>> 4 ./a.out >>>>> -start_in_debugger >>>>> -------------------------------------------------------------------------- >>>>> An MPI >>>>> process >>>>> has >>>>> executed >>>>> an >>>>> operation >>>>> involving >>>>> a call to the >>>>> "fork()" >>>>> system >>>>> call to >>>>> create a >>>>> child >>>>> process. >>>>> Open MPI >>>>> is currently >>>>> operating >>>>> in a >>>>> condition >>>>> that could >>>>> result in >>>>> memory >>>>> corruption or >>>>> other >>>>> system >>>>> errors; >>>>> your MPI >>>>> job may >>>>> hang, >>>>> crash, or >>>>> produce silent >>>>> data >>>>> corruption. The >>>>> use of >>>>> fork() (or >>>>> system() >>>>> or other >>>>> calls that >>>>> create >>>>> child >>>>> processes) >>>>> is >>>>> strongly >>>>> discouraged. >>>>> >>>>> The >>>>> process >>>>> that >>>>> invoked >>>>> fork was: >>>>> >>>>> Local >>>>> host: >>>>> n12-76 >>>>> (PID 20235) >>>>> MPI_COMM_WORLD >>>>> rank: 2 >>>>> >>>>> If you are >>>>> *absolutely sure* >>>>> that your >>>>> application will >>>>> successfully >>>>> and >>>>> correctly >>>>> survive a >>>>> call to >>>>> fork(), >>>>> you may >>>>> disable >>>>> this warning >>>>> by setting >>>>> the >>>>> mpi_warn_on_fork >>>>> MCA >>>>> parameter >>>>> to 0. 
>>>>> -------------------------------------------------------------------------- >>>>> [2]PETSC >>>>> ERROR: >>>>> PETSC: >>>>> Attaching >>>>> gdb to >>>>> ./a.out of >>>>> pid 20235 >>>>> on display >>>>> localhost:50.0 >>>>> on machine >>>>> n12-76 >>>>> [0]PETSC >>>>> ERROR: >>>>> PETSC: >>>>> Attaching >>>>> gdb to >>>>> ./a.out of >>>>> pid 20233 >>>>> on display >>>>> localhost:50.0 >>>>> on machine >>>>> n12-76 >>>>> [1]PETSC >>>>> ERROR: >>>>> PETSC: >>>>> Attaching >>>>> gdb to >>>>> ./a.out of >>>>> pid 20234 >>>>> on display >>>>> localhost:50.0 >>>>> on machine >>>>> n12-76 >>>>> [3]PETSC >>>>> ERROR: >>>>> PETSC: >>>>> Attaching >>>>> gdb to >>>>> ./a.out of >>>>> pid 20236 >>>>> on display >>>>> localhost:50.0 >>>>> on machine >>>>> n12-76 >>>>> [n12-76:20232] >>>>> 3 more >>>>> processes >>>>> have sent >>>>> help >>>>> message >>>>> help-mpi-runtime.txt >>>>> / >>>>> mpi_init:warn-fork >>>>> [n12-76:20232] >>>>> Set MCA >>>>> parameter >>>>> "orte_base_help_aggregate" >>>>> to 0 to >>>>> see all >>>>> help / >>>>> error messages >>>>> >>>>> .... >>>>> >>>>> 1 >>>>> [1]PETSC >>>>> ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [1]PETSC >>>>> ERROR: >>>>> Caught >>>>> signal >>>>> number 11 >>>>> SEGV: >>>>> Segmentation >>>>> Violation, >>>>> probably >>>>> memory >>>>> access out >>>>> of range >>>>> [1]PETSC >>>>> ERROR: Try >>>>> option >>>>> -start_in_debugger >>>>> or >>>>> -on_error_attach_debugger >>>>> [1]PETSC >>>>> ERROR: or see >>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC >>>>> ERROR: or >>>>> try >>>>> http://valgrind.org >>>>> on >>>>> GNU/linux >>>>> and Apple >>>>> Mac OS X >>>>> to find >>>>> memory >>>>> corruption >>>>> errors >>>>> [1]PETSC >>>>> ERROR: >>>>> configure >>>>> using >>>>> --with-debugging=yes, >>>>> recompile, >>>>> link, and run >>>>> [1]PETSC >>>>> ERROR: to >>>>> get more >>>>> information on >>>>> the crash. >>>>> [1]PETSC >>>>> ERROR: >>>>> User >>>>> provided >>>>> function() >>>>> line 0 in >>>>> unknown >>>>> directory >>>>> unknown >>>>> file (null) >>>>> [3]PETSC >>>>> ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [3]PETSC >>>>> ERROR: >>>>> Caught >>>>> signal >>>>> number 11 >>>>> SEGV: >>>>> Segmentation >>>>> Violation, >>>>> probably >>>>> memory >>>>> access out >>>>> of range >>>>> [3]PETSC >>>>> ERROR: Try >>>>> option >>>>> -start_in_debugger >>>>> or >>>>> -on_error_attach_debugger >>>>> [3]PETSC >>>>> ERROR: or see >>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC >>>>> ERROR: or >>>>> try >>>>> http://valgrind.org >>>>> on >>>>> GNU/linux >>>>> and Apple >>>>> Mac OS X >>>>> to find >>>>> memory >>>>> corruption >>>>> errors >>>>> [3]PETSC >>>>> ERROR: >>>>> configure >>>>> using >>>>> --with-debugging=yes, >>>>> recompile, >>>>> link, and run >>>>> [3]PETSC >>>>> ERROR: to >>>>> get more >>>>> information on >>>>> the crash. >>>>> [3]PETSC >>>>> ERROR: >>>>> User >>>>> provided >>>>> function() >>>>> line 0 in >>>>> unknown >>>>> directory >>>>> unknown >>>>> file (null) >>>>> >>>>> ... >>>>> Thank you. >>>>> >>>>> Yours >>>>> sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> On >>>>> 14/4/2014 >>>>> 9:05 PM, >>>>> Barry >>>>> Smith wrote: >>>>> >>>>> >>>>> Because >>>>> IO >>>>> doesn?t always >>>>> get >>>>> flushed immediately >>>>> it may >>>>> not be >>>>> hanging at >>>>> this >>>>> point. 
>>>>> It is >>>>> better >>>>> to use >>>>> the >>>>> option >>>>> -start_in_debugger >>>>> then >>>>> type >>>>> cont >>>>> in >>>>> each >>>>> debugger >>>>> window >>>>> and >>>>> then >>>>> when >>>>> you >>>>> think >>>>> it is >>>>> ?hanging? >>>>> do a >>>>> control C >>>>> in >>>>> each >>>>> debugger >>>>> window >>>>> and >>>>> type >>>>> where >>>>> to see >>>>> where >>>>> each >>>>> process is >>>>> you >>>>> can >>>>> also >>>>> look >>>>> around >>>>> in the >>>>> debugger >>>>> at >>>>> variables >>>>> to see >>>>> why it >>>>> is >>>>> ?hanging? >>>>> at >>>>> that >>>>> point. >>>>> >>>>> Barry >>>>> >>>>> >>>>> This >>>>> routines >>>>> don?t >>>>> have >>>>> any >>>>> parallel >>>>> communication >>>>> in >>>>> them >>>>> so are >>>>> unlikely >>>>> to hang. >>>>> >>>>> On Apr >>>>> 14, >>>>> 2014, >>>>> at >>>>> 6:52 >>>>> AM, >>>>> TAY >>>>> wee-beng >>>>> >>>>> >>>> > >>>>> >>>>> wrote: >>>>> >>>>> >>>>> >>>>> Hi, >>>>> >>>>> My >>>>> code >>>>> hangs >>>>> and I >>>>> added >>>>> in >>>>> mpi_barrier >>>>> and print >>>>> to >>>>> catch >>>>> the bug. >>>>> I >>>>> found >>>>> that >>>>> it >>>>> hangs >>>>> after >>>>> printing >>>>> "7". >>>>> Is >>>>> it >>>>> because >>>>> I'm doing >>>>> something >>>>> wrong? >>>>> I >>>>> need >>>>> to >>>>> access >>>>> the u,v,w >>>>> array >>>>> so >>>>> I >>>>> use DMDAVecGetArrayF90. >>>>> After >>>>> access, >>>>> I >>>>> use DMDAVecRestoreArrayF90. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>>> if (myid==0) >>>>> print >>>>> *,"3" >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>>> if (myid==0) >>>>> print >>>>> *,"4" >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>>> if (myid==0) >>>>> print >>>>> *,"5" >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>>> if (myid==0) >>>>> print >>>>> *,"6" >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>> !must >>>>> be >>>>> in >>>>> reverse >>>>> order >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>>> if (myid==0) >>>>> print >>>>> *,"7" >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> MPI_Barrier(MPI_COMM_WORLD,ierr); >>>>> if (myid==0) >>>>> print >>>>> *,"8" >>>>> >>>>> >>>>> >>>>> >>>>> call >>>>> DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>> -- >>>>> Thank >>>>> you. >>>>> >>>>> Yours >>>>> sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they >>>>> begin their experiments is infinitely more interesting >>>>> than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin >>>> their experiments is infinitely more interesting than any >>>> results to which their experiments lead. 
>>>> -- Norbert Wiener >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to >>> which their experiments lead. >>> -- Norbert Wiener >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Apr 20 19:50:17 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 20 Apr 2014 19:50:17 -0500 Subject: [petsc-users] petsc4py, MatShell and SNES In-Reply-To: References: Message-ID: <87d2gbecae.fsf@jedbrown.org> Veltz Romain writes: > Hi Petsc users, > > I would like to know how the following code must be modified in order to work with the latest versions of Petsc4py. > > http://code.google.com/p/petsc4py/issues/attachmentText?id=11&aid=4044638157340430804&name=SNES_MatShell.py&token=e664eb9a46d94c453334a46c924d3e93 > > Thank you for your help, See attached. In the future, it's a good idea to try updating the example and let us know what went wrong. We can't tell from your email if anything needs updating, for example. -------------- next part -------------- A non-text attachment was scrubbed... Name: test.py Type: text/x-python Size: 741 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sun Apr 20 19:58:46 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 20 Apr 2014 19:58:46 -0500 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: <53546B03.1010407@gmail.com> References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> <53529B09.8040009@gmail.com> <5353173D.60609@gmail.com> <53546B03.1010407@gmail.com> Message-ID: Please send the entire code. If we can run it and reproduce the problem we can likely track down the issue much faster than through endless rounds of email. 
Barry On Apr 20, 2014, at 7:49 PM, TAY wee-beng wrote: > On 20/4/2014 8:39 AM, TAY wee-beng wrote: >> On 20/4/2014 1:02 AM, Matthew Knepley wrote: >>> On Sat, Apr 19, 2014 at 10:49 AM, TAY wee-beng wrote: >>> On 19/4/2014 11:39 PM, Matthew Knepley wrote: >>>> On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng wrote: >>>> On 19/4/2014 10:55 PM, Matthew Knepley wrote: >>>>> On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng wrote: >>>>> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >>>>>> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng wrote: >>>>>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>>>>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: >>>>>> >>>>>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>>>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>>>>> >>>>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>>> Hmm, >>>>>> >>>>>> Interface DMDAVecGetArrayF90 >>>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>>> USE_DM_HIDE >>>>>> DM_HIDE da1 >>>>>> VEC_HIDE v >>>>>> PetscScalar,pointer :: d1(:,:,:) >>>>>> PetscErrorCode ierr >>>>>> End Subroutine >>>>>> >>>>>> So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? >>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>> Hi, >>>>>> >>>>>> So d1 is a pointer, and it's different if I declare it as "plain old Fortran array"? Because I declare it as a Fortran array and it works w/o any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u". >>>>>> >>>>>> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", "v" and "w", error starts to happen. I wonder why... >>>>>> >>>>>> Also, supposed I call: >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> u_array .... >>>>>> >>>>>> v_array .... etc >>>>>> >>>>>> Now to restore the array, does it matter the sequence they are restored? >>>>>> No it should not matter. If it matters that is a sign that memory has been written to incorrectly earlier in the code. >>>>>> >>>>>> Hi, >>>>>> >>>>>> Hmm, I have been getting different results on different intel compilers. I'm not sure if MPI played a part but I'm only using a single processor. In the debug mode, things run without problem. In optimized mode, in some cases, the code aborts even doing simple initialization: >>>>>> >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>>>>> >>>>>> u_array = 0.d0 >>>>>> >>>>>> v_array = 0.d0 >>>>>> >>>>>> w_array = 0.d0 >>>>>> >>>>>> p_array = 0.d0 >>>>>> >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>>>>> >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> The code aborts at call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving segmentation error. But other version of intel compiler passes thru this part w/o error. Since the response is different among different compilers, is this PETSc or intel 's bug? Or mvapich or openmpi? >>>>>> >>>>>> We do this is a bunch of examples. Can you reproduce this different behavior in src/dm/examples/tutorials/ex11f90.F? 
>>>>> >>>>> Hi Matt, >>>>> >>>>> Do you mean putting the above lines into ex11f90.F and test? >>>>> >>>>> It already has DMDAVecGetArray(). Just run it. >>>> >>>> Hi, >>>> >>>> It worked. The differences between mine and the code is the way the fortran modules are defined, and the ex11f90 only uses global vectors. Does it make a difference whether global or local vectors are used? Because the way it accesses x1 only touches the local region. >>>> >>>> No the global/local difference should not matter. >>>> >>>> Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must be used 1st, is that so? I can't find the equivalent for local vector though. >>>> >>>> DMGetLocalVector() >>> >>> Ops, I do not have DMGetLocalVector and DMRestoreLocalVector in my code. Does it matter? >>> >>> If so, when should I call them? >>> >>> You just need a local vector from somewhere. > > Hi, > > Anyone can help with the questions below? Still trying to find why my code doesn't work. > > Thanks. >> Hi, >> >> I insert part of my error region code into ex11f90: >> >> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >> >> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >> >> u_array = 0.d0 >> >> v_array = 0.d0 >> >> w_array = 0.d0 >> >> p_array = 0.d0 >> >> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >> >> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >> >> It worked w/o error. I'm going to change the way the modules are defined in my code. >> >> My code contains a main program and a number of modules files, with subroutines inside e.g. >> >> module solve >> <- add include file? >> subroutine RRK >> <- add include file? >> end subroutine RRK >> >> end module solve >> >> So where should the include files (#include ) be placed? >> >> After the module or inside the subroutine? >> >> Thanks. >>> >>> Matt >>> >>> Thanks. >>>> >>>> Matt >>>> >>>> Thanks. >>>>> >>>>> Matt >>>>> >>>>> Thanks >>>>> >>>>> Regards. >>>>>> >>>>>> Matt >>>>>> >>>>>> As in w, then v and u? >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> thanks >>>>>> Note also that the beginning and end indices of the u,v,w, are different for each process see for example http://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>>>>> Hi, >>>>>> >>>>>> In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a ?plain old Fortran array?. >>>>>> >>>>>> If I declare them as pointers, their indices follow the C 0 start convention, is that so? >>>>>> Not really. It is that in each process you need to access them from the indices indicated by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t make any difference. >>>>>> >>>>>> >>>>>> So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1 start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? 
>>>>>> If you code wishes to access them with indices plus one from the values returned by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors then you need to manually subtract off the 1. >>>>>> >>>>>> Barry >>>>>> >>>>>> Thanks. >>>>>> Barry >>>>>> >>>>>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" >>>>>> >>>>>> However, by re-writing my code, I found out a few things: >>>>>> >>>>>> 1. if I write my code this way: >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> u_array = .... >>>>>> >>>>>> v_array = .... >>>>>> >>>>>> w_array = .... >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> The code runs fine. >>>>>> >>>>>> 2. if I write my code this way: >>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>>>> >>>>>> where the subroutine is: >>>>>> >>>>>> subroutine uvw_array_change(u,v,w) >>>>>> >>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>> >>>>>> u ... >>>>>> v... >>>>>> w ... >>>>>> >>>>>> end subroutine uvw_array_change. >>>>>> >>>>>> The above will give an error at : >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> 3. Same as above, except I change the order of the last 3 lines to: >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>> >>>>>> So they are now in reversed order. Now it works. >>>>>> >>>>>> 4. Same as 2 or 3, except the subroutine is changed to : >>>>>> >>>>>> subroutine uvw_array_change(u,v,w) >>>>>> >>>>>> real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>> >>>>>> real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>> >>>>>> real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>> >>>>>> u ... >>>>>> v... >>>>>> w ... >>>>>> >>>>>> end subroutine uvw_array_change. >>>>>> >>>>>> The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". 
>>>>>> >>>>>> However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>>>> >>>>>> So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. >>>>>> >>>>>> Thank you. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>>> Try running under valgrind http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>>> >>>>>> >>>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >>>>>> >>>>>> Hi Barry, >>>>>> >>>>>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >>>>>> >>>>>> I have attached my code. >>>>>> >>>>>> Thank you >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>> Please send the code that creates da_w and the declarations of w_array >>>>>> >>>>>> Barry >>>>>> >>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>> >>>>>> wrote: >>>>>> >>>>>> >>>>>> Hi Barry, >>>>>> >>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>> >>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>> >>>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>>> >>>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>>> >>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>> -------------------------------------------------------------------------- >>>>>> An MPI process has executed an operation involving a call to the >>>>>> "fork()" system call to create a child process. Open MPI is currently >>>>>> operating in a condition that could result in memory corruption or >>>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>>> data corruption. The use of fork() (or system() or other calls that >>>>>> create child processes) is strongly discouraged. >>>>>> >>>>>> The process that invoked fork was: >>>>>> >>>>>> Local host: n12-76 (PID 20235) >>>>>> MPI_COMM_WORLD rank: 2 >>>>>> >>>>>> If you are *absolutely sure* that your application will successfully >>>>>> and correctly survive a call to fork(), you may disable this warning >>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>>> -------------------------------------------------------------------------- >>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>> >>>>>> .... 
>>>>>> >>>>>> 1 >>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>> [1]PETSC ERROR: or see >>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [1]PETSC ERROR: or try http://valgrind.org >>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>> [1]PETSC ERROR: to get more information on the crash. >>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>> [3]PETSC ERROR: or see >>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [3]PETSC ERROR: or try http://valgrind.org >>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>> >>>>>> ... >>>>>> Thank you. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>> >>>>>> Because IO doesn't always get flushed immediately, it may not be hanging at this point. It is better to use the option -start_in_debugger, then type cont in each debugger window, and then when you think it is "hanging" do a control C in each debugger window and type where to see where each process is. You can also look around in the debugger at variables to see why it is "hanging" at that point. >>>>>> >>>>>> Barry >>>>>> >>>>>> These routines don't have any parallel communication in them, so they are unlikely to hang. >>>>>> >>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>> >>>>>> >>>>>> >>>>>> wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90.
>>>>>> >>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>> -- >>>>>> Thank you. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> TAY wee-beng >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>> -- Norbert Wiener >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> > From fuentesdt at gmail.com Sun Apr 20 21:26:10 2014 From: fuentesdt at gmail.com (David Fuentes) Date: Sun, 20 Apr 2014 21:26:10 -0500 Subject: [petsc-users] Problems running ex12.c In-Reply-To: References: <8448FFCE4362914496BCEAF8BE810C13EF1D34@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1D5E@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1D7C@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1D8F@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1DA3@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1DBF@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1E22@DCPWPEXMBX02.mdanderson.edu> <8448FFCE4362914496BCEAF8BE810C13EF1E50@DCPWPEXMBX02.mdanderson.edu> Message-ID: Hi, the dev version of ex12 seems to be running with the opencl options: -petscfe_type opencl -mat_petscfe_type opencl How are the num_blocks and num_batches options to be used ? -petscfe_num_blocks 2 -petscfe_num_batches 2 https://bitbucket.org/petsc/petsc/src/64715f0f033346c10c77b73cf58216d111db8789/config/builder.py#cl-337 Thanks, David On Fri, Apr 4, 2014 at 10:37 AM, Matthew Knepley wrote: > On Mon, Mar 31, 2014 at 7:08 PM, Matthew Knepley wrote: > >> On Mon, Mar 31, 2014 at 6:37 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Thanks for your response. Now I am trying to modify this example to >>> include Dirichlet and Neumann conditions at the same time. 
>>> >>> I can see that inside of DMPlexCreateSquareBoundary there is an option >>> ("-dm_plex_separate_marker") to just mark the top boundary with 1. I >>> understand that only this side would have Dirichlet conditions that are >>> described by the function bcFuncs in user.fem (the exact function in this >>> example). However, when we run the Neumann condition, we fix all the >>> boundary as Neumann condition with the function DMPlexAddBoundary, is this >>> right? >>> >> >> Right about the shortcoming, but wrong about the source. >> DMPlexAddBoundary() takes an argument that is the marker value for the >> given label, so you can select boundaries. >> However, DMPlexComputeResidualFEM() currently hardcodes the boundary name >> ("boundary") >> and the marker value (1). I wrote this when we had no boundary >> representation in PETSc. Now that we have DMAddBoundary(), we can >> loop over the Neumann boundaries. I have put this on my todo list. If you >> are motivated, you can do it first and I will help. >> > > I pushed this change today, so now you can have multiple Neumann > boundaries with whatever labels > you want. I have not written a test for multiple boundaries, but the old > single boundary tests pass. > > Thanks, > > Matt > > >> Thanks, >> >> Matt >> >> >>> Could there be a way to just fix a certain boundary with the Neumann >>> condition in this example? Would it be easier with an external library as >>> Exodus II? >>> >>> >>> On Sun, Mar 30, 2014 at 7:51 PM, Matthew Knepley wrote: >>> >>>> On Sun, Mar 30, 2014 at 7:07 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Thanks for your response. Your help is really useful to me. >>>>> >>>>> The difference between the analytic and the field options are that for >>>>> the field options the function is projected onto the function space defined >>>>> for feAux right? What is the advantage of doing this? >>>>> >>>> >>>> If it is not purely a function of the coordinates, or you do not know >>>> that function, there is no option left. >>>> >>>> >>>>> Also, for this field case I see that the function always has to be a >>>>> vector. What if we wanted to implement a heterogeneous material in linear >>>>> elasticity? Would we implement the constitutive tensor as a vector? It >>>>> would not be very difficult I think, I just want to make sure it would be >>>>> this way. >>>>> >>>> >>>> Its not a vector, which indicates a particular behavior under >>>> coordinate transformations, but an array >>>> which can hold any data you want. >>>> >>>> Matt >>>> >>>> >>>>> Thanks in advance >>>>> Miguel >>>>> >>>>> >>>>> On Sun, Mar 30, 2014 at 2:01 PM, Matthew Knepley wrote: >>>>> >>>>>> On Sun, Mar 30, 2014 at 1:57 PM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Hello everybody >>>>>>> >>>>>>> I had a question about this example. In the petsc-dev next version, >>>>>>> why don't we create a PetscSection in the function SetupSection, but we do >>>>>>> it in the function SetupMaterialSection and in the function SetupSection of >>>>>>> the petsc-current version. >>>>>>> >>>>>> >>>>>> 1) I wanted to try and make things more automatic for the user >>>>>> >>>>>> 2) I needed a way to automatically layout data for coarser/finer >>>>>> grids in unstructured MG >>>>>> >>>>>> Thus, now when you set for PetscFE into the DM using DMSetField(), it >>>>>> will automatically create >>>>>> the section on the first call to DMGetDefaultSection(). 
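In code, the automatic section creation described here amounts to something like the sketch below (names taken from the SetupSection() listing quoted further down; this is illustrative, not a complete program):

  DM             dm;       /* assumed to already exist */
  PetscFE        fe;       /* assumed to already exist */
  PetscSection   section;
  PetscErrorCode ierr;

  ierr = DMSetNumFields(dm, 1);CHKERRQ(ierr);
  ierr = DMSetField(dm, 0, (PetscObject) fe);CHKERRQ(ierr);   /* attach the PetscFE as field 0 */
  ierr = DMGetDefaultSection(dm, &section);CHKERRQ(ierr);     /* the section is built automatically on this first call */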
>>>>>> >>>>>> I do not have a similar provision now for materials, so you create >>>>>> your own section. I think this is >>>>>> alright until we have some idea of a nicer interface. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> petsc-dev: >>>>>>> >>>>>>> #undef __FUNCT__ >>>>>>> #define __FUNCT__ "SetupSection" >>>>>>> PetscErrorCode SetupSection(DM dm, AppCtx *user) >>>>>>> { >>>>>>> DM cdm = dm; >>>>>>> const PetscInt id = 1; >>>>>>> PetscErrorCode ierr; >>>>>>> >>>>>>> PetscFunctionBeginUser; >>>>>>> ierr = PetscObjectSetName((PetscObject) user->fe[0], >>>>>>> "potential");CHKERRQ(ierr); >>>>>>> while (cdm) { >>>>>>> ierr = DMSetNumFields(cdm, 1);CHKERRQ(ierr); >>>>>>> ierr = DMSetField(cdm, 0, (PetscObject) >>>>>>> user->fe[0]);CHKERRQ(ierr); >>>>>>> ierr = DMPlexAddBoundary(cdm, user->bcType == DIRICHLET, >>>>>>> user->bcType == NEUMANN ? "boundary" : "marker", 0, user->exactFuncs[0], 1, >>>>>>> &id, user);CHKERRQ(ierr); >>>>>>> ierr = DMPlexGetCoarseDM(cdm, &cdm);CHKERRQ(ierr); >>>>>>> } >>>>>>> PetscFunctionReturn(0); >>>>>>> } >>>>>>> >>>>>>> >>>>>>> It seems that it adds the number of fields directly to the DM, and >>>>>>> takes the number of components that were specified in SetupElementCommon, >>>>>>> but what about the number of degrees of freedom? Why we added it for the >>>>>>> MaterialSection but not for the regular Section. >>>>>>> >>>>>>> Thanks in advance >>>>>>> Miguel >>>>>>> >>>>>>> >>>>>>> On Sat, Mar 15, 2014 at 4:16 PM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> Thanks a lot. >>>>>>>> >>>>>>>> >>>>>>>> On Sat, Mar 15, 2014 at 3:36 PM, Matthew Knepley >>>>>>> > wrote: >>>>>>>> >>>>>>>>> On Sat, Mar 15, 2014 at 3:31 PM, Miguel Angel Salazar de Troya < >>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hello everybody >>>>>>>>>> >>>>>>>>>> I keep trying to understand this example. I don't have any >>>>>>>>>> problems with this example when I run it like this: >>>>>>>>>> >>>>>>>>>> [salaza11 at maya PETSC]$ ./ex12 -bc_type dirichlet -interpolate >>>>>>>>>> -petscspace_order 1 -variable_coefficient nonlinear -dim 2 -run_type full >>>>>>>>>> -show_solution >>>>>>>>>> Number of SNES iterations = 5 >>>>>>>>>> L_2 Error: 0.107289 >>>>>>>>>> Solution >>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>> type: seq >>>>>>>>>> 0.484618 >>>>>>>>>> >>>>>>>>>> However, when I change the boundary conditions to Neumann, I get >>>>>>>>>> this error. >>>>>>>>>> >>>>>>>>>> [salaza11 at maya PETSC]$ ./ex12 -bc_type neumann -interpolate 1 >>>>>>>>>> -petscspace_order 2 -variable_coefficient nonlinear -dim 2 -run_type full >>>>>>>>>> -show_solution >>>>>>>>>> >>>>>>>>> >>>>>>>>> Here you set the order of the element used in bulk, but not on the >>>>>>>>> boundary where you condition is, so it defaults to 0. In >>>>>>>>> order to become more familiar, take a look at the tests that I run >>>>>>>>> here: >>>>>>>>> >>>>>>>>> >>>>>>>>> https://bitbucket.org/petsc/petsc/src/64715f0f033346c10c77b73cf58216d111db8789/config/builder.py?at=master#cl-216 >>>>>>>>> >>>>>>>>> Matt >>>>>>>>> >>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>> -------------------------------------------------------------- >>>>>>>>>> [0]PETSC ERROR: Petsc has generated inconsistent data >>>>>>>>>> [0]PETSC ERROR: Number of dual basis vectors 0 not equal to >>>>>>>>>> dimension 1 >>>>>>>>>> [0]PETSC ERROR: See http:// >>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >>>>>>>>>> shooting. 
>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>> v3.4.3-4776-gb18359b GIT Date: 2014-03-04 10:53:30 -0600 >>>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named maya by >>>>>>>>>> salaza11 Sat Mar 15 14:28:05 2014 >>>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>> [0]PETSC ERROR: #1 PetscDualSpaceSetUp_Lagrange() line 1763 in >>>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>>> [0]PETSC ERROR: #2 PetscDualSpaceSetUp() line 1277 in >>>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>>> [0]PETSC ERROR: #3 SetupElementCommon() line 474 in >>>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>>> [0]PETSC ERROR: #4 SetupBdElement() line 559 in >>>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>>> [0]PETSC ERROR: #5 main() line 755 in >>>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>>>>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 77) - process 0 >>>>>>>>>> [unset]: aborting job: >>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 77) - process 0 >>>>>>>>>> >>>>>>>>>> I honestly do not know much about using dual spaces in a finite >>>>>>>>>> element context. I have been trying to find some material that could help >>>>>>>>>> me without much success. I tried to modify the dual space order with the >>>>>>>>>> option -petscdualspace_order but I kept getting errors. In particular, I >>>>>>>>>> got this when I set it to 1. >>>>>>>>>> >>>>>>>>>> [salaza11 at maya PETSC]$ ./ex12 -bc_type neumann -interpolate 1 >>>>>>>>>> -petscspace_order 2 -variable_coefficient nonlinear -dim 2 -run_type full >>>>>>>>>> -show_solution -petscdualspace_order 1 >>>>>>>>>> [0]PETSC ERROR: PetscTrFreeDefault() called from >>>>>>>>>> PetscFESetUp_Basic() line 2492 in >>>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>>> [0]PETSC ERROR: Block [id=0(32)] at address 0x1cc32f0 is >>>>>>>>>> corrupted (probably write past end of array) >>>>>>>>>> [0]PETSC ERROR: Block allocated in PetscFESetUp_Basic() line 2483 >>>>>>>>>> in /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>> -------------------------------------------------------------- >>>>>>>>>> [0]PETSC ERROR: Memory corruption: >>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind >>>>>>>>>> [0]PETSC ERROR: Corrupted memory >>>>>>>>>> [0]PETSC ERROR: See http:// >>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >>>>>>>>>> shooting. 
>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>> v3.4.3-4776-gb18359b GIT Date: 2014-03-04 10:53:30 -0600 >>>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named maya by >>>>>>>>>> salaza11 Sat Mar 15 14:37:34 2014 >>>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>> [0]PETSC ERROR: #1 PetscTrFreeDefault() line 289 in >>>>>>>>>> /home/salaza11/petsc/src/sys/memory/mtr.c >>>>>>>>>> [0]PETSC ERROR: #2 PetscFESetUp_Basic() line 2492 in >>>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>>> [0]PETSC ERROR: #3 PetscFESetUp() line 2126 in >>>>>>>>>> /home/salaza11/petsc/src/dm/dt/interface/dtfe.c >>>>>>>>>> [0]PETSC ERROR: #4 SetupElementCommon() line 482 in >>>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>>> [0]PETSC ERROR: #5 SetupElement() line 506 in >>>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>>> [0]PETSC ERROR: #6 main() line 754 in >>>>>>>>>> /home/salaza11/workspace/PETSC/ex12.c >>>>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>>>>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 >>>>>>>>>> [unset]: aborting job: >>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 >>>>>>>>>> [salaza11 at maya PETSC]$ >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Then again, I do not know much what I am doing given my ignorance >>>>>>>>>> with respect to the dual spaces in FE. I apologize for that. My questions >>>>>>>>>> are: >>>>>>>>>> >>>>>>>>>> - Where could I find more resources in order to understand the >>>>>>>>>> PETSc implementation of dual spaces for FE? >>>>>>>>>> - Why does it run with Dirichlet but not with Neumann? >>>>>>>>>> >>>>>>>>>> Thanks in advance. >>>>>>>>>> Miguel. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Mar 4, 2014 at 11:28 PM, Matthew Knepley < >>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> On Tue, Mar 4, 2014 at 12:01 PM, Matthew Knepley < >>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> On Tue, Mar 4, 2014 at 11:51 AM, Miguel Angel Salazar de Troya >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> I can run it now, thanks. Although if I run it with valgrind >>>>>>>>>>>>> 3.5.0 (should I update to the last version?) I get some memory leaks >>>>>>>>>>>>> related with the function DMPlexCreateBoxMesh. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I will check it out. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> This is now fixed. >>>>>>>>>>> >>>>>>>>>>> Thanks for finding it >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> >>>>>>>>>>>> Matt >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> [salaza11 at maya tutorials]$ valgrind --leak-check=full ./ex12 >>>>>>>>>>>>> -run_type test -refinement_limit 0.0 -bc_type dirichlet -interpolate 0 >>>>>>>>>>>>> -petscspace_order 1 -show_initial -dm_plex_print_fem 1 >>>>>>>>>>>>> ==9625== Memcheck, a memory error detector >>>>>>>>>>>>> ==9625== Copyright (C) 2002-2009, and GNU GPL'd, by Julian >>>>>>>>>>>>> Seward et al. 
>>>>>>>>>>>>> ==9625== Using Valgrind-3.5.0 and LibVEX; rerun with -h for >>>>>>>>>>>>> copyright info >>>>>>>>>>>>> ==9625== Command: ./ex12 -run_type test -refinement_limit 0.0 >>>>>>>>>>>>> -bc_type dirichlet -interpolate 0 -petscspace_order 1 -show_initial >>>>>>>>>>>>> -dm_plex_print_fem 1 >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> Local function: >>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>> type: seq >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0.25 >>>>>>>>>>>>> 1 >>>>>>>>>>>>> 0.25 >>>>>>>>>>>>> 0.5 >>>>>>>>>>>>> 1.25 >>>>>>>>>>>>> 1 >>>>>>>>>>>>> 1.25 >>>>>>>>>>>>> 2 >>>>>>>>>>>>> Initial guess >>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>> type: seq >>>>>>>>>>>>> 0.5 >>>>>>>>>>>>> L_2 Error: 0.111111 >>>>>>>>>>>>> Residual: >>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>> type: seq >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> Initial Residual >>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>> type: seq >>>>>>>>>>>>> 0 >>>>>>>>>>>>> L_2 Residual: 0 >>>>>>>>>>>>> Jacobian: >>>>>>>>>>>>> Mat Object: 1 MPI processes >>>>>>>>>>>>> type: seqaij >>>>>>>>>>>>> row 0: (0, 4) >>>>>>>>>>>>> Residual: >>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>> type: seq >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> -2 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> 0 >>>>>>>>>>>>> Au - b = Au + F(0) >>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>> type: seq >>>>>>>>>>>>> 0 >>>>>>>>>>>>> Linear L_2 Residual: 0 >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> ==9625== HEAP SUMMARY: >>>>>>>>>>>>> ==9625== in use at exit: 288 bytes in 3 blocks >>>>>>>>>>>>> ==9625== total heap usage: 2,484 allocs, 2,481 frees, >>>>>>>>>>>>> 1,009,287 bytes allocated >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> ==9625== 48 bytes in 1 blocks are definitely lost in loss >>>>>>>>>>>>> record 1 of 3 >>>>>>>>>>>>> ==9625== at 0x4A05E46: malloc (vg_replace_malloc.c:195) >>>>>>>>>>>>> ==9625== by 0x5D8D4E1: writepoly (triangle.c:12012) >>>>>>>>>>>>> ==9625== by 0x5D8FAAC: triangulate (triangle.c:13167) >>>>>>>>>>>>> ==9625== by 0x56B0884: DMPlexGenerate_Triangle (plex.c:3749) >>>>>>>>>>>>> ==9625== by 0x56B5EE4: DMPlexGenerate (plex.c:4503) >>>>>>>>>>>>> ==9625== by 0x567F414: DMPlexCreateBoxMesh >>>>>>>>>>>>> (plexcreate.c:668) >>>>>>>>>>>>> ==9625== by 0x4051FA: CreateMesh (ex12.c:341) >>>>>>>>>>>>> ==9625== by 0x408D3D: main (ex12.c:651) >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> ==9625== 96 bytes in 1 blocks are definitely lost in loss >>>>>>>>>>>>> record 2 of 3 >>>>>>>>>>>>> ==9625== at 0x4A05E46: malloc (vg_replace_malloc.c:195) >>>>>>>>>>>>> ==9625== by 0x5D8D485: writepoly (triangle.c:12004) >>>>>>>>>>>>> ==9625== by 0x5D8FAAC: triangulate (triangle.c:13167) >>>>>>>>>>>>> ==9625== by 0x56B0884: DMPlexGenerate_Triangle (plex.c:3749) >>>>>>>>>>>>> ==9625== by 0x56B5EE4: DMPlexGenerate (plex.c:4503) >>>>>>>>>>>>> ==9625== by 0x567F414: DMPlexCreateBoxMesh >>>>>>>>>>>>> (plexcreate.c:668) >>>>>>>>>>>>> ==9625== by 0x4051FA: CreateMesh (ex12.c:341) >>>>>>>>>>>>> ==9625== by 0x408D3D: main (ex12.c:651) >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> ==9625== 144 bytes in 1 blocks are definitely lost in loss >>>>>>>>>>>>> record 3 of 3 >>>>>>>>>>>>> ==9625== at 0x4A05E46: malloc (vg_replace_malloc.c:195) >>>>>>>>>>>>> ==9625== by 0x5D8CD20: writenodes (triangle.c:11718) >>>>>>>>>>>>> ==9625== by 0x5D8F9DE: triangulate (triangle.c:13132) >>>>>>>>>>>>> ==9625== by 0x56B0884: 
DMPlexGenerate_Triangle (plex.c:3749) >>>>>>>>>>>>> ==9625== by 0x56B5EE4: DMPlexGenerate (plex.c:4503) >>>>>>>>>>>>> ==9625== by 0x567F414: DMPlexCreateBoxMesh >>>>>>>>>>>>> (plexcreate.c:668) >>>>>>>>>>>>> ==9625== by 0x4051FA: CreateMesh (ex12.c:341) >>>>>>>>>>>>> ==9625== by 0x408D3D: main (ex12.c:651) >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> ==9625== LEAK SUMMARY: >>>>>>>>>>>>> ==9625== definitely lost: 288 bytes in 3 blocks >>>>>>>>>>>>> ==9625== indirectly lost: 0 bytes in 0 blocks >>>>>>>>>>>>> ==9625== possibly lost: 0 bytes in 0 blocks >>>>>>>>>>>>> ==9625== still reachable: 0 bytes in 0 blocks >>>>>>>>>>>>> ==9625== suppressed: 0 bytes in 0 blocks >>>>>>>>>>>>> ==9625== >>>>>>>>>>>>> ==9625== For counts of detected and suppressed errors, rerun >>>>>>>>>>>>> with: -v >>>>>>>>>>>>> ==9625== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: >>>>>>>>>>>>> 6 from 6) >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Mon, Mar 3, 2014 at 7:05 PM, Matthew Knepley < >>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 4:59 PM, Miguel Angel Salazar de Troya >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> You are welcome, thanks for your help. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Okay, I have rebuilt completely clean, and ex12 runs for me. >>>>>>>>>>>>>> Can you try again after pulling? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Matt >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 4:13 PM, Matthew Knepley < >>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 1:44 PM, Miguel Angel Salazar de >>>>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks. This is what I get. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Okay, this was broken by a new push to master/next in the >>>>>>>>>>>>>>>> last few days. I have pushed a fix, >>>>>>>>>>>>>>>> however next is currently broken due to a failure to check >>>>>>>>>>>>>>>> in a file. This should be fixed shortly, >>>>>>>>>>>>>>>> and then ex12 will work. I will mail you when its ready. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks for finding this, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> (gdb) cont >>>>>>>>>>>>>>>>> Continuing. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Program received signal SIGSEGV, Segmentation fault. 
>>>>>>>>>>>>>>>>> 0x00007fd6811bea7b in DMPlexComputeJacobianFEM >>>>>>>>>>>>>>>>> (dm=0x159a180, X=0x168b5b0, >>>>>>>>>>>>>>>>> Jac=0x7fffae6e8a88, JacP=0x7fffae6e8a88, >>>>>>>>>>>>>>>>> str=0x7fffae6e7970, >>>>>>>>>>>>>>>>> user=0x7fd6811be509) >>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/dm/impls/plex/plexfem.c:882 >>>>>>>>>>>>>>>>> 882 ierr = PetscFEGetDimension(fe[f], >>>>>>>>>>>>>>>>> &Nb);CHKERRQ(ierr); >>>>>>>>>>>>>>>>> (gdb) where >>>>>>>>>>>>>>>>> #0 0x00007fd6811bea7b in DMPlexComputeJacobianFEM >>>>>>>>>>>>>>>>> (dm=0x159a180, X=0x168b5b0, >>>>>>>>>>>>>>>>> Jac=0x7fffae6e8a88, JacP=0x7fffae6e8a88, >>>>>>>>>>>>>>>>> str=0x7fffae6e7970, >>>>>>>>>>>>>>>>> user=0x7fd6811be509) >>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/dm/impls/plex/plexfem.c:882 >>>>>>>>>>>>>>>>> #1 0x00007fd6814a5bf6 in SNESComputeJacobian_DMLocal >>>>>>>>>>>>>>>>> (snes=0x14e9450, >>>>>>>>>>>>>>>>> X=0x1622ad0, A=0x7fffae6e8a88, B=0x7fffae6e8a88, >>>>>>>>>>>>>>>>> ctx=0x1652300) >>>>>>>>>>>>>>>>> at >>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/utils/dmlocalsnes.c:102 >>>>>>>>>>>>>>>>> #2 0x00007fd6814cc609 in SNESComputeJacobian >>>>>>>>>>>>>>>>> (snes=0x14e9450, X=0x1622ad0, >>>>>>>>>>>>>>>>> A=0x7fffae6e8a88, B=0x7fffae6e8a88) >>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/snes/interface/snes.c:2245 >>>>>>>>>>>>>>>>> #3 0x000000000040af72 in main (argc=15, >>>>>>>>>>>>>>>>> argv=0x7fffae6e8bc8) >>>>>>>>>>>>>>>>> at >>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/examples/tutorials/ex12.c:784 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 1:40 PM, Matthew Knepley < >>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 1:39 PM, Miguel Angel Salazar de >>>>>>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> This is what I get at gdb when I type 'where'. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> You have to type 'cont', and then when it fails you type >>>>>>>>>>>>>>>>>> 'where'. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> #0 0x000000310e0aa860 in __nanosleep_nocancel () from >>>>>>>>>>>>>>>>>>> /lib64/libc.so.6 >>>>>>>>>>>>>>>>>>> #1 0x000000310e0aa70f in sleep () from /lib64/libc.so.6 >>>>>>>>>>>>>>>>>>> #2 0x00007fd83a00a8be in PetscSleep (s=10) >>>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/utils/psleep.c:52 >>>>>>>>>>>>>>>>>>> #3 0x00007fd83a06f331 in PetscAttachDebugger () >>>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/error/adebug.c:397 >>>>>>>>>>>>>>>>>>> #4 0x00007fd83a0af1d2 in >>>>>>>>>>>>>>>>>>> PetscOptionsCheckInitial_Private () >>>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/objects/init.c:444 >>>>>>>>>>>>>>>>>>> #5 0x00007fd83a0b6448 in PetscInitialize >>>>>>>>>>>>>>>>>>> (argc=0x7fff5cd8df2c, >>>>>>>>>>>>>>>>>>> args=0x7fff5cd8df20, file=0x0, >>>>>>>>>>>>>>>>>>> help=0x60ce40 "Poisson Problem in 2d and 3d with >>>>>>>>>>>>>>>>>>> simplicial finite elements.\nWe solve the Poisson problem in a >>>>>>>>>>>>>>>>>>> rectangular\ndomain, using a parallel unstructured mesh (DMPLEX) to >>>>>>>>>>>>>>>>>>> discretize it.\n\n\n") >>>>>>>>>>>>>>>>>>> at /home/salaza11/petsc/src/sys/objects/pinit.c:876 >>>>>>>>>>>>>>>>>>> #6 0x0000000000408f2c in main (argc=15, >>>>>>>>>>>>>>>>>>> argv=0x7fff5cd8f1f8) >>>>>>>>>>>>>>>>>>> at >>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/examples/tutorials/ex12.c:663 >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> The rest of the gdb output is attached. 
I am a bit >>>>>>>>>>>>>>>>>>> ignorant with gdb, I apologize for that. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 12:48 PM, Matthew Knepley < >>>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> On Mon, Mar 3, 2014 at 12:39 PM, Miguel Angel Salazar >>>>>>>>>>>>>>>>>>>> de Troya wrote: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Thanks for your response. Sorry I did not have the >>>>>>>>>>>>>>>>>>>>> "next" version, but the "master" version. I still have an error though. I >>>>>>>>>>>>>>>>>>>>> followed the steps given here ( >>>>>>>>>>>>>>>>>>>>> https://bitbucket.org/petsc/petsc/wiki/Home) to >>>>>>>>>>>>>>>>>>>>> obtain the next version, I configured petsc as above and ran ex12 as above >>>>>>>>>>>>>>>>>>>>> as well, getting this error: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> [salaza11 at maya tutorials]$ ./ex12 -run_type test >>>>>>>>>>>>>>>>>>>>> -refinement_limit 0.0 -bc_type dirichlet -interpolate 0 >>>>>>>>>>>>>>>>>>>>> -petscspace_order 1 -show_initial -dm_plex_print_fem 1 >>>>>>>>>>>>>>>>>>>>> Local function: >>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0.25 >>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>> 0.25 >>>>>>>>>>>>>>>>>>>>> 0.5 >>>>>>>>>>>>>>>>>>>>> 1.25 >>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>> 1.25 >>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>> Initial guess >>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>> 0.5 >>>>>>>>>>>>>>>>>>>>> L_2 Error: 0.111111 >>>>>>>>>>>>>>>>>>>>> Residual: >>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> Initial Residual >>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>> L_2 Residual: 0 >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Okay, now run with -start_in_debugger, and give me a >>>>>>>>>>>>>>>>>>>> stack trace using 'where'. 
>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: >>>>>>>>>>>>>>>>>>>>> Segmentation Violation, probably memory access out of range >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger or >>>>>>>>>>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: or see >>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>>>>>>>>>>>>>>>>>> http://valgrind.org on GNU/linux and Apple Mac OS X >>>>>>>>>>>>>>>>>>>>> to find memory corruption errors >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: likely location of problem given in >>>>>>>>>>>>>>>>>>>>> stack below >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Stack Frames >>>>>>>>>>>>>>>>>>>>> ------------------------------------ >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Note: The EXACT line numbers in the >>>>>>>>>>>>>>>>>>>>> stack are not available, >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: INSTEAD the line number of the >>>>>>>>>>>>>>>>>>>>> start of the function >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: is given. >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] DMPlexComputeJacobianFEM line 871 >>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/dm/impls/plex/plexfem.c >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian_DMLocal line >>>>>>>>>>>>>>>>>>>>> 94 /home/salaza11/petsc/src/snes/utils/dmlocalsnes.c >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNES user Jacobian function line >>>>>>>>>>>>>>>>>>>>> 2244 /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian line 2203 >>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>>>>>>>>>>>>> -------------------------------------------------------------- >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Signal received >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See http:// >>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.htmlfor trouble shooting. 
>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>>>>>>>>>>>>> v3.4.3-4705-gfb6b3bc GIT Date: 2014-03-03 08:23:43 -0600 >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named >>>>>>>>>>>>>>>>>>>>> maya by salaza11 Mon Mar 3 11:49:15 2014 >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>>>>>>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>>>>>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: #1 User provided function() line 0 in >>>>>>>>>>>>>>>>>>>>> unknown file >>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>>> [unset]: aborting job: >>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Sun, Mar 2, 2014 at 7:11 PM, Matthew Knepley < >>>>>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> On Sun, Mar 2, 2014 at 6:54 PM, Miguel Angel Salazar >>>>>>>>>>>>>>>>>>>>>> de Troya wrote: >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Hi everybody >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> I am trying to run example ex12.c without much >>>>>>>>>>>>>>>>>>>>>>> success. I specifically run it with the command options: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> We need to start narrowing down differences, because >>>>>>>>>>>>>>>>>>>>>> it runs for me and our nightly tests. So, first can >>>>>>>>>>>>>>>>>>>>>> you confirm that you are using the latest 'next' >>>>>>>>>>>>>>>>>>>>>> branch? >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -refinement_limit 0.0 >>>>>>>>>>>>>>>>>>>>>>> -bc_type dirichlet -interpolate 0 -petscspace_order 1 -show_initial >>>>>>>>>>>>>>>>>>>>>>> -dm_plex_print_fem 1 >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> And I get this output >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Local function: >>>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>>>> 1 >>>>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>>>> 2 >>>>>>>>>>>>>>>>>>>>>>> 3 >>>>>>>>>>>>>>>>>>>>>>> Initial guess >>>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>>> L_2 Error: 0.625 >>>>>>>>>>>>>>>>>>>>>>> Residual: >>>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> 0 >>>>>>>>>>>>>>>>>>>>>>> Initial Residual >>>>>>>>>>>>>>>>>>>>>>> Vec Object: 1 MPI processes >>>>>>>>>>>>>>>>>>>>>>> type: seq >>>>>>>>>>>>>>>>>>>>>>> L_2 Residual: 0 >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: >>>>>>>>>>>>>>>>>>>>>>> Segmentation Violation, probably memory access out of range >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger or 
>>>>>>>>>>>>>>>>>>>>>>> -on_error_attach_debugger >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: or see >>>>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>>>>>>>>>>>>>>>>>>>> http://valgrind.org on GNU/linux and Apple Mac OS X >>>>>>>>>>>>>>>>>>>>>>> to find memory corruption errors >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: likely location of problem given in >>>>>>>>>>>>>>>>>>>>>>> stack below >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Stack Frames >>>>>>>>>>>>>>>>>>>>>>> ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Note: The EXACT line numbers in the >>>>>>>>>>>>>>>>>>>>>>> stack are not available, >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: INSTEAD the line number of the >>>>>>>>>>>>>>>>>>>>>>> start of the function >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: is given. >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] DMPlexComputeJacobianFEM line >>>>>>>>>>>>>>>>>>>>>>> 867 /home/salaza11/petsc/src/dm/impls/plex/plexfem.c >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian_DMLocal line >>>>>>>>>>>>>>>>>>>>>>> 94 /home/salaza11/petsc/src/snes/utils/dmlocalsnes.c >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNES user Jacobian function line >>>>>>>>>>>>>>>>>>>>>>> 2244 /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeJacobian line 2203 >>>>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>>>>>>>>>>>>>>> ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Signal received! >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>>>>>>>>>>>>>>> v3.4.3-3453-g0a94005 GIT Date: 2014-03-02 13:12:04 -0600 >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html for >>>>>>>>>>>>>>>>>>>>>>> recent updates. >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints about >>>>>>>>>>>>>>>>>>>>>>> trouble shooting. >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a linux-gnu-c-debug named >>>>>>>>>>>>>>>>>>>>>>> maya by salaza11 Sun Mar 2 17:00:09 2014 >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from >>>>>>>>>>>>>>>>>>>>>>> /home/salaza11/petsc/linux-gnu-c-debug/lib >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Sun Mar 2 16:46:51 >>>>>>>>>>>>>>>>>>>>>>> 2014 >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options --download-mpich >>>>>>>>>>>>>>>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>>>>>>>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: User provided function() line 0 in >>>>>>>>>>>>>>>>>>>>>>> unknown file >>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>>>>> [unset]: aborting job: >>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - >>>>>>>>>>>>>>>>>>>>>>> process 0 >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Probably my problems could be on my configuration. I >>>>>>>>>>>>>>>>>>>>>>> attach the configure.log. I ran ./configure like this >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> ./configure --download-mpich >>>>>>>>>>>>>>>>>>>>>>> --download-scientificpython --download-triangle --download-ctetgen >>>>>>>>>>>>>>>>>>>>>>> --download-chaco --with-c2html=0 >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Thanks a lot in advance. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> On Tue, Jan 28, 2014 at 10:37 AM, Matthew Knepley < >>>>>>>>>>>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> On Tue, Jan 28, 2014 at 10:31 AM, Yaakoub El Khamra >>>>>>>>>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> If >>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -dim 3 -refinement_limit >>>>>>>>>>>>>>>>>>>>>>>>> 0.0125 -variable_coefficient field -interpolate 1 -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>> -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> is for serial, any chance we can get the options >>>>>>>>>>>>>>>>>>>>>>>>> to run in parallel? 
>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Just use mpiexec -n >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Regards >>>>>>>>>>>>>>>>>>>>>>>>> Yaakoub El Khamra >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Jan 17, 2014 at 11:29 AM, Matthew Knepley >>>>>>>>>>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Jan 17, 2014 at 11:06 AM, Jones,Martin >>>>>>>>>>>>>>>>>>>>>>>>>> Alexander wrote: >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Friday, January 17, 2014 11:04 AM >>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Jan 17, 2014 at 11:00 AM, >>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> These examples all seem to run excepting the >>>>>>>>>>>>>>>>>>>>>>>>>>>> following command, >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -run_type test -dim 3 -refinement_limit >>>>>>>>>>>>>>>>>>>>>>>>>>>> 0.0125 -variable_coefficient field -interpolate 1 -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>>>>> -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I get the following ouput: >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -dim 3 -refinement_limit >>>>>>>>>>>>>>>>>>>>>>>>>>>> 0.0125 -variable_coefficient field -interpolate 1 -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>>>>> -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>>>>> Local function: >>>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12: symbol lookup error: >>>>>>>>>>>>>>>>>>>>>>>>>>>> /opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so: undefined >>>>>>>>>>>>>>>>>>>>>>>>>>>> symbol: omp_get_num_procs >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> This is a build problem, but it should affect >>>>>>>>>>>>>>>>>>>>>>>>>>> all the runs. Is this reproducible? Can you send configure.log? MKL is the >>>>>>>>>>>>>>>>>>>>>>>>>>> worst. If this >>>>>>>>>>>>>>>>>>>>>>>>>>> persists, I would just switch to >>>>>>>>>>>>>>>>>>>>>>>>>>> --download-f-blas-lapack. >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Thanks. 
I have some advice on options >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> --with-precision=single # I would not use this >>>>>>>>>>>>>>>>>>>>>>>>>> unless you are doing something special, like CUDA >>>>>>>>>>>>>>>>>>>>>>>>>> --with-clanguage=C++ # I would recommend >>>>>>>>>>>>>>>>>>>>>>>>>> switching to C, the build is much faster >>>>>>>>>>>>>>>>>>>>>>>>>> --with-mpi-dir=/usr --with-mpi4py=0 >>>>>>>>>>>>>>>>>>>>>>>>>> --with-shared-libraries --CFLAGS=-O0 >>>>>>>>>>>>>>>>>>>>>>>>>> --CXXFLAGS=-O0 --with-fc=0 >>>>>>>>>>>>>>>>>>>>>>>>>> --with-etags=1 # This is >>>>>>>>>>>>>>>>>>>>>>>>>> unnecessary >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> --with-blas-lapack-lib="[/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_rt.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_core.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libiomp5.so]" >>>>>>>>>>>>>>>>>>>>>>>>>> # Here is the problem, see below >>>>>>>>>>>>>>>>>>>>>>>>>> --download-metis >>>>>>>>>>>>>>>>>>>>>>>>>> --download-fiat=yes --download-generator >>>>>>>>>>>>>>>>>>>>>>>>>> --download-scientificpython # Get rid of these, they are obsolete >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Your MKL needs another library for the OpenMP >>>>>>>>>>>>>>>>>>>>>>>>>> symbols. I would recommend switching to --download-f2cblaslapack, >>>>>>>>>>>>>>>>>>>>>>>>>> or you can try and find that library. >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 6:35 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 5:43 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>> > wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, This is the next error message after >>>>>>>>>>>>>>>>>>>>>>>>>>>>> configuring and building with the triangle package when trying to run ex12 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> This is my fault for bad defaults. I will >>>>>>>>>>>>>>>>>>>>>>>>>>>> fix. Try running >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 -run_type test -refinement_limit 0.0 >>>>>>>>>>>>>>>>>>>>>>>>>>>> -bc_type dirichlet -interpolate 0 -petscspace_order 1 -show_initial >>>>>>>>>>>>>>>>>>>>>>>>>>>> -dm_plex_print_fem 1 >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> for a representative run. 
Then you could try >>>>>>>>>>>>>>>>>>>>>>>>>>>> 3D >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -run_type test -dim 3 >>>>>>>>>>>>>>>>>>>>>>>>>>>> -refinement_limit 0.0125 -variable_coefficient field -interpolate 1 >>>>>>>>>>>>>>>>>>>>>>>>>>>> -petscspace_order 2 -show_initial -dm_plex_print_fem >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> or a full run >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -refinement_limit 0.01 -bc_type >>>>>>>>>>>>>>>>>>>>>>>>>>>> dirichlet -interpolate -petscspace_order 1 >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12 -refinement_limit 0.01 -bc_type >>>>>>>>>>>>>>>>>>>>>>>>>>>> dirichlet -interpolate -petscspace_order 2 >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Let me know if those work. >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ./ex12 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Caught signal number 8 FPE: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Floating Point Exception,probably divide by zero >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Try option -start_in_debugger >>>>>>>>>>>>>>>>>>>>>>>>>>>>> or -on_error_attach_debugger >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: or see >>>>>>>>>>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>>>>>>>>>>>>>>>>>>>>>>>>>> http://valgrind.org on GNU/linux and Apple >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mac OS X to find memory corruption errors >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: likely location of problem >>>>>>>>>>>>>>>>>>>>>>>>>>>>> given in stack below >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Stack >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Frames ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Note: The EXACT line numbers >>>>>>>>>>>>>>>>>>>>>>>>>>>>> in the stack are not available, >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: INSTEAD the line number >>>>>>>>>>>>>>>>>>>>>>>>>>>>> of the start of the function >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: is given. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] DMPlexComputeResidualFEM >>>>>>>>>>>>>>>>>>>>>>>>>>>>> line 531 /home/mjonesa/PETSc/petsc/src/dm/impls/plex/plexfem.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] >>>>>>>>>>>>>>>>>>>>>>>>>>>>> SNESComputeFunction_DMLocal line 63 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/utils/dmlocalsnes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNES user function line >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2088 /home/mjonesa/PETSc/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESComputeFunction line >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2076 /home/mjonesa/PETSc/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 144 /home/mjonesa/PETSc/petsc/src/snes/impls/ls/ls.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: [0] SNESSolve line 3765 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/interface/snes.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Message ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Signal received! >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT >>>>>>>>>>>>>>>>>>>>>>>>>>>>> revision: v3.4.3-2317-gcd0e7f7 GIT Date: 2014-01-15 20:33:42 -0600 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html >>>>>>>>>>>>>>>>>>>>>>>>>>>>> for recent updates. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints >>>>>>>>>>>>>>>>>>>>>>>>>>>>> about trouble shooting. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for manual >>>>>>>>>>>>>>>>>>>>>>>>>>>>> pages. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a >>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch-linux2-cxx-debug named maeda by mjonesa Thu Jan 16 17:41:23 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/local/lib >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Thu Jan 16 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 17:38:33 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --prefix=/home/mjonesa/local >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-blas-lapack-lib="[/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_rt.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_core.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libiomp5.so]" >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-c2html=0 --with-clanguage=c++ PETSC_ARCH=arch-linux2-cxx-debug >>>>>>>>>>>>>>>>>>>>>>>>>>>>> --download-triangle >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: User provided function() line >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 0 in unknown file >>>>>>>>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 59) - process 0 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 4:37 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems running >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 4:33 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I have downloaded and built the dev >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> version you suggested. I think I need the triangle package to run this >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> particular case. Is there any thing else that appears wrong in what I have >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> done from the error messages below: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Great! Its running. You can reconfigure like >>>>>>>>>>>>>>>>>>>>>>>>>>>>> this: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> $PETSC_DIR/$PETSC_ARCH/conf/reconfigure-$PETSC_ARCH.py --download-triangle >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then rebuild >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> make >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then rerun. You can load meshes, but its >>>>>>>>>>>>>>>>>>>>>>>>>>>>> much easier to have triangle create them. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks for being patient, >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Message ------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: No support for this operation >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for this object type! >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Mesh generation needs >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> external package support. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Please reconfigure with --download-triangle.! >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> revision: v3.4.3-2317-gcd0e7f7 GIT Date: 2014-01-15 20:33:42 -0600 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> for recent updates. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> about trouble shooting. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual pages. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex12 on a >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch-linux2-cxx-debug named maeda by mjonesa Thu Jan 16 16:28:20 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/local/lib >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Thu Jan 16 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 16:25:53 2014 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --prefix=/home/mjonesa/local --with-clanguage=c++ --with-c2html=0 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-blas-lapack-lib="[/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_rt.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_intel_thread.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libmkl_core.so,/opt/apps/EPD/epd-7.3-1-rh5-x86_64/lib/libiomp5.so]" >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: DMPlexGenerate() line 4332 in >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/dm/impls/plex/plex.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: DMPlexCreateBoxMesh() line >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 600 in /home/mjonesa/PETSc/petsc/src/dm/impls/plex/plexcreate.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: CreateMesh() line 295 in >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/examples/tutorials/ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: main() line 659 in >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc/src/snes/examples/tutorials/ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 56) - process 0 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 4:06 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* 
Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 4:05 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi. I changed the ENV variable to the >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> correct entry. when I type make ex12 I get this: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mjonesa at maeda:~/PETSc/petsc-3.4.3/src/snes/examples/tutorials$ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make ex12 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> g++ -o ex12.o -c -Wall -Wwrite-strings >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Wno-strict-aliasing -Wno-unknown-pragmas -g -fPIC >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -I/home/mjonesa/PETSc/petsc-3.4.3/include >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -I/home/mjonesa/PETSc/petsc-3.4.3/arch-linux2-cxx-debug/include >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -I/home/mjonesa/PETSc/petsc-3.4.3/include/mpiuni >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -D__INSDIR__=src/snes/examples/tutorials/ ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ex12.c:14:18: fatal error: ex12.h: No such >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file or directory >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> compilation terminated. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make: *** [ex12.o] Error 1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any help of yours is very much appreciated. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, this relates to my 3). This is not >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> going to work for you with the release. Please see the link I sent. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 3:58 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 3:55 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks! 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You built with >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PETSC_ARCH=arch-linux2-cxx-debug >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 3:31 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 3:11 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I went to the directory where ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sits and just did a 'make ex12.c' with the following error if this helps? : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mjonesa at maeda:~/PETSc/petsc-3.4.3/src/snes/examples/tutorials$ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/conf/variables:108: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/linux-gnu-cxx-debug/conf/petscvariables: No >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> such file or directory >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/conf/rules:962: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3/linux-gnu-cxx-debug/conf/petscrules: No >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> such file or directory >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make: *** No rule to make target >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> `/home/mjonesa/PETSc/petsc-3.4.3/linux-gnu-cxx-debug/conf/petscrules'. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stop. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1) You would type 'make ex12' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2) Either you PETSC_DIR ( >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /home/mjonesa/PETSc/petsc-3.4.3) or >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PETSC_ARCH (linux-gnu-cxx-debug) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> environment variables >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> do not match what you built. 
Please >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> send configure.log and make.log >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 3) Since it was only recently added, if >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you want to use the FEM functionality, you must use the development version: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> http://www.mcs.anl.gov/petsc/developers/index.html >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *From:* Matthew Knepley [mailto: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> knepley at gmail.com] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Sent:* Thursday, January 16, 2014 2:48 PM >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *To:* Jones,Martin Alexander >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> *Subject:* Re: [petsc-users] Problems >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running ex12.c >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 16, 2014 at 2:35 PM, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jones,Martin Alexander < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MAJones2 at mdanderson.org> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hello To Whom it Concerns, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am trying to run the tutorial ex12.c by >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> running 'bin/pythonscripts/PetscGenerateFEMQuadrature.py >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dim order dim 1 laplacian dim order dim 1 boundary >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> src/snes/examples/tutorials/ex12.h' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> but getting the following error: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> $ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dim order dim 1 boundary src/snes/examples/tutorials/ex12.h >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Traceback (most recent call last): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> File >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "bin/pythonscripts/PetscGenerateFEMQuadrature.py", line 15, in >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from FIAT.reference_element import >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> default_simplex >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ImportError: No module named >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> FIAT.reference_element >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have removed the requirement of >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> generating the header file (its now all handled in C). I thought >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I changed the documentation everywhere >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (including the latest tutorial slides). Can you try running >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> with 'master' (or 'next'), and point me >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> toward the old docs? 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted >>>>>>>>>>>>>>>>>>>>>>>>>>>>> before they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted before >>>>>>>>>>>>>>>>>>>>>>>>>>>> they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. >>>>>>>>>>>>>>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>> What most experimenters take for granted before >>>>>>>>>>>>>>>>>>>>>>>>>>> they begin their experiments is infinitely more interesting than any >>>>>>>>>>>>>>>>>>>>>>>>>>> results to which their experiments lead. 
>>>>>>>>> -- Norbert Wiener >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>> Graduate Research Assistant >>>>>>>> Department of Mechanical Science and Engineering >>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>> (217) 550-2360 >>>>>>>> salaza11 at illinois.edu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From parsani.matteo at gmail.com Mon Apr 21 07:40:52 2014 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Mon, 21 Apr 2014 08:40:52 -0400 Subject: [petsc-users] libpetsc.so: undefined reference to `_gfortran_transfer_character_write@GFORTRAN_1.4' In-Reply-To: References: Message-ID: Thanks Satish! I am trying to fix the issue following your suggestions. I will let you know. Thanks On Fri, Apr 18, 2014 at 12:21 PM, Satish Balay wrote: > detailed logs would have been useful. > > Mostlikely you have multiple version of libgfortran.so - and the wrong > version got linked in. > > Or you have blas [or something else] compiled with a different version > gfortran so its attempting to link with a different libgfortran.so] > > You can look at the link command, and look at all the .so files in the > linkcommand - and do 'ldd libblas.so' etc on all the .so files in the > link command to indentify the differences. > > And after you locate the multiple libgfortran.so files [perhaps with > 'locate libgfortran.so'] - you can do the following to indentify the > libgfortran library that you should be using. > > 'nm -Ao libgfortran.so |grep > _gfortran_transfer_character_write at GFORTRAN_1.4' > > Or use --with-shared-libraries=0 and see if this problem goes away.. > > Satish > > On Fri, 18 Apr 2014, Matteo Parsani wrote: > > > Dear PETSc Users and Developers, > > I have compiled successfully PETSc 3.4 with gfortran 4.7.2. 
> > However, when I compile my code, during the linking phase I get this > error: > > > > > /ump/fldmd/home/pmatteo/research/workspace/codes/ssdc/deps/petsc/lib/libpetsc.so: > > undefined reference to `_gfortran_transfer_character_write at GFORTRAN_1.4' > > > /ump/fldmd/home/pmatteo/research/workspace/codes/ssdc/deps/petsc/lib/libpetsc.so: > > undefined reference to `_gfortran_transfer_integer_write at GFORTRAN_1.4' > > > > > > Any idea? > > Is it related to the gcc compiler options? > > > > Thank you, > > > > > > -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 21 09:39:17 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 21 Apr 2014 09:39:17 -0500 Subject: [petsc-users] an ambiguity on a Lanczos solver for a symmetric system -- advice needed In-Reply-To: <5352CAC9.9010505@tudelft.nl> References: <5352CAC9.9010505@tudelft.nl> Message-ID: On Sat, Apr 19, 2014 at 2:13 PM, Umut Tabak wrote: > Dear all, > For any timing question, we need to see the output f -log_summary. Also, if you have significant time in routines you wrote, we need you to create PETSc events for these. Matt > I am experiencing lately some issues with a symmetric Lanczos eigensolver > in FORTRAN. Basically, I have test code in MATLAB where I am using > HSL_MA97(MATLAB interface) at the moment > > When I program Lanczos iterations in blocks in MATLAB by using HSL_MA97, > as expected my overall solution time decreases meaning that block solution > improves the solution efficiency. > > Then, to apply the same algorithm on problems on the orders of millions, I > am transferring the same algorithm to a FORTRAN code but this time with > MUMPS as the solver then I was expecting the solution time to decrease as > well, but my overall solution times are increasing when I increase the > block size. > > For a check with MUMPS, I only tried the block solution phase and compared > 120 single solutions to > > 60 solutions by blocks of 2 > 30 solutions by blocks of 4 > 20 solutions by blocks of 6 > 15 solutions by blocks of 8 > > and saw that the total solution time in comparison to single solves are > decreasing so I am thinking this is not the source of the problem, I > believe. > > What I am doing is that I am performing a full reorthogonalization in the > Lanczos loop, which includes some dgemm calls and moreover there are some > other calls for sparse symmetric matrix vector multiplications from Intel > MKL. > > I could not really understand why the overall solution time is increasing > with the increase of the block sizes in FORTRAN whereas I was expecting > even an improvement over my MATLAB code. > > Any ideas on what could be going wrong. > > Best regards and thanks in advance, > > Umut > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 21 09:48:44 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 21 Apr 2014 09:48:44 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: On Sun, Apr 20, 2014 at 4:17 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > I understand. 
So if we had a linear elastic material, the weak form would > be something like > > phi_{i,j} C_{ijkl} u_{k,l} > > where the term C_{ijkl} u_{k,l} would correspond to the term f_1(U, > gradient_U) in equation (1) in your paper I mentioned above, and g_3 would > be C_{ijkl}. Therefore the indices {i,j} would be equivalent to the indices > "ic, id" you mentioned before and "jc" and "jd" would be {k,l}? > > For g3[ic, id, jc, jd], transforming the four dimensional array to linear > memory would be like this: > > g3[((ic*Ncomp+id)*dim+jc)*dim+jd] = 1.0; > > where Ncomp and dim are equal to the problem's spatial dimension. > > However, in the code, there are only two loops, to exploit the symmetry of > the fourth order identity tensor: > > for (compI = 0; compI < Ncomp; ++compI) { > for (d = 0; d < dim; ++d) { > g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; > } > } > > Therefore the tensor entries that are set to one are: > > g3[0,0,0,0] > g3[1,1,0,0] > g3[0,0,1,1] > g3[1,1,1,1] > > This would be equivalent to the fourth order tensor \delta_{ij} > \delta_{kl}, but I think the one we need is \delta_{ik} \delta_{jl}, > because it is the derivative of a matrix with respect itself (or the > derivative of a gradient with respect to itself). This is assuming the > indices of g3 correspond to what I said. > I made an error explaining g3, which is indexed g3[ic, jc, id, jd] I thought this might be better since, it is composed of dim x dim blocks. I am not opposed to changing this if there is evidence that another thing is better. Matt > Thanks in advance. > > Miguel > > > On Apr 19, 2014 6:19 PM, "Matthew Knepley" wrote: > >> On Sat, Apr 19, 2014 at 5:25 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Thanks for your response. Now I understand a bit better your >>> implementation, but I am still confused. Is the g3 function equivalent to >>> the f_{1,1} function in the matrix in equation (3) in this paper: >>> http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have terms >>> that depend on the trial fields? I think this is what is confusing me. >>> >> >> Yes, it is. It has no terms that depend on the trial fields. It is just >> indexed by the number of components in that field. It is >> a continuum thing, which has nothing to do with the discretization. That >> is why it acts pointwise. >> >> Matt >> >> >>> On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: >>> >>>> On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Hello everybody. >>>>> >>>>> First, I am taking this example from the petsc-dev version, I am not >>>>> sure if I should have posted this in another mail-list, if so, my >>>>> apologies. >>>>> >>>>> In this example, for the elasticity case, function g3 is built as: >>>>> >>>>> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const >>>>> PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >>>>> PetscScalar g3[]) >>>>> { >>>>> const PetscInt dim = spatialDim; >>>>> const PetscInt Ncomp = spatialDim; >>>>> PetscInt compI, d; >>>>> >>>>> for (compI = 0; compI < Ncomp; ++compI) { >>>>> for (d = 0; d < dim; ++d) { >>>>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>>>> } >>>>> } >>>>> } >>>>> >>>>> Therefore, a fourth-order tensor is represented as a vector. I was >>>>> checking the indices for different iterator values, and they do not seem to >>>>> match the vectorization that I have in mind. 
For a two dimensional case, >>>>> the indices for which the value is set as 1 are: >>>>> >>>>> compI = 0 , d = 0 -----> index = 0 >>>>> compI = 0 , d = 1 -----> index = 3 >>>>> compI = 1 , d = 0 -----> index = 12 >>>>> compI = 1 , d = 1 -----> index = 15 >>>>> >>>>> The values for the first and last seem correct to me, but they other >>>>> two are confusing me. I see that this elasticity tensor (which is the >>>>> derivative of the gradient by itself in this case) would be a four by four >>>>> identity matrix in its matrix representation, so the indices in between >>>>> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >>>>> other. >>>>> >>>> >>>> I have read this a few times, but I cannot understand that you are >>>> asking. The simplest thing I can >>>> respond is that we are indexing a row-major array, using the indices: >>>> >>>> g3[ic, id, jc, jd] >>>> >>>> where ic indexes the components of the trial field, id indexes the >>>> derivative components, >>>> jc indexes the basis field components, and jd its derivative components. >>>> >>>> Matt >>>> >>>> >>>>> I guess my question is then, how did you vectorize the fourth order >>>>> tensor? >>>>> >>>>> Thanks in advance >>>>> Miguel >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Mon Apr 21 09:59:42 2014 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 21 Apr 2014 09:59:42 -0500 Subject: [petsc-users] an ambiguity on a Lanczos solver for a symmetric system -- advice needed In-Reply-To: <6a74bbefc1fa40119c08f3444f3c78c2@LUCKMAN.anl.gov> References: <5352CAC9.9010505@tudelft.nl> <6a74bbefc1fa40119c08f3444f3c78c2@LUCKMAN.anl.gov> Message-ID: Umut, Have you tried slepc for eigenvalue problems? Why do you need mumps in your eigensolver? Shift-and-invert? Hong On Mon, Apr 21, 2014 at 9:39 AM, Matthew Knepley wrote: > On Sat, Apr 19, 2014 at 2:13 PM, Umut Tabak wrote: >> >> Dear all, > > > For any timing question, we need to see the output f -log_summary. Also, if > you have significant > time in routines you wrote, we need you to create PETSc events for these. > > Matt > >> >> I am experiencing lately some issues with a symmetric Lanczos eigensolver >> in FORTRAN. 
Basically, I have test code in MATLAB where I am using >> HSL_MA97(MATLAB interface) at the moment >> >> When I program Lanczos iterations in blocks in MATLAB by using HSL_MA97, >> as expected my overall solution time decreases meaning that block solution >> improves the solution efficiency. >> >> Then, to apply the same algorithm on problems on the orders of millions, I >> am transferring the same algorithm to a FORTRAN code but this time with >> MUMPS as the solver then I was expecting the solution time to decrease as >> well, but my overall solution times are increasing when I increase the block >> size. >> >> For a check with MUMPS, I only tried the block solution phase and compared >> 120 single solutions to >> >> 60 solutions by blocks of 2 >> 30 solutions by blocks of 4 >> 20 solutions by blocks of 6 >> 15 solutions by blocks of 8 >> >> and saw that the total solution time in comparison to single solves are >> decreasing so I am thinking this is not the source of the problem, I >> believe. >> >> What I am doing is that I am performing a full reorthogonalization in the >> Lanczos loop, which includes some dgemm calls and moreover there are some >> other calls for sparse symmetric matrix vector multiplications from Intel >> MKL. >> >> I could not really understand why the overall solution time is increasing >> with the increase of the block sizes in FORTRAN whereas I was expecting even >> an improvement over my MATLAB code. >> >> Any ideas on what could be going wrong. >> >> Best regards and thanks in advance, >> >> Umut > > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener From hzhang at mcs.anl.gov Mon Apr 21 10:00:34 2014 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 21 Apr 2014 10:00:34 -0500 Subject: [petsc-users] an ambiguity on a Lanczos solver for a symmetric system -- advice needed In-Reply-To: References: <5352CAC9.9010505@tudelft.nl> <6a74bbefc1fa40119c08f3444f3c78c2@LUCKMAN.anl.gov> Message-ID: Sorry, ignore my previous email. You are not solving eigenvalue problem. Hong On Mon, Apr 21, 2014 at 9:59 AM, Hong Zhang wrote: > Umut, > Have you tried slepc for eigenvalue problems? > Why do you need mumps in your eigensolver? Shift-and-invert? > > Hong > > On Mon, Apr 21, 2014 at 9:39 AM, Matthew Knepley wrote: >> On Sat, Apr 19, 2014 at 2:13 PM, Umut Tabak wrote: >>> >>> Dear all, >> >> >> For any timing question, we need to see the output f -log_summary. Also, if >> you have significant >> time in routines you wrote, we need you to create PETSc events for these. >> >> Matt >> >>> >>> I am experiencing lately some issues with a symmetric Lanczos eigensolver >>> in FORTRAN. Basically, I have test code in MATLAB where I am using >>> HSL_MA97(MATLAB interface) at the moment >>> >>> When I program Lanczos iterations in blocks in MATLAB by using HSL_MA97, >>> as expected my overall solution time decreases meaning that block solution >>> improves the solution efficiency. >>> >>> Then, to apply the same algorithm on problems on the orders of millions, I >>> am transferring the same algorithm to a FORTRAN code but this time with >>> MUMPS as the solver then I was expecting the solution time to decrease as >>> well, but my overall solution times are increasing when I increase the block >>> size. 
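A short aside on the -log_summary / PetscLogEvent suggestion quoted above: the sketch below is a minimal, illustrative C program and not code from this thread. The event name, class name, and the Reorthogonalize() placeholder are invented for the sketch, and the corresponding PetscLogEventRegister/Begin/End calls should be available through the Fortran interface as well, so the same bracketing can be done in Umut's Fortran code.

#include <petscsys.h>

/* Placeholder for the user's blocked full reorthogonalization
   (dgemm calls plus sparse matrix-vector products). */
static PetscErrorCode Reorthogonalize(void) { return 0; }

int main(int argc, char **argv)
{
  PetscClassId   classid;
  PetscLogEvent  REORTH;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  ierr = PetscClassIdRegister("Lanczos", &classid);CHKERRQ(ierr);
  ierr = PetscLogEventRegister("BlockReorth", classid, &REORTH);CHKERRQ(ierr);

  /* Bracket the section whose cost is in question; it then shows up
     as a separate line ("BlockReorth") in the -log_summary table,
     with its own time, flop, and message counts per process. */
  ierr = PetscLogEventBegin(REORTH, 0, 0, 0, 0);CHKERRQ(ierr);
  ierr = Reorthogonalize();CHKERRQ(ierr);
  ierr = PetscLogEventEnd(REORTH, 0, 0, 0, 0);CHKERRQ(ierr);

  ierr = PetscFinalize();
  return 0;
}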
>>> >>> For a check with MUMPS, I only tried the block solution phase and compared >>> 120 single solutions to >>> >>> 60 solutions by blocks of 2 >>> 30 solutions by blocks of 4 >>> 20 solutions by blocks of 6 >>> 15 solutions by blocks of 8 >>> >>> and saw that the total solution time in comparison to single solves are >>> decreasing so I am thinking this is not the source of the problem, I >>> believe. >>> >>> What I am doing is that I am performing a full reorthogonalization in the >>> Lanczos loop, which includes some dgemm calls and moreover there are some >>> other calls for sparse symmetric matrix vector multiplications from Intel >>> MKL. >>> >>> I could not really understand why the overall solution time is increasing >>> with the increase of the block sizes in FORTRAN whereas I was expecting even >>> an improvement over my MATLAB code. >>> >>> Any ideas on what could be going wrong. >>> >>> Best regards and thanks in advance, >>> >>> Umut >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments >> is infinitely more interesting than any results to which their experiments >> lead. >> -- Norbert Wiener From salazardetroya at gmail.com Mon Apr 21 10:23:25 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 21 Apr 2014 10:23:25 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: Thank you. Now it makes sense. Just to confirm though, back to the weak form I wrote above in index notation: phi_{i,j} C_{ijkl} u_{k,l} The g3 indices are equivalent to the indices of this weak form in this manner: ic = i jc = k id = j jd = l Is this correct? This is what I understand when you talk about the components and the derivatives of the components. On Mon, Apr 21, 2014 at 9:48 AM, Matthew Knepley wrote: > On Sun, Apr 20, 2014 at 4:17 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> I understand. So if we had a linear elastic material, the weak form would >> be something like >> >> phi_{i,j} C_{ijkl} u_{k,l} >> >> where the term C_{ijkl} u_{k,l} would correspond to the term f_1(U, >> gradient_U) in equation (1) in your paper I mentioned above, and g_3 would >> be C_{ijkl}. Therefore the indices {i,j} would be equivalent to the indices >> "ic, id" you mentioned before and "jc" and "jd" would be {k,l}? >> >> For g3[ic, id, jc, jd], transforming the four dimensional array to >> linear memory would be like this: >> >> g3[((ic*Ncomp+id)*dim+jc)*dim+jd] = 1.0; >> >> where Ncomp and dim are equal to the problem's spatial dimension. >> >> However, in the code, there are only two loops, to exploit the symmetry >> of the fourth order identity tensor: >> >> for (compI = 0; compI < Ncomp; ++compI) { >> for (d = 0; d < dim; ++d) { >> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >> } >> } >> >> Therefore the tensor entries that are set to one are: >> >> g3[0,0,0,0] >> g3[1,1,0,0] >> g3[0,0,1,1] >> g3[1,1,1,1] >> >> This would be equivalent to the fourth order tensor \delta_{ij} >> \delta_{kl}, but I think the one we need is \delta_{ik} \delta_{jl}, >> because it is the derivative of a matrix with respect itself (or the >> derivative of a gradient with respect to itself). This is assuming the >> indices of g3 correspond to what I said. >> > > I made an error explaining g3, which is indexed > > g3[ic, jc, id, jd] > > I thought this might be better since, it is composed of dim x dim blocks. > I am not opposed to changing > this if there is evidence that another thing is better. 
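A point of reference on the corrected ordering g3[ic, jc, id, jd] discussed in this thread: the corresponding row-major flattening is g3[((ic*Ncomp + jc)*dim + id)*dim + jd], rather than the ((ic*Ncomp + id)*dim + jc)*dim + jd written for the earlier ic, id, jc, jd ordering. Under the correspondence ic = i, jc = k, id = j, jd = l proposed above (and confirmed later in the thread), the two-loop body quoted from the example,

for (compI = 0; compI < Ncomp; ++compI) {
  for (d = 0; d < dim; ++d) {
    g3[((compI*Ncomp + compI)*dim + d)*dim + d] = 1.0;  /* ic == jc == compI, id == jd == d */
  }
}

fills exactly the entries with i == k and j == l, i.e. C_{ijkl} = delta_{ik} delta_{jl}, which is the identity Miguel expected for the derivative of the gradient with respect to itself; the delta_{ij} delta_{kl} reading only arises under the old ic, id, jc, jd ordering. (Ncomp == dim here, which is why the very same line of code is consistent with either ordering; only the interpretation of which C_{ijkl} entry it sets changes.)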
> > Matt > > >> Thanks in advance. >> >> Miguel >> >> >> On Apr 19, 2014 6:19 PM, "Matthew Knepley" wrote: >> >>> On Sat, Apr 19, 2014 at 5:25 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Thanks for your response. Now I understand a bit better your >>>> implementation, but I am still confused. Is the g3 function equivalent to >>>> the f_{1,1} function in the matrix in equation (3) in this paper: >>>> http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have terms >>>> that depend on the trial fields? I think this is what is confusing me. >>>> >>> >>> Yes, it is. It has no terms that depend on the trial fields. It is just >>> indexed by the number of components in that field. It is >>> a continuum thing, which has nothing to do with the discretization. That >>> is why it acts pointwise. >>> >>> Matt >>> >>> >>>> On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: >>>> >>>>> On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Hello everybody. >>>>>> >>>>>> First, I am taking this example from the petsc-dev version, I am not >>>>>> sure if I should have posted this in another mail-list, if so, my >>>>>> apologies. >>>>>> >>>>>> In this example, for the elasticity case, function g3 is built as: >>>>>> >>>>>> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const >>>>>> PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >>>>>> PetscScalar g3[]) >>>>>> { >>>>>> const PetscInt dim = spatialDim; >>>>>> const PetscInt Ncomp = spatialDim; >>>>>> PetscInt compI, d; >>>>>> >>>>>> for (compI = 0; compI < Ncomp; ++compI) { >>>>>> for (d = 0; d < dim; ++d) { >>>>>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>>>>> } >>>>>> } >>>>>> } >>>>>> >>>>>> Therefore, a fourth-order tensor is represented as a vector. I was >>>>>> checking the indices for different iterator values, and they do not seem to >>>>>> match the vectorization that I have in mind. For a two dimensional case, >>>>>> the indices for which the value is set as 1 are: >>>>>> >>>>>> compI = 0 , d = 0 -----> index = 0 >>>>>> compI = 0 , d = 1 -----> index = 3 >>>>>> compI = 1 , d = 0 -----> index = 12 >>>>>> compI = 1 , d = 1 -----> index = 15 >>>>>> >>>>>> The values for the first and last seem correct to me, but they other >>>>>> two are confusing me. I see that this elasticity tensor (which is the >>>>>> derivative of the gradient by itself in this case) would be a four by four >>>>>> identity matrix in its matrix representation, so the indices in between >>>>>> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >>>>>> other. >>>>>> >>>>> >>>>> I have read this a few times, but I cannot understand that you are >>>>> asking. The simplest thing I can >>>>> respond is that we are indexing a row-major array, using the indices: >>>>> >>>>> g3[ic, id, jc, jd] >>>>> >>>>> where ic indexes the components of the trial field, id indexes the >>>>> derivative components, >>>>> jc indexes the basis field components, and jd its derivative >>>>> components. >>>>> >>>>> Matt >>>>> >>>>> >>>>>> I guess my question is then, how did you vectorize the fourth order >>>>>> tensor? 
>>>>>> >>>>>> Thanks in advance >>>>>> Miguel >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 21 10:24:32 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 21 Apr 2014 10:24:32 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: On Mon, Apr 21, 2014 at 10:23 AM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Thank you. Now it makes sense. Just to confirm though, back to the weak > form I wrote above in index notation: > > phi_{i,j} C_{ijkl} u_{k,l} > > The g3 indices are equivalent to the indices of this weak form in this > manner: > > ic = i > jc = k > id = j > jd = l > > Is this correct? This is what I understand when you talk about the > components and the derivatives of the components. > Yep, that is it. Thanks, Matt > On Mon, Apr 21, 2014 at 9:48 AM, Matthew Knepley wrote: > >> On Sun, Apr 20, 2014 at 4:17 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> I understand. So if we had a linear elastic material, the weak form >>> would be something like >>> >>> phi_{i,j} C_{ijkl} u_{k,l} >>> >>> where the term C_{ijkl} u_{k,l} would correspond to the term f_1(U, >>> gradient_U) in equation (1) in your paper I mentioned above, and g_3 would >>> be C_{ijkl}. Therefore the indices {i,j} would be equivalent to the indices >>> "ic, id" you mentioned before and "jc" and "jd" would be {k,l}? >>> >>> For g3[ic, id, jc, jd], transforming the four dimensional array to >>> linear memory would be like this: >>> >>> g3[((ic*Ncomp+id)*dim+jc)*dim+jd] = 1.0; >>> >>> where Ncomp and dim are equal to the problem's spatial dimension. 
>>> >>> However, in the code, there are only two loops, to exploit the symmetry >>> of the fourth order identity tensor: >>> >>> for (compI = 0; compI < Ncomp; ++compI) { >>> for (d = 0; d < dim; ++d) { >>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>> } >>> } >>> >>> Therefore the tensor entries that are set to one are: >>> >>> g3[0,0,0,0] >>> g3[1,1,0,0] >>> g3[0,0,1,1] >>> g3[1,1,1,1] >>> >>> This would be equivalent to the fourth order tensor \delta_{ij} >>> \delta_{kl}, but I think the one we need is \delta_{ik} \delta_{jl}, >>> because it is the derivative of a matrix with respect itself (or the >>> derivative of a gradient with respect to itself). This is assuming the >>> indices of g3 correspond to what I said. >>> >> >> I made an error explaining g3, which is indexed >> >> g3[ic, jc, id, jd] >> >> I thought this might be better since, it is composed of dim x dim blocks. >> I am not opposed to changing >> this if there is evidence that another thing is better. >> >> Matt >> >> >>> Thanks in advance. >>> >>> Miguel >>> >>> >>> On Apr 19, 2014 6:19 PM, "Matthew Knepley" wrote: >>> >>>> On Sat, Apr 19, 2014 at 5:25 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Thanks for your response. Now I understand a bit better your >>>>> implementation, but I am still confused. Is the g3 function equivalent to >>>>> the f_{1,1} function in the matrix in equation (3) in this paper: >>>>> http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have terms >>>>> that depend on the trial fields? I think this is what is confusing me. >>>>> >>>> >>>> Yes, it is. It has no terms that depend on the trial fields. It is just >>>> indexed by the number of components in that field. It is >>>> a continuum thing, which has nothing to do with the discretization. >>>> That is why it acts pointwise. >>>> >>>> Matt >>>> >>>> >>>>> On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: >>>>> >>>>>> On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Hello everybody. >>>>>>> >>>>>>> First, I am taking this example from the petsc-dev version, I am not >>>>>>> sure if I should have posted this in another mail-list, if so, my >>>>>>> apologies. >>>>>>> >>>>>>> In this example, for the elasticity case, function g3 is built as: >>>>>>> >>>>>>> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], const >>>>>>> PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >>>>>>> PetscScalar g3[]) >>>>>>> { >>>>>>> const PetscInt dim = spatialDim; >>>>>>> const PetscInt Ncomp = spatialDim; >>>>>>> PetscInt compI, d; >>>>>>> >>>>>>> for (compI = 0; compI < Ncomp; ++compI) { >>>>>>> for (d = 0; d < dim; ++d) { >>>>>>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>>>>>> } >>>>>>> } >>>>>>> } >>>>>>> >>>>>>> Therefore, a fourth-order tensor is represented as a vector. I was >>>>>>> checking the indices for different iterator values, and they do not seem to >>>>>>> match the vectorization that I have in mind. For a two dimensional case, >>>>>>> the indices for which the value is set as 1 are: >>>>>>> >>>>>>> compI = 0 , d = 0 -----> index = 0 >>>>>>> compI = 0 , d = 1 -----> index = 3 >>>>>>> compI = 1 , d = 0 -----> index = 12 >>>>>>> compI = 1 , d = 1 -----> index = 15 >>>>>>> >>>>>>> The values for the first and last seem correct to me, but they other >>>>>>> two are confusing me. 
I see that this elasticity tensor (which is the >>>>>>> derivative of the gradient by itself in this case) would be a four by four >>>>>>> identity matrix in its matrix representation, so the indices in between >>>>>>> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >>>>>>> other. >>>>>>> >>>>>> >>>>>> I have read this a few times, but I cannot understand that you are >>>>>> asking. The simplest thing I can >>>>>> respond is that we are indexing a row-major array, using the indices: >>>>>> >>>>>> g3[ic, id, jc, jd] >>>>>> >>>>>> where ic indexes the components of the trial field, id indexes the >>>>>> derivative components, >>>>>> jc indexes the basis field components, and jd its derivative >>>>>> components. >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> I guess my question is then, how did you vectorize the fourth order >>>>>>> tensor? >>>>>>> >>>>>>> Thanks in advance >>>>>>> Miguel >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Mon Apr 21 10:27:55 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 21 Apr 2014 10:27:55 -0500 Subject: [petsc-users] Elasticity tensor in ex52 In-Reply-To: References: Message-ID: Thank you. On Mon, Apr 21, 2014 at 10:24 AM, Matthew Knepley wrote: > On Mon, Apr 21, 2014 at 10:23 AM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Thank you. Now it makes sense. Just to confirm though, back to the weak >> form I wrote above in index notation: >> >> phi_{i,j} C_{ijkl} u_{k,l} >> >> The g3 indices are equivalent to the indices of this weak form in this >> manner: >> >> ic = i >> jc = k >> id = j >> jd = l >> >> Is this correct? This is what I understand when you talk about the >> components and the derivatives of the components. >> > > Yep, that is it. 
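To make the index bookkeeping concrete beyond the identity tensor, here is a minimal sketch of what an isotropic linear-elasticity version of g3 could look like. It is illustrative only, not code from ex12/ex52: it assumes the corrected g3[ic, jc, id, jd] ordering with ic = i, jc = k, id = j, jd = l, takes Ncomp == dim, mirrors the g3_elas signature and spatialDim global quoted in this thread, and uses hypothetical Lame constants lam and mu for C_{ijkl} = lam delta_ij delta_kl + mu (delta_ik delta_jl + delta_il delta_jk).

void g3_isotropic_elas(const PetscScalar u[], const PetscScalar gradU[], const PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], PetscScalar g3[])
{
  const PetscInt  dim   = spatialDim;
  const PetscInt  Ncomp = spatialDim;   /* one displacement component per space dimension */
  const PetscReal lam = 1.0, mu = 1.0;  /* hypothetical Lame parameters, hard-coded for the sketch */
  PetscInt        i, j, k, l;

  for (i = 0; i < Ncomp; ++i) {         /* ic = i  (phi component in phi_{i,j} C_{ijkl} u_{k,l}) */
    for (k = 0; k < Ncomp; ++k) {       /* jc = k  (u component)                                 */
      for (j = 0; j < dim; ++j) {       /* id = j  (derivative on phi)                           */
        for (l = 0; l < dim; ++l) {     /* jd = l  (derivative on u)                             */
          g3[((i*Ncomp + k)*dim + j)*dim + l] =
              lam * (i == j) * (k == l)
            + mu  * ((i == k) * (j == l) + (i == l) * (j == k));
        }
      }
    }
  }
}

Dropping lam and the delta_{il} delta_{jk} term recovers the plain delta_{ik} delta_{jl} identity that the existing two-loop version encodes; keeping both mu terms gives the operator acting on the symmetric gradient.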
> > Thanks, > > Matt > > >> On Mon, Apr 21, 2014 at 9:48 AM, Matthew Knepley wrote: >> >>> On Sun, Apr 20, 2014 at 4:17 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> I understand. So if we had a linear elastic material, the weak form >>>> would be something like >>>> >>>> phi_{i,j} C_{ijkl} u_{k,l} >>>> >>>> where the term C_{ijkl} u_{k,l} would correspond to the term f_1(U, >>>> gradient_U) in equation (1) in your paper I mentioned above, and g_3 would >>>> be C_{ijkl}. Therefore the indices {i,j} would be equivalent to the indices >>>> "ic, id" you mentioned before and "jc" and "jd" would be {k,l}? >>>> >>>> For g3[ic, id, jc, jd], transforming the four dimensional array to >>>> linear memory would be like this: >>>> >>>> g3[((ic*Ncomp+id)*dim+jc)*dim+jd] = 1.0; >>>> >>>> where Ncomp and dim are equal to the problem's spatial dimension. >>>> >>>> However, in the code, there are only two loops, to exploit the symmetry >>>> of the fourth order identity tensor: >>>> >>>> for (compI = 0; compI < Ncomp; ++compI) { >>>> for (d = 0; d < dim; ++d) { >>>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>>> } >>>> } >>>> >>>> Therefore the tensor entries that are set to one are: >>>> >>>> g3[0,0,0,0] >>>> g3[1,1,0,0] >>>> g3[0,0,1,1] >>>> g3[1,1,1,1] >>>> >>>> This would be equivalent to the fourth order tensor \delta_{ij} >>>> \delta_{kl}, but I think the one we need is \delta_{ik} \delta_{jl}, >>>> because it is the derivative of a matrix with respect itself (or the >>>> derivative of a gradient with respect to itself). This is assuming the >>>> indices of g3 correspond to what I said. >>>> >>> >>> I made an error explaining g3, which is indexed >>> >>> g3[ic, jc, id, jd] >>> >>> I thought this might be better since, it is composed of dim x dim >>> blocks. I am not opposed to changing >>> this if there is evidence that another thing is better. >>> >>> Matt >>> >>> >>>> Thanks in advance. >>>> >>>> Miguel >>>> >>>> >>>> On Apr 19, 2014 6:19 PM, "Matthew Knepley" wrote: >>>> >>>>> On Sat, Apr 19, 2014 at 5:25 PM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Thanks for your response. Now I understand a bit better your >>>>>> implementation, but I am still confused. Is the g3 function equivalent to >>>>>> the f_{1,1} function in the matrix in equation (3) in this paper: >>>>>> http://arxiv.org/pdf/1309.1204v2.pdf ? If so, why would it have >>>>>> terms that depend on the trial fields? I think this is what is confusing me. >>>>>> >>>>> >>>>> Yes, it is. It has no terms that depend on the trial fields. It is >>>>> just indexed by the number of components in that field. It is >>>>> a continuum thing, which has nothing to do with the discretization. >>>>> That is why it acts pointwise. >>>>> >>>>> Matt >>>>> >>>>> >>>>>> On Sat, Apr 19, 2014 at 11:35 AM, Matthew Knepley wrote: >>>>>> >>>>>>> On Fri, Apr 18, 2014 at 1:23 PM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> Hello everybody. >>>>>>>> >>>>>>>> First, I am taking this example from the petsc-dev version, I am >>>>>>>> not sure if I should have posted this in another mail-list, if so, my >>>>>>>> apologies. 
>>>>>>>> >>>>>>>> In this example, for the elasticity case, function g3 is built as: >>>>>>>> >>>>>>>> void g3_elas(const PetscScalar u[], const PetscScalar gradU[], >>>>>>>> const PetscScalar a[], const PetscScalar gradA[], const PetscReal x[], >>>>>>>> PetscScalar g3[]) >>>>>>>> { >>>>>>>> const PetscInt dim = spatialDim; >>>>>>>> const PetscInt Ncomp = spatialDim; >>>>>>>> PetscInt compI, d; >>>>>>>> >>>>>>>> for (compI = 0; compI < Ncomp; ++compI) { >>>>>>>> for (d = 0; d < dim; ++d) { >>>>>>>> g3[((compI*Ncomp+compI)*dim+d)*dim+d] = 1.0; >>>>>>>> } >>>>>>>> } >>>>>>>> } >>>>>>>> >>>>>>>> Therefore, a fourth-order tensor is represented as a vector. I was >>>>>>>> checking the indices for different iterator values, and they do not seem to >>>>>>>> match the vectorization that I have in mind. For a two dimensional case, >>>>>>>> the indices for which the value is set as 1 are: >>>>>>>> >>>>>>>> compI = 0 , d = 0 -----> index = 0 >>>>>>>> compI = 0 , d = 1 -----> index = 3 >>>>>>>> compI = 1 , d = 0 -----> index = 12 >>>>>>>> compI = 1 , d = 1 -----> index = 15 >>>>>>>> >>>>>>>> The values for the first and last seem correct to me, but they >>>>>>>> other two are confusing me. I see that this elasticity tensor (which is the >>>>>>>> derivative of the gradient by itself in this case) would be a four by four >>>>>>>> identity matrix in its matrix representation, so the indices in between >>>>>>>> would be 5 and 10 instead of 3 and 12, if we put one column on top of each >>>>>>>> other. >>>>>>>> >>>>>>> >>>>>>> I have read this a few times, but I cannot understand that you are >>>>>>> asking. The simplest thing I can >>>>>>> respond is that we are indexing a row-major array, using the indices: >>>>>>> >>>>>>> g3[ic, id, jc, jd] >>>>>>> >>>>>>> where ic indexes the components of the trial field, id indexes the >>>>>>> derivative components, >>>>>>> jc indexes the basis field components, and jd its derivative >>>>>>> components. >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> I guess my question is then, how did you vectorize the fourth order >>>>>>>> tensor? >>>>>>>> >>>>>>>> Thanks in advance >>>>>>>> Miguel >>>>>>>> >>>>>>>> -- >>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>> Graduate Research Assistant >>>>>>>> Department of Mechanical Science and Engineering >>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>> (217) 550-2360 >>>>>>>> salaza11 at illinois.edu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuentesdt at gmail.com Mon Apr 21 22:30:50 2014 From: fuentesdt at gmail.com (David Fuentes) Date: Mon, 21 Apr 2014 22:30:50 -0500 Subject: [petsc-users] petscfe_type opencl, variable_coefficient field Message-ID: Hi, Building off of snes/ex12, I'm trying to use -variable_coefficient field with -petscfe_type opencl -mat_petscfe_type opencl constant coefficients "-variable_coefficient none" works fine, but I am getting NaN in my residual vector for spatially varying coefficients. I tried changing N_t manually for N_comp_aux =1, also tried setting my variable coefficient field to a constant at each quadrature point to see if any difference. Is there any other place I may look ? https://bitbucket.org/petsc/petsc/src/fb3a4ffdea0449562fb0f58dad4b44552b6d3589/src/dm/dt/interface/dtfe.c?at=master#cl-4924 if (useAux) { ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail," /* Load coefficients a_i for this cell */\n"" /* TODO: This should not be N_t here, it should be N_bc*N_comp_aux */\n"" a_i[tidx] = coefficientsAux[Goffset+batch*N_t+tidx];\n", &count);STRING_ERROR_CHECK("Message to short"); } $ ./ireSolver -run_type full -dim 3 -petscspace_order 1 -mat_petscspace_order 0 -variable_coefficient field -snes_type ksponly -snes_monitor -snes_converged_reason -ksp_converged_reason -ksp_rtol 1.e-12 -pc_type bjacobi -vtk geserviceStudy0025axt1.vtk -f meshIRElores.e -sourcelandmark sourcelandmarks.vtk -targetlandmark targetlandmarks.vtk -petscfe_type opencl -mat_petscfe_type opencl -info -info_exclude null,vec,mat -petscfe_num_blocks 2 -petscfe_num_batches 2 . . . [0] PetscFEIntegrateResidual_OpenCL(): GPU layout grid(32,59,1) block(8,1,1) with 2 batches [0] PetscFEIntegrateResidual_OpenCL(): N_t: 8, N_cb: 2 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Floating point exception [0]PETSC ERROR: Vec entry at local location 120 is not-a-number or infinite at end of function: Parameter number 3 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.4.4-4111-g7562b2e GIT Date: 2014-04-18 18:37:01 -0500 [0]PETSC ERROR: ./ireSolver on a arch-precise-gcc-4.6.3-dbg named maeda by fuentes Mon Apr 21 22:23:25 2014 [0]PETSC ERROR: Configure options --with-shared-libraries --with-clanguage=c++ --CFLAGS=-O0 --CXXFLAGS=-O0 --download-ctetgen --download-triangle --with-debugging=yes --with-blas-lapack-dir=/opt/apps/MKL/12.1/lib/intel64/ --with-exodusii-lib="[/usr/lib/x86_64-linux-gnu/libexoIIv2.so]" --with-netcdf-dir=/usr/lib --with-hdf5-dir=/usr/lib --with-c2html=0 --with-exodusii-include=/usr/include --with-opencl-include=/usr/include/ --with-opencl-lib=/usr/lib/libOpenCL.so --download-viennacl=yes [0]PETSC ERROR: #1 VecValidValues() line 34 in /opt/apps/PETSc/petsc-usr/src/vec/vec/interface/rvector.c [0]PETSC ERROR: #2 SNESComputeFunction() line 2096 in /opt/apps/PETSc/petsc-usr/src/snes/interface/snes.c [0]PETSC ERROR: #3 SNESSolve_KSPONLY() line 24 in /opt/apps/PETSc/petsc-usr/src/snes/impls/ksponly/ksponly.c [0]PETSC ERROR: #4 SNESSolve() line 3794 in /opt/apps/PETSc/petsc-usr/src/snes/interface/snes.c [0]PETSC ERROR: #5 main() line 808 in /home/fuentes/github/DMPlexApplications/ireSolver.c [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 72) - process 0 thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From norihiro.w at gmail.com Tue Apr 22 03:31:42 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Tue, 22 Apr 2014 10:31:42 +0200 Subject: [petsc-users] SNES test with a nested matrix In-Reply-To: <87d2ggt373.fsf@jedbrown.org> References: <87d2ggt373.fsf@jedbrown.org> Message-ID: Thank you, Jed. "-snes_compare_explicit" worked with a nested matrix. I'll try to use MatGetLocalSubMatrix in my programm. Best, nori On Thu, Apr 17, 2014 at 4:56 PM, Jed Brown wrote: > Norihiro Watanabe writes: > > > Hi, I wonder whether "-snes_type test" option can work also with a nested > > matrix. I got the following error in my program and am not sure if there > is > > a problem in my codes or just PETSc doesn't support it. > > I believe -snes_compare_explicit should convert if necessary (it > provides similar functionality to -snes_type test). > > > Testing hand-coded Jacobian, if the ratio is > > O(1.e-8), the hand-coded Jacobian is probably correct. > > Run with -snes_test_display to show difference > > of hand-coded and finite difference Jacobian. > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: No support for this operation for this object type! > > [0]PETSC ERROR: Mat type nest! > > We strongly recommend writing your code in a way that is agnostic to > matrix format (i.e., assemble using MatGetLocalSubMatrix). Then just > use a monolithic format for comparisons like this and switch to MatNest > when trying to get the best performance out of a fieldsplit > preconditioner. > -- Norihiro Watanabe -------------- next part -------------- An HTML attachment was scrubbed... URL: From niklas at niklasfi.de Tue Apr 22 04:29:30 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 11:29:30 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! Message-ID: <5356367A.3090906@niklasfi.de> Hello, I have attached a small test case for a problem I am experiencing. 
What this dummy program does is it reads a vector and a matrix from a text file and then solves Ax=b. The same data is available in two forms: - everything is in one file (matops.s.0 and vops.s.0) - the matrix and vector are split between processes (matops.0, matops.1, vops.0, vops.1) The serial version of the program works perfectly fine but unfortunately errors occure, when running the parallel version: make && mpirun -n 2 a.out matops vops mpic++ -DPETSC_CLANGUAGE_CXX -isystem /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall -Wpedantic -std=c++11 -L /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc /usr/bin/ld: warning: libmpi_cxx.so.0, needed by /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, may conflict with libmpi_cxx.so.1 /usr/bin/ld: warning: libmpi.so.0, needed by /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, may conflict with libmpi.so.1 librdmacm: couldn't read ABI version. librdmacm: assuming: 4 CMA: unable to get RDMA device list -------------------------------------------------------------------------- [[43019,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: dornroeschen.igpm.rwth-aachen.de CMA: unable to get RDMA device list -------------------------------------------------------------------------- [[43019,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: dornroeschen.igpm.rwth-aachen.de Another transport will be used instead, although this may result Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: couldn't read ABI version. librdmacm: assuming: 4 CMA: unable to get RDMA device list Matrix size is 32x32 Matrix size is 32x32 Vector size is 32 Vector size is 32 [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Arguments must have same communicators! [1]PETSC ERROR: Different communicators in the two objects: Argument # 1 and 2 flag 3! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.4.0, May, 13, 2013 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. 
[1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: petscwrap on a arch-linux2-c-debug named dornroeschen.igpm.rwth-aachen.de by fischer Tue Apr 22 11:14:39 2014 [1]PETSC ERROR: Libraries linked from /usr/local/openmpi-4.8.1/lib [1]PETSC ERROR: Configure run at Fri Jun 7 09:32:12 2013 [1]PETSC ERROR: Configure options --with-netcdf=1 --prefix=/usr/local/openmpi-4.8.1 --with-mpi=1 --with-shared-libraries=1 --with-hdf=1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-netcdf-lib=/usr/local/openmpi-4.8.1/lib64/libnetcdf.so --with-metis=1 --with-netcdf-include=/usr/local/openmpi-4.8.1/include --with-metis-dir=/usr/local/openmpi-4.8.1 --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --libdir=/usr/local/openmpi-4.8.1/lib64 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: MatView() line 811 in /local/petsc-3.4.0/src/mat/interface/matrix.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments must have same communicators! [0]PETSC ERROR: Different communicators in the two objects: Argument # 1 and 2 flag 3! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.4.0, May, 13, 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: petscwrap on a arch-linux2-c-debug named dornroeschen.igpm.rwth-aachen.de by fischer Tue Apr 22 11:14:39 2014 [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Arguments must have same communicators! [1]PETSC ERROR: Different communicators in the two objects: Argument # 1 and 2 flag 3! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.4.0, May, 13, 2013 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. 
[1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: petscwrap on a arch-linux2-c-debug named dornroeschen.igpm.rwth-aachen.de by fischer Tue Apr 22 11:14:39 2014 [1]PETSC ERROR: Libraries linked from /usr/local/openmpi-4.8.1/lib [1]PETSC ERROR: Configure run at Fri Jun 7 09:32:12 2013 [1]PETSC ERROR: Configure options --with-netcdf=1 --prefix=/usr/local/openmpi-4.8.1 --with-mpi=1 --with-shared-libraries=1 --with-hdf=1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-netcdf-lib=/usr/local/openmpi-4.8.1/lib64/libnetcdf.so --with-metis=1 --with-netcdf-include=/usr/local/openmpi-4.8.1/include --with-metis-dir=/usr/local/openmpi-4.8.1 --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --libdir=/usr/local/openmpi-4.8.1/lib64 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: MatView() line 811 in /local/petsc-3.4.0/src/mat/interface/matrix.c [1]PETSC ERROR: main() line 108 in "unknowndirectory/"petsctest.cpp [0]PETSC ERROR: Libraries linked from /usr/local/openmpi-4.8.1/lib [0]PETSC ERROR: Configure run at Fri Jun 7 09:32:12 2013 [0]PETSC ERROR: Configure options --with-netcdf=1 --prefix=/usr/local/openmpi-4.8.1 --with-mpi=1 --with-shared-libraries=1 --with-hdf=1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-netcdf-lib=/usr/local/openmpi-4.8.1/lib64/libnetcdf.so --with-metis=1 --with-netcdf-include=/usr/local/openmpi-4.8.1/include --with-metis-dir=/usr/local/openmpi-4.8.1 --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --libdir=/usr/local/openmpi-4.8.1/lib64 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatView() line 811 in /local/petsc-3.4.0/src/mat/interface/matrix.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments must have same communicators! [0]PETSC ERROR: Different communicators in the two objects: Argument # 1 and 2 flag 3! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.4.0, May, 13, 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: petscwrap on a arch-linux2-c-debug named dornroeschen.igpm.rwth-aachen.de by fischer Tue Apr 22 11:14:39 2014 [0]PETSC ERROR: Libraries linked from /usr/local/openmpi-4.8.1/lib [0]PETSC ERROR: Configure run at Fri Jun 7 09:32:12 2013 [0]PETSC ERROR: Configure options --with-netcdf=1 --prefix=/usr/local/openmpi-4.8.1 --with-mpi=1 --with-shared-libraries=1 --with-hdf=1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-hdf-dir=/usr/local/openmpi-4.8.1 --with-netcdf-lib=/usr/local/openmpi-4.8.1/lib64/libnetcdf.so --with-metis=1 --with-netcdf-include=/usr/local/openmpi-4.8.1/include --with-metis-dir=/usr/local/openmpi-4.8.1 --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --libdir=/usr/local/openmpi-4.8.1/lib64 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatView() line 811 in /local/petsc-3.4.0/src/mat/interface/matrix.c [0]PETSC ERROR: main() line 108 in "unknowndirectory/"petsctest.cpp -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 80. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- PETSc KSPSolve done, residual norm: 30.4324, it took 10000 iterations.PETSc KSPSolve done, residual norm: 30.4324, it took 10000 iterations.-------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 127153 on node dornroeschen.igpm.rwth-aachen.de exiting improperly. There are two reasons this could occur: 1. this process did not call "init" before exiting, but others in the job did. This can cause a job to hang indefinitely while it waits for all processes to call "init". By rule, if one process calls "init", then ALL processes must call "init" prior to termination. 2. this process called "init", but exited without calling "finalize". By rule, all processes that call "init" MUST call "finalize" prior to exiting or it will be considered an "abnormal termination" This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- [dornroeschen.igpm.rwth-aachen.de:127152] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics [dornroeschen.igpm.rwth-aachen.de:127152] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages [dornroeschen.igpm.rwth-aachen.de:127152] 1 more process has sent help message help-mpi-api.txt / mpi-abort I would be glad to get some pointers as to where this problem comes from. Kind regards, Niklas Fischer -------------- next part -------------- A non-text attachment was scrubbed... 
Name: petsctest.tar.gz Type: application/gzip Size: 10146 bytes Desc: not available URL: From rupp at iue.tuwien.ac.at Tue Apr 22 05:03:44 2014 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Tue, 22 Apr 2014 12:03:44 +0200 Subject: [petsc-users] petscfe_type opencl, variable_coefficient field In-Reply-To: References: Message-ID: <53563E80.7060404@iue.tuwien.ac.at> Hi David, > I'm trying to use > > -variable_coefficient field with -petscfe_type opencl -mat_petscfe_type > opencl > > constant coefficients "-variable_coefficient none" works fine, but I am > getting NaN in my residual vector for spatially varying coefficients. > > I tried changing N_t manually for N_comp_aux =1, also tried setting my > variable coefficient field to a constant at each quadrature point to see > if any difference. This is - unfortunately - a known issue, where we currently don't know whether this is a race condition in the OpenCL kernel, or something goes wrong with the data setup. I expect this to require rather more elaborate debugging, so unfortunately I can only advise you to wait until this is fixed... Best regards, Karli From jed at jedbrown.org Tue Apr 22 06:08:58 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 22 Apr 2014 07:08:58 -0400 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <5356367A.3090906@niklasfi.de> References: <5356367A.3090906@niklasfi.de> Message-ID: <87k3ahboz9.fsf@jedbrown.org> Niklas Fischer writes: > Hello, > > I have attached a small test case for a problem I am experiencing. What > this dummy program does is it reads a vector and a matrix from a text > file and then solves Ax=b. The same data is available in two forms: > - everything is in one file (matops.s.0 and vops.s.0) > - the matrix and vector are split between processes (matops.0, > matops.1, vops.0, vops.1) > > The serial version of the program works perfectly fine but unfortunately > errors occure, when running the parallel version: > > make && mpirun -n 2 a.out matops vops > > mpic++ -DPETSC_CLANGUAGE_CXX -isystem > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem > /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall > -Wpedantic -std=c++11 -L > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc > /usr/bin/ld: warning: libmpi_cxx.so.0, needed by > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, > may conflict with libmpi_cxx.so.1 > /usr/bin/ld: warning: libmpi.so.0, needed by > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, > may conflict with libmpi.so.1 > librdmacm: couldn't read ABI version. > librdmacm: assuming: 4 > CMA: unable to get RDMA device list > -------------------------------------------------------------------------- > [[43019,1],0]: A high-performance Open MPI point-to-point messaging module > was unable to find any relevant network interfaces: > > Module: OpenFabrics (openib) > Host: dornroeschen.igpm.rwth-aachen.de > CMA: unable to get RDMA device list It looks like your MPI is either broken or some of the code linked into your application was compiled with a different MPI or different version. Make sure you can compile and run simple MPI programs in parallel. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From niklas at niklasfi.de Tue Apr 22 06:48:12 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 13:48:12 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <87k3ahboz9.fsf@jedbrown.org> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> Message-ID: <535656FC.5010507@niklasfi.de> Am 22.04.2014 13:08, schrieb Jed Brown: > Niklas Fischer writes: > >> Hello, >> >> I have attached a small test case for a problem I am experiencing. What >> this dummy program does is it reads a vector and a matrix from a text >> file and then solves Ax=b. The same data is available in two forms: >> - everything is in one file (matops.s.0 and vops.s.0) >> - the matrix and vector are split between processes (matops.0, >> matops.1, vops.0, vops.1) >> >> The serial version of the program works perfectly fine but unfortunately >> errors occure, when running the parallel version: >> >> make && mpirun -n 2 a.out matops vops >> >> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem >> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall >> -Wpedantic -std=c++11 -L >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc >> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >> may conflict with libmpi_cxx.so.1 >> /usr/bin/ld: warning: libmpi.so.0, needed by >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >> may conflict with libmpi.so.1 >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> -------------------------------------------------------------------------- >> [[43019,1],0]: A high-performance Open MPI point-to-point messaging module >> was unable to find any relevant network interfaces: >> >> Module: OpenFabrics (openib) >> Host: dornroeschen.igpm.rwth-aachen.de >> CMA: unable to get RDMA device list > It looks like your MPI is either broken or some of the code linked into > your application was compiled with a different MPI or different version. > Make sure you can compile and run simple MPI programs in parallel. Hello Jed, thank you for your inputs. Unfortunately MPI does not seem to be the issue here. The attachment contains a simple MPI hello world program which runs flawlessly (I will append the output to this mail) and I have not encountered any problems with other MPI programs. My question still stands. Greetings, Niklas Fischer mpirun -np 2 ./mpitest librdmacm: couldn't read ABI version. librdmacm: assuming: 4 CMA: unable to get RDMA device list -------------------------------------------------------------------------- [[44086,1],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces: Module: OpenFabrics (openib) Host: dornroeschen.igpm.rwth-aachen.de Another transport will be used instead, although this may result in lower performance. -------------------------------------------------------------------------- librdmacm: couldn't read ABI version. 
librdmacm: assuming: 4 CMA: unable to get RDMA device list Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out of 2 processors Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out of 2 processors [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages From knepley at gmail.com Tue Apr 22 06:57:24 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Apr 2014 06:57:24 -0500 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <535656FC.5010507@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> Message-ID: On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer wrote: > Am 22.04.2014 13:08, schrieb Jed Brown: > >> Niklas Fischer writes: >> >> Hello, >>> >>> I have attached a small test case for a problem I am experiencing. What >>> this dummy program does is it reads a vector and a matrix from a text >>> file and then solves Ax=b. The same data is available in two forms: >>> - everything is in one file (matops.s.0 and vops.s.0) >>> - the matrix and vector are split between processes (matops.0, >>> matops.1, vops.0, vops.1) >>> >>> The serial version of the program works perfectly fine but unfortunately >>> errors occure, when running the parallel version: >>> >>> make && mpirun -n 2 a.out matops vops >>> >>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem >>> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall >>> -Wpedantic -std=c++11 -L >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc >>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>> may conflict with libmpi_cxx.so.1 >>> /usr/bin/ld: warning: libmpi.so.0, needed by >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>> may conflict with libmpi.so.1 >>> librdmacm: couldn't read ABI version. >>> librdmacm: assuming: 4 >>> CMA: unable to get RDMA device list >>> ------------------------------------------------------------ >>> -------------- >>> [[43019,1],0]: A high-performance Open MPI point-to-point messaging >>> module >>> was unable to find any relevant network interfaces: >>> >>> Module: OpenFabrics (openib) >>> Host: dornroeschen.igpm.rwth-aachen.de >>> CMA: unable to get RDMA device list >>> >> It looks like your MPI is either broken or some of the code linked into >> your application was compiled with a different MPI or different version. >> Make sure you can compile and run simple MPI programs in parallel. >> > Hello Jed, > > thank you for your inputs. Unfortunately MPI does not seem to be the issue > here. The attachment contains a simple MPI hello world program which runs > flawlessly (I will append the output to this mail) and I have not > encountered any problems with other MPI programs. My question still stands. > This is a simple error. You created the matrix A using PETSC_COMM_WORLD, but you try to view it using PETSC_VIEWER_STDOUT_SELF. You need to use PETSC_VIEWER_STDOUT_WORLD in order to match. Thanks, Matt > Greetings, > Niklas Fischer > > mpirun -np 2 ./mpitest > > librdmacm: couldn't read ABI version. 
> librdmacm: assuming: 4 > CMA: unable to get RDMA device list > -------------------------------------------------------------------------- > [[44086,1],0]: A high-performance Open MPI point-to-point messaging module > was unable to find any relevant network interfaces: > > Module: OpenFabrics (openib) > Host: dornroeschen.igpm.rwth-aachen.de > > Another transport will be used instead, although this may result in > lower performance. > -------------------------------------------------------------------------- > librdmacm: couldn't read ABI version. > librdmacm: assuming: 4 > CMA: unable to get RDMA device list > Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out > of 2 processors > Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out > of 2 processors > [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help > message help-mpi-btl-base.txt / btl:no-nics > [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter > "orte_base_help_aggregate" to 0 to see all help / error messages > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 22 07:02:02 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 22 Apr 2014 08:02:02 -0400 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <535656FC.5010507@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> Message-ID: <87bnvtbmit.fsf@jedbrown.org> Niklas Fischer writes: > thank you for your inputs. Unfortunately MPI does not seem to be the > issue here. The attachment contains a simple MPI hello world program > which runs flawlessly (I will append the output to this mail) and I have > not encountered any problems with other MPI programs. My question still > stands. The output below is amazingly ugly and concerning. Matt answered your question, but in your build system, you should not be adding -DPETSC_CLANGUAGE_CXX; that is a configure-time option, not something that your code can change arbitrarily. Your makefile should include $PETSC_DIR/conf/variables (and conf/rules if you want, but not if you want to write your own rules). > mpirun -np 2 ./mpitest > > librdmacm: couldn't read ABI version. > librdmacm: assuming: 4 > CMA: unable to get RDMA device list > -------------------------------------------------------------------------- > [[44086,1],0]: A high-performance Open MPI point-to-point messaging module > was unable to find any relevant network interfaces: > > Module: OpenFabrics (openib) > Host: dornroeschen.igpm.rwth-aachen.de > > Another transport will be used instead, although this may result in > lower performance. > -------------------------------------------------------------------------- > librdmacm: couldn't read ABI version. 
> librdmacm: assuming: 4 > CMA: unable to get RDMA device list > Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out > of 2 processors > Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out > of 2 processors > [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help > message help-mpi-btl-base.txt / btl:no-nics > [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter > "orte_base_help_aggregate" to 0 to see all help / error messages -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From niklas at niklasfi.de Tue Apr 22 07:11:50 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 14:11:50 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <87bnvtbmit.fsf@jedbrown.org> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <87bnvtbmit.fsf@jedbrown.org> Message-ID: <53565C86.2070802@niklasfi.de> Am 22.04.2014 14:02, schrieb Jed Brown: > Niklas Fischer writes: >> thank you for your inputs. Unfortunately MPI does not seem to be the >> issue here. The attachment contains a simple MPI hello world program >> which runs flawlessly (I will append the output to this mail) and I have >> not encountered any problems with other MPI programs. My question still >> stands. > The output below is amazingly ugly and concerning. Matt answered your > question, but in your build system, you should not be adding > -DPETSC_CLANGUAGE_CXX; that is a configure-time option, not something > that your code can change arbitrarily. Your makefile should include > $PETSC_DIR/conf/variables (and conf/rules if you want, but not if you > want to write your own rules). Thank you for your advice. As I stated earlier, this is just a test case. I normally use (your) cmake module when using PETSc. > >> mpirun -np 2 ./mpitest >> >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> -------------------------------------------------------------------------- >> [[44086,1],0]: A high-performance Open MPI point-to-point messaging module >> was unable to find any relevant network interfaces: >> >> Module: OpenFabrics (openib) >> Host: dornroeschen.igpm.rwth-aachen.de >> >> Another transport will be used instead, although this may result in >> lower performance. >> -------------------------------------------------------------------------- >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out >> of 2 processors >> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out >> of 2 processors >> [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help >> message help-mpi-btl-base.txt / btl:no-nics >> [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter >> "orte_base_help_aggregate" to 0 to see all help / error messages About the MPI output: I have asked the system administrator about this, and he is of the oppinion that everything is as it should be. Initially, I found the messages concerning as well, but all they are really saying is that MPI defaults to using a slower link to exchange messages. 
Greetings, Niklas Fischer From niklas at niklasfi.de Tue Apr 22 07:12:35 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 14:12:35 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> Message-ID: <53565CB3.8030604@niklasfi.de> Am 22.04.2014 13:57, schrieb Matthew Knepley: > On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer > wrote: > > Am 22.04.2014 13:08, schrieb Jed Brown: > > Niklas Fischer > writes: > > Hello, > > I have attached a small test case for a problem I am > experiencing. What > this dummy program does is it reads a vector and a matrix > from a text > file and then solves Ax=b. The same data is available in > two forms: > - everything is in one file (matops.s.0 and vops.s.0) > - the matrix and vector are split between processes > (matops.0, > matops.1, vops.0, vops.1) > > The serial version of the program works perfectly fine but > unfortunately > errors occure, when running the parallel version: > > make && mpirun -n 2 a.out matops vops > > mpic++ -DPETSC_CLANGUAGE_CXX -isystem > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include > -isystem > /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp > -Werror -Wall > -Wpedantic -std=c++11 -L > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc > /usr/bin/ld: warning: libmpi_cxx.so.0, needed by > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, > may conflict with libmpi_cxx.so.1 > /usr/bin/ld: warning: libmpi.so.0, needed by > /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, > may conflict with libmpi.so.1 > librdmacm: couldn't read ABI version. > librdmacm: assuming: 4 > CMA: unable to get RDMA device list > -------------------------------------------------------------------------- > [[43019,1],0]: A high-performance Open MPI point-to-point > messaging module > was unable to find any relevant network interfaces: > > Module: OpenFabrics (openib) > Host: dornroeschen.igpm.rwth-aachen.de > > CMA: unable to get RDMA device list > > It looks like your MPI is either broken or some of the code > linked into > your application was compiled with a different MPI or > different version. > Make sure you can compile and run simple MPI programs in parallel. > > Hello Jed, > > thank you for your inputs. Unfortunately MPI does not seem to be > the issue here. The attachment contains a simple MPI hello world > program which runs flawlessly (I will append the output to this > mail) and I have not encountered any problems with other MPI > programs. My question still stands. > > > This is a simple error. You created the matrix A using > PETSC_COMM_WORLD, but you try to view it > using PETSC_VIEWER_STDOUT_SELF. You need to use > PETSC_VIEWER_STDOUT_WORLD in > order to match. > > Thanks, > > Matt > > Greetings, > Niklas Fischer > > mpirun -np 2 ./mpitest > > librdmacm: couldn't read ABI version. > librdmacm: assuming: 4 > CMA: unable to get RDMA device list > -------------------------------------------------------------------------- > [[44086,1],0]: A high-performance Open MPI point-to-point > messaging module > was unable to find any relevant network interfaces: > > Module: OpenFabrics (openib) > Host: dornroeschen.igpm.rwth-aachen.de > > > Another transport will be used instead, although this may result in > lower performance. 
> -------------------------------------------------------------------------- > librdmacm: couldn't read ABI version. > librdmacm: assuming: 4 > CMA: unable to get RDMA device list > Hello world from processor dornroeschen.igpm.rwth-aachen.de > , rank 0 out of 2 processors > Hello world from processor dornroeschen.igpm.rwth-aachen.de > , rank 1 out of 2 processors > [dornroeschen.igpm.rwth-aachen.de:128141 > ] 1 more process > has sent help message help-mpi-btl-base.txt / btl:no-nics > [dornroeschen.igpm.rwth-aachen.de:128141 > ] Set MCA > parameter "orte_base_help_aggregate" to 0 to see all help / error > messages > > Thank you, Matthew, this solves my viewing problem. Am I doing something wrong when initializing the matrices as well? The matrix' viewing output starts with "Matrix Object: 1 MPI processes" and the Krylov solver does not converge. Your help is really appreciated, Niklas Fischer > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 22 07:21:49 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 22 Apr 2014 08:21:49 -0400 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <53565C86.2070802@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <87bnvtbmit.fsf@jedbrown.org> <53565C86.2070802@niklasfi.de> Message-ID: <878uqxbllu.fsf@jedbrown.org> Niklas Fischer writes: > About the MPI output: I have asked the system administrator about this, > and he is of the oppinion that everything is as it should be. Fire the sysadmin? ;-) > Initially, I found the messages concerning as well, but all they are > really saying is that MPI defaults to using a slower link to exchange > messages. If that network interface is intentional, it should be set as the default so that every run doesn't spam you with this stuff. The ABI version warning is another possible problem. Even if the program ultimately runs correctly, spewing these warnings is not "as it should be". -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From niklas at niklasfi.de Tue Apr 22 07:59:40 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 14:59:40 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <53565CB3.8030604@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <53565CB3.8030604@niklasfi.de> Message-ID: <535667BC.10607@niklasfi.de> I should probably note that everything is fine if I run the serial version of this (with the exact same matrix + right hand side). PETSc KSPSolve done, residual norm: 3.13459e-13, it took 6 iterations. Am 22.04.2014 14:12, schrieb Niklas Fischer: > > Am 22.04.2014 13:57, schrieb Matthew Knepley: >> On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer > > wrote: >> >> Am 22.04.2014 13:08, schrieb Jed Brown: >> >> Niklas Fischer > > writes: >> >> Hello, >> >> I have attached a small test case for a problem I am >> experiencing. 
What >> this dummy program does is it reads a vector and a matrix >> from a text >> file and then solves Ax=b. The same data is available in >> two forms: >> - everything is in one file (matops.s.0 and vops.s.0) >> - the matrix and vector are split between processes >> (matops.0, >> matops.1, vops.0, vops.1) >> >> The serial version of the program works perfectly fine >> but unfortunately >> errors occure, when running the parallel version: >> >> make && mpirun -n 2 a.out matops vops >> >> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include >> -isystem >> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp >> -Werror -Wall >> -Wpedantic -std=c++11 -L >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib >> -lpetsc >> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >> may conflict with libmpi_cxx.so.1 >> /usr/bin/ld: warning: libmpi.so.0, needed by >> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >> may conflict with libmpi.so.1 >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> -------------------------------------------------------------------------- >> [[43019,1],0]: A high-performance Open MPI point-to-point >> messaging module >> was unable to find any relevant network interfaces: >> >> Module: OpenFabrics (openib) >> Host: dornroeschen.igpm.rwth-aachen.de >> >> CMA: unable to get RDMA device list >> >> It looks like your MPI is either broken or some of the code >> linked into >> your application was compiled with a different MPI or >> different version. >> Make sure you can compile and run simple MPI programs in >> parallel. >> >> Hello Jed, >> >> thank you for your inputs. Unfortunately MPI does not seem to be >> the issue here. The attachment contains a simple MPI hello world >> program which runs flawlessly (I will append the output to this >> mail) and I have not encountered any problems with other MPI >> programs. My question still stands. >> >> >> This is a simple error. You created the matrix A using >> PETSC_COMM_WORLD, but you try to view it >> using PETSC_VIEWER_STDOUT_SELF. You need to use >> PETSC_VIEWER_STDOUT_WORLD in >> order to match. >> >> Thanks, >> >> Matt >> >> Greetings, >> Niklas Fischer >> >> mpirun -np 2 ./mpitest >> >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> -------------------------------------------------------------------------- >> [[44086,1],0]: A high-performance Open MPI point-to-point >> messaging module >> was unable to find any relevant network interfaces: >> >> Module: OpenFabrics (openib) >> Host: dornroeschen.igpm.rwth-aachen.de >> >> >> Another transport will be used instead, although this may result in >> lower performance. >> -------------------------------------------------------------------------- >> librdmacm: couldn't read ABI version. 
>> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> Hello world from processor dornroeschen.igpm.rwth-aachen.de >> , rank 0 out of 2 processors >> Hello world from processor dornroeschen.igpm.rwth-aachen.de >> , rank 1 out of 2 processors >> [dornroeschen.igpm.rwth-aachen.de:128141 >> ] 1 more process >> has sent help message help-mpi-btl-base.txt / btl:no-nics >> [dornroeschen.igpm.rwth-aachen.de:128141 >> ] Set MCA >> parameter "orte_base_help_aggregate" to 0 to see all help / error >> messages >> >> > Thank you, Matthew, this solves my viewing problem. Am I doing > something wrong when initializing the matrices as well? The matrix' > viewing output starts with "Matrix Object: 1 MPI processes" and the > Krylov solver does not converge. > > Your help is really appreciated, > Niklas Fischer >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From romvegeta at yahoo.fr Tue Apr 22 08:06:29 2014 From: romvegeta at yahoo.fr (Veltz Romain) Date: Tue, 22 Apr 2014 15:06:29 +0200 Subject: [petsc-users] setting sundials options Message-ID: Dear PETSc users, I am wondering how the options for the SUNDIALS solvers can be set in Petsc4py. For example, there is ts.setType(ts.Type.SUNDIALS) but how can we set: TSSundialsSetType TSSundialsSetTolerance TSSundialsGetPC Thank you for your help, Bests. From fischega at westinghouse.com Tue Apr 22 08:48:10 2014 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Tue, 22 Apr 2014 09:48:10 -0400 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? Message-ID: Hello PETSc-users, I'm using the SNES component with the NGMRES method in my application. I'm using a matrix-free context for the Jacobian and the MatMFFDComputeJacobian() function in my FormJacobian routine. My understanding is that this effectively approximates the Jacobian using the equation at the bottom of Page 103 in the PETSc User's Manual. This works, but the expense of computing two function evaluations in each SNES iteration nearly wipes out the performance improvements over Picard iteration. Based on my (limited) understanding of the Oosterlee/Washio SIAM paper ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to suggest that it's possible to approximate the Jacobian with a series of previously-computed residuals (eq 2.14), rather than additional function evaluations in each iteration. Is this correct? If so, could someone point me to a reference that demonstrates how to do this with PETSc? Or, perhaps a better question to ask is: are there other ways of reducing the computing burden associated with estimating the Jacobian? Thanks, Greg From prbrune at gmail.com Tue Apr 22 09:16:29 2014 From: prbrune at gmail.com (Peter Brune) Date: Tue, 22 Apr 2014 09:16:29 -0500 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. wrote: > Hello PETSc-users, > > I'm using the SNES component with the NGMRES method in my application. I'm > using a matrix-free context for the Jacobian and the > MatMFFDComputeJacobian() function in my FormJacobian routine. My > understanding is that this effectively approximates the Jacobian using the > equation at the bottom of Page 103 in the PETSc User's Manual. 
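(If I am reading the manual right, that is the usual matrix-free directional-difference approximation,

    F'(u) a \approx [ F(u + h*a) - F(u) ] / h,

with h a small differencing parameter typically chosen from the norms of u and a, so each application of the approximate Jacobian costs one extra residual evaluation beyond the F(u) that is already available.)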
This works, > but the expense of computing two function evaluations in each SNES > iteration nearly wipes out the performance improvements over Picard > iteration. > Try -snes_type anderson. It's less stable than NGMRES, but requires one function evaluation per iteration. The manual is out of date. I guess it's time to fix that. It's interesting that the cost of matrix assembly and a linear solve is around the same as that of a function evaluation. Output from -log_summary would help in the diagnosis. > > Based on my (limited) understanding of the Oosterlee/Washio SIAM paper > ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to > suggest that it's possible to approximate the Jacobian with a series of > previously-computed residuals (eq 2.14), rather than additional function > evaluations in each iteration. Is this correct? If so, could someone point > me to a reference that demonstrates how to do this with PETSc? > What indication do you have that the Jacobian is calculated at all in the NGMRES method? The two function evaluations are related to computing the quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian approximation for the minimization problem (2.14). - Peter > Or, perhaps a better question to ask is: are there other ways of reducing > the computing burden associated with estimating the Jacobian? > > Thanks, > Greg > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuentesdt at gmail.com Tue Apr 22 09:26:16 2014 From: fuentesdt at gmail.com (David Fuentes) Date: Tue, 22 Apr 2014 09:26:16 -0500 Subject: [petsc-users] petscfe_type opencl, variable_coefficient field In-Reply-To: <53563E80.7060404@iue.tuwien.ac.at> References: <53563E80.7060404@iue.tuwien.ac.at> Message-ID: Thanks. Also, the physics seems to be controlled by the "op" variable? Is there an interface to change the physics from the user code ? https://bitbucket.org/petsc/petsc/src/fb3a4ffdea0449562fb0f58dad4b44552b6d3589/src/dm/dt/interface/dtfe.c?at=master#cl-5028 switch (op) { case LAPLACIAN: if (useF0) {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail, " f_0[fidx] = 4.0;\n", &count);STRING_ERROR_CHECK("Message to short");} if (useF1) { if (useAux) {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail, " f_1[fidx] = a[cell]*gradU[cidx];\n", &count);STRING_ERROR_CHECK("Message to short");} else {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail, " f_1[fidx] = gradU[cidx];\n", &count);STRING_ERROR_CHECK("Message to short");} } break; case ELASTICITY: if (useF0) {i On Tue, Apr 22, 2014 at 5:03 AM, Karl Rupp wrote: > Hi David, > > > > I'm trying to use > >> >> -variable_coefficient field with -petscfe_type opencl -mat_petscfe_type >> opencl >> >> constant coefficients "-variable_coefficient none" works fine, but I am >> getting NaN in my residual vector for spatially varying coefficients. >> >> I tried changing N_t manually for N_comp_aux =1, also tried setting my >> variable coefficient field to a constant at each quadrature point to see >> if any difference. >> > > This is - unfortunately - a known issue, where we currently don't know > whether this is a race condition in the OpenCL kernel, or something goes > wrong with the data setup. I expect this to require rather more elaborate > debugging, so unfortunately I can only advise you to wait until this is > fixed... > > Best regards, > Karli > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rupp at iue.tuwien.ac.at Tue Apr 22 09:42:32 2014 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Tue, 22 Apr 2014 16:42:32 +0200 Subject: [petsc-users] petscfe_type opencl, variable_coefficient field In-Reply-To: References: <53563E80.7060404@iue.tuwien.ac.at> Message-ID: <53567FD8.30705@iue.tuwien.ac.at> Hi David, > Thanks. Also, the physics seems to be controlled by the "op" variable? > Is there an interface to change the physics from the user code ? I know that our intention is to provide the user with control over the physics in the long run, but (unless Matt implemented something recently) this is still work in progress... Best regards, Karli > > https://bitbucket.org/petsc/petsc/src/fb3a4ffdea0449562fb0f58dad4b44552b6d3589/src/dm/dt/interface/dtfe.c?at=master#cl-5028 > > switch (op) { > case LAPLACIAN: > if (useF0) {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail, " f_0[fidx] = 4.0;\n", &count);STRING_ERROR_CHECK("Message to short");} > if (useF1) { > if (useAux) {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail, " f_1[fidx] = a[cell]*gradU[cidx];\n", &count);STRING_ERROR_CHECK("Message to short");} > else {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - string_tail, " f_1[fidx] = gradU[cidx];\n", &count);STRING_ERROR_CHECK("Message to short");} > } > break; > case ELASTICITY: > if (useF0) {i > > > > > On Tue, Apr 22, 2014 at 5:03 AM, Karl Rupp > wrote: > > Hi David, > > > > I'm trying to use > > > -variable_coefficient field with -petscfe_type opencl > -mat_petscfe_type > opencl > > constant coefficients "-variable_coefficient none" works fine, > but I am > getting NaN in my residual vector for spatially varying > coefficients. > > I tried changing N_t manually for N_comp_aux =1, also tried > setting my > variable coefficient field to a constant at each quadrature > point to see > if any difference. > > > This is - unfortunately - a known issue, where we currently don't > know whether this is a race condition in the OpenCL kernel, or > something goes wrong with the data setup. I expect this to require > rather more elaborate debugging, so unfortunately I can only advise > you to wait until this is fixed... > > Best regards, > Karli > > From lu_qin_2000 at yahoo.com Tue Apr 22 09:43:24 2014 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Tue, 22 Apr 2014 07:43:24 -0700 (PDT) Subject: [petsc-users] 2-stage preconditioner Message-ID: <1398177804.57321.YahooMailNeo@web160201.mail.bf1.yahoo.com> Hello, ? I need to implement a 2-stage preconditioner using PETSc linear solver: ? 1. The first stage uses a user-provided preconditioner routine. It seems?I can set it with?PCShellSetApply. 2. The second stage uses PETSc's ILU. ? Shall I just call this two preconditioners in sequence, or there is a particular way to hook them up? Is there any sample code for this? ? Many thanks for your suggestions. ? Best Regards, Qin -------------- next part -------------- An HTML attachment was scrubbed... URL: From prbrune at gmail.com Tue Apr 22 09:49:42 2014 From: prbrune at gmail.com (Peter Brune) Date: Tue, 22 Apr 2014 09:49:42 -0500 Subject: [petsc-users] 2-stage preconditioner In-Reply-To: <1398177804.57321.YahooMailNeo@web160201.mail.bf1.yahoo.com> References: <1398177804.57321.YahooMailNeo@web160201.mail.bf1.yahoo.com> Message-ID: PCComposite is probably the answer http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCCOMPOSITE.html The multiplicative variant will call one after the other. 
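As a rough sketch (untested, error checking omitted, and with ApplyStage1() standing in for whatever your user-provided first-stage routine is), the setup could look like:

  KSP ksp;
  PC  pc, stage1;
  /* ... after KSPCreate() and KSPSetOperators() ... */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCCOMPOSITE);
  PCCompositeSetType(pc, PC_COMPOSITE_MULTIPLICATIVE);
  PCCompositeAddPC(pc, PCSHELL);   /* stage 1: user-provided preconditioner */
  PCCompositeAddPC(pc, PCILU);     /* stage 2: PETSc ILU */
  PCCompositeGetPC(pc, 0, &stage1);
  PCShellSetApply(stage1, ApplyStage1);

(Note that PETSc's own ILU is sequential, so in parallel the second stage would usually be bjacobi or asm with ILU on the blocks. You can also select all of this from the command line with -pc_type composite -pc_composite_type multiplicative -pc_composite_pcs shell,ilu.)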
In order to nest it you may have to register your own custom type rather than using shell; this is doable with http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCRegister.html - Peter On Tue, Apr 22, 2014 at 9:43 AM, Qin Lu wrote: > Hello, > > I need to implement a 2-stage preconditioner using PETSc linear solver: > > 1. The first stage uses a user-provided preconditioner routine. It seems I > can set it with PCShellSetApply. > 2. The second stage uses PETSc's ILU. > > Shall I just call this two preconditioners in sequence, or there is a > particular way to hook them up? Is there any sample code for this? > > Many thanks for your suggestions. > > Best Regards, > Qin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 22 10:13:44 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Apr 2014 10:13:44 -0500 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <535667BC.10607@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <53565CB3.8030604@niklasfi.de> <535667BC.10607@niklasfi.de> Message-ID: On Tue, Apr 22, 2014 at 7:59 AM, Niklas Fischer wrote: > I should probably note that everything is fine if I run the serial > version of this (with the exact same matrix + right hand side). > > PETSc KSPSolve done, residual norm: 3.13459e-13, it took 6 iterations. > Yes, your preconditioner is weaker in parallel since it is block Jacobi. If you just want to solve the problem, use a parallel sparse direct factorization, like SuperLU_dist or MUMPS. You reconfigure using --download-superlu-dist or --download-mumps, and then use -pc_type lu -pc_factor_mat_solver_package mumps If you want a really scalable solution, then you have to know about your operator, not just the discretization. Matt > Am 22.04.2014 14:12, schrieb Niklas Fischer: > > > Am 22.04.2014 13:57, schrieb Matthew Knepley: > > On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer wrote: > >> Am 22.04.2014 13:08, schrieb Jed Brown: >> >>> Niklas Fischer writes: >>> >>> Hello, >>>> >>>> I have attached a small test case for a problem I am experiencing. What >>>> this dummy program does is it reads a vector and a matrix from a text >>>> file and then solves Ax=b. The same data is available in two forms: >>>> - everything is in one file (matops.s.0 and vops.s.0) >>>> - the matrix and vector are split between processes (matops.0, >>>> matops.1, vops.0, vops.1) >>>> >>>> The serial version of the program works perfectly fine but unfortunately >>>> errors occure, when running the parallel version: >>>> >>>> make && mpirun -n 2 a.out matops vops >>>> >>>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem >>>> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall >>>> -Wpedantic -std=c++11 -L >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc >>>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>> may conflict with libmpi_cxx.so.1 >>>> /usr/bin/ld: warning: libmpi.so.0, needed by >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>> may conflict with libmpi.so.1 >>>> librdmacm: couldn't read ABI version. 
>>>> librdmacm: assuming: 4 >>>> CMA: unable to get RDMA device list >>>> >>>> -------------------------------------------------------------------------- >>>> [[43019,1],0]: A high-performance Open MPI point-to-point messaging >>>> module >>>> was unable to find any relevant network interfaces: >>>> >>>> Module: OpenFabrics (openib) >>>> Host: dornroeschen.igpm.rwth-aachen.de >>>> CMA: unable to get RDMA device list >>>> >>> It looks like your MPI is either broken or some of the code linked into >>> your application was compiled with a different MPI or different version. >>> Make sure you can compile and run simple MPI programs in parallel. >>> >> Hello Jed, >> >> thank you for your inputs. Unfortunately MPI does not seem to be the >> issue here. The attachment contains a simple MPI hello world program which >> runs flawlessly (I will append the output to this mail) and I have not >> encountered any problems with other MPI programs. My question still stands. >> > > This is a simple error. You created the matrix A using PETSC_COMM_WORLD, > but you try to view it > using PETSC_VIEWER_STDOUT_SELF. You need to use PETSC_VIEWER_STDOUT_WORLD > in > order to match. > > Thanks, > > Matt > > >> Greetings, >> Niklas Fischer >> >> mpirun -np 2 ./mpitest >> >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> -------------------------------------------------------------------------- >> [[44086,1],0]: A high-performance Open MPI point-to-point messaging module >> was unable to find any relevant network interfaces: >> >> Module: OpenFabrics (openib) >> Host: dornroeschen.igpm.rwth-aachen.de >> >> Another transport will be used instead, although this may result in >> lower performance. >> -------------------------------------------------------------------------- >> librdmacm: couldn't read ABI version. >> librdmacm: assuming: 4 >> CMA: unable to get RDMA device list >> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out >> of 2 processors >> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out >> of 2 processors >> [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help >> message help-mpi-btl-base.txt / btl:no-nics >> [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter >> "orte_base_help_aggregate" to 0 to see all help / error messages >> > > Thank you, Matthew, this solves my viewing problem. Am I doing > something wrong when initializing the matrices as well? The matrix' viewing > output starts with "Matrix Object: 1 MPI processes" and the Krylov solver > does not converge. > > Your help is really appreciated, > Niklas Fischer > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Apr 22 10:16:27 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Apr 2014 10:16:27 -0500 Subject: [petsc-users] petscfe_type opencl, variable_coefficient field In-Reply-To: <53567FD8.30705@iue.tuwien.ac.at> References: <53563E80.7060404@iue.tuwien.ac.at> <53567FD8.30705@iue.tuwien.ac.at> Message-ID: On Tue, Apr 22, 2014 at 9:42 AM, Karl Rupp wrote: > Hi David, > > > Thanks. Also, the physics seems to be controlled by the "op" variable? > >> Is there an interface to change the physics from the user code ? >> > > I know that our intention is to provide the user with control over the > physics in the long run, but (unless Matt implemented something recently) > this is still work in progress... > This was just my quick hack. Its not bad for a user to change. You just provide the source for the pointwise function that you would have on the CPU. I did not put in the interface for passing in a string instead of a function pointer because I was the only one using it. If you use it too, I will put it in. Thanks, Matt > Best regards, > Karli > > > > >> https://bitbucket.org/petsc/petsc/src/fb3a4ffdea0449562fb0f58dad4b44 >> 552b6d3589/src/dm/dt/interface/dtfe.c?at=master#cl-5028 >> >> switch (op) { >> case LAPLACIAN: >> if (useF0) {ierr = PetscSNPrintfCount(string_tail, end_of_buffer - >> string_tail, " f_0[fidx] = 4.0;\n", &count);STRING_ERROR_CHECK("Message >> to short");} >> if (useF1) { >> if (useAux) {ierr = PetscSNPrintfCount(string_tail, end_of_buffer >> - string_tail, " f_1[fidx] = a[cell]*gradU[cidx];\n", >> &count);STRING_ERROR_CHECK("Message to short");} >> else {ierr = PetscSNPrintfCount(string_tail, end_of_buffer >> - string_tail, " f_1[fidx] = gradU[cidx];\n", >> &count);STRING_ERROR_CHECK("Message to short");} >> } >> break; >> case ELASTICITY: >> if (useF0) {i >> >> >> >> >> On Tue, Apr 22, 2014 at 5:03 AM, Karl Rupp > > wrote: >> >> Hi David, >> >> >> > I'm trying to use >> >> >> -variable_coefficient field with -petscfe_type opencl >> -mat_petscfe_type >> opencl >> >> constant coefficients "-variable_coefficient none" works fine, >> but I am >> getting NaN in my residual vector for spatially varying >> coefficients. >> >> I tried changing N_t manually for N_comp_aux =1, also tried >> setting my >> variable coefficient field to a constant at each quadrature >> point to see >> if any difference. >> >> >> This is - unfortunately - a known issue, where we currently don't >> know whether this is a race condition in the OpenCL kernel, or >> something goes wrong with the data setup. I expect this to require >> rather more elaborate debugging, so unfortunately I can only advise >> you to wait until this is fixed... >> >> Best regards, >> Karli >> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From niklas at niklasfi.de Tue Apr 22 10:23:00 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 17:23:00 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! 
In-Reply-To: <535667BC.10607@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <53565CB3.8030604@niklasfi.de> <535667BC.10607@niklasfi.de> Message-ID: <53568954.7030604@niklasfi.de> I have tracked down the problem further and it basically boils down to the following question: Is it possible to use MatSetValue(s) to set values which are owned by other processes? If I create a matrix with for(int i = 0; i < 4 * size; ++i){ CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); } for n=m=4, on four processes, one would expect each entry to be 1 + 2 + 3 + 4 = 10, however, PETSc prints Matrix Object: 1 MPI processes type: mpiaij row 0: (0, 1) row 1: (1, 1) row 2: (2, 1) row 3: (3, 1) row 4: (4, 2) row 5: (5, 2) row 6: (6, 2) row 7: (7, 2) row 8: (8, 3) row 9: (9, 3) row 10: (10, 3) row 11: (11, 3) row 12: (12, 4) row 13: (13, 4) row 14: (14, 4) row 15: (15, 4) which is exactly, what CHKERRXX(VecGetOwnershipRange(x, &ownership_start, &ownership_end)); for(int i = ownership_start; i < ownership_end; ++i){ CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); } would give us. Kind regards, Niklas Fischer Am 22.04.2014 14:59, schrieb Niklas Fischer: > I should probably note that everything is fine if I run the serial > version of this (with the exact same matrix + right hand side). > > PETSc KSPSolve done, residual norm: 3.13459e-13, it took 6 iterations. > > Am 22.04.2014 14:12, schrieb Niklas Fischer: >> >> Am 22.04.2014 13:57, schrieb Matthew Knepley: >>> On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer >> > wrote: >>> >>> Am 22.04.2014 13:08, schrieb Jed Brown: >>> >>> Niklas Fischer >> > writes: >>> >>> Hello, >>> >>> I have attached a small test case for a problem I am >>> experiencing. What >>> this dummy program does is it reads a vector and a >>> matrix from a text >>> file and then solves Ax=b. The same data is available in >>> two forms: >>> - everything is in one file (matops.s.0 and vops.s.0) >>> - the matrix and vector are split between processes >>> (matops.0, >>> matops.1, vops.0, vops.1) >>> >>> The serial version of the program works perfectly fine >>> but unfortunately >>> errors occure, when running the parallel version: >>> >>> make && mpirun -n 2 a.out matops vops >>> >>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include >>> -isystem >>> /home/data/fischer/libs/petsc-3.4.3/include >>> petsctest.cpp -Werror -Wall >>> -Wpedantic -std=c++11 -L >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib >>> -lpetsc >>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>> may conflict with libmpi_cxx.so.1 >>> /usr/bin/ld: warning: libmpi.so.0, needed by >>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>> may conflict with libmpi.so.1 >>> librdmacm: couldn't read ABI version. >>> librdmacm: assuming: 4 >>> CMA: unable to get RDMA device list >>> -------------------------------------------------------------------------- >>> [[43019,1],0]: A high-performance Open MPI >>> point-to-point messaging module >>> was unable to find any relevant network interfaces: >>> >>> Module: OpenFabrics (openib) >>> Host: dornroeschen.igpm.rwth-aachen.de >>> >>> CMA: unable to get RDMA device list >>> >>> It looks like your MPI is either broken or some of the code >>> linked into >>> your application was compiled with a different MPI or >>> different version. 
>>> Make sure you can compile and run simple MPI programs in >>> parallel. >>> >>> Hello Jed, >>> >>> thank you for your inputs. Unfortunately MPI does not seem to be >>> the issue here. The attachment contains a simple MPI hello world >>> program which runs flawlessly (I will append the output to this >>> mail) and I have not encountered any problems with other MPI >>> programs. My question still stands. >>> >>> >>> This is a simple error. You created the matrix A using >>> PETSC_COMM_WORLD, but you try to view it >>> using PETSC_VIEWER_STDOUT_SELF. You need to use >>> PETSC_VIEWER_STDOUT_WORLD in >>> order to match. >>> >>> Thanks, >>> >>> Matt >>> >>> Greetings, >>> Niklas Fischer >>> >>> mpirun -np 2 ./mpitest >>> >>> librdmacm: couldn't read ABI version. >>> librdmacm: assuming: 4 >>> CMA: unable to get RDMA device list >>> -------------------------------------------------------------------------- >>> [[44086,1],0]: A high-performance Open MPI point-to-point >>> messaging module >>> was unable to find any relevant network interfaces: >>> >>> Module: OpenFabrics (openib) >>> Host: dornroeschen.igpm.rwth-aachen.de >>> >>> >>> Another transport will be used instead, although this may result in >>> lower performance. >>> -------------------------------------------------------------------------- >>> librdmacm: couldn't read ABI version. >>> librdmacm: assuming: 4 >>> CMA: unable to get RDMA device list >>> Hello world from processor dornroeschen.igpm.rwth-aachen.de >>> , rank 0 out of 2 >>> processors >>> Hello world from processor dornroeschen.igpm.rwth-aachen.de >>> , rank 1 out of 2 >>> processors >>> [dornroeschen.igpm.rwth-aachen.de:128141 >>> ] 1 more process >>> has sent help message help-mpi-btl-base.txt / btl:no-nics >>> [dornroeschen.igpm.rwth-aachen.de:128141 >>> ] Set MCA >>> parameter "orte_base_help_aggregate" to 0 to see all help / >>> error messages >>> >>> >> Thank you, Matthew, this solves my viewing problem. Am I doing >> something wrong when initializing the matrices as well? The matrix' >> viewing output starts with "Matrix Object: 1 MPI processes" and the >> Krylov solver does not converge. >> >> Your help is really appreciated, >> Niklas Fischer >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> -- Norbert Wiener >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sudheer3 at gmail.com Tue Apr 22 10:44:21 2014 From: sudheer3 at gmail.com (Sudheer Kumar) Date: Tue, 22 Apr 2014 21:14:21 +0530 Subject: [petsc-users] MATLAB and PetSc Message-ID: I request for advice on the following: I have a MATLAB code that uses some Linear algebra (LA) and Linear equation solver (LS) routines. I have to speed up this MATLAB code by using the i) MKL routines for LA calls and ii) Petsc routines for LS calls. 
I was able to achieve the step (i) using the following commands icc -c -I/work/apps/matlab/2013a/extern/include -I/work/apps/matlab/2013a/simulink/include -DMATLAB_MEX_FILE -ansi -D_GNU_SOURCE -fexceptions -fPIC -fno-omit-frame-pointer -pthread -DMX_COMPAT_64 -O -DNDEBUG "matrixMultiply.c" icc -O -pthread -shared -Wl,--version-script,/work/apps/matlab/2013a/extern/lib/glnxa64/mexFunction.map -Wl,--no-undefined -o "matrixMultiply.mexa64" matrixMultiply.o -Wl,-rpath-link,/work/apps/matlab/2013a/bin/glnxa64 -L/work/apps/matlab/2013a/bin/glnxa64 -lmx -lmex -lmat -mkl -lm -lstdc++ The code with dgemm call runs fine. However, how to achieve the step (ii), i.e., calling the PetSc routing from MATLAB executables compiled with mex. Please provide if you are aware of any sample codes. I wish to eventually achieve all the above functionality for the Octave code as well. I thought of trying first with MATLAB as I thought the interface and support for external library support is better with it. Please do let me know your suggestions. Thanks. Regards Sudheer -------------- next part -------------- An HTML attachment was scrubbed... URL: From fischega at westinghouse.com Tue Apr 22 10:56:55 2014 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Tue, 22 Apr 2014 11:56:55 -0400 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: From: Peter Brune [mailto:prbrune at gmail.com] Sent: Tuesday, April 22, 2014 10:16 AM To: Fischer, Greg A. Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SNES: approximating the Jacobian with computed residuals? On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. > wrote: Hello PETSc-users, I'm using the SNES component with the NGMRES method in my application. I'm using a matrix-free context for the Jacobian and the MatMFFDComputeJacobian() function in my FormJacobian routine. My understanding is that this effectively approximates the Jacobian using the equation at the bottom of Page 103 in the PETSc User's Manual. This works, but the expense of computing two function evaluations in each SNES iteration nearly wipes out the performance improvements over Picard iteration. Try -snes_type anderson. It's less stable than NGMRES, but requires one function evaluation per iteration. The manual is out of date. I guess it's time to fix that. It's interesting that the cost of matrix assembly and a linear solve is around the same as that of a function evaluation. Output from -log_summary would help in the diagnosis. I tried the ?snes_type anderson option, and it seems to be requiring even more function evaluations than the Picard iterations. I?ve attached ?log_summary output. This seems strange, because I can use the NLKAIN code (http://nlkain.sourceforge.net/) to fairly good effect, and I?ve read that it?s related to Anderson mixing. Would it be useful to adjust the parameters? I?ve also attached ?log_summary output for NGMRES. Does anything jump out as being amiss? Based on my (limited) understanding of the Oosterlee/Washio SIAM paper ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to suggest that it's possible to approximate the Jacobian with a series of previously-computed residuals (eq 2.14), rather than additional function evaluations in each iteration. Is this correct? If so, could someone point me to a reference that demonstrates how to do this with PETSc? What indication do you have that the Jacobian is calculated at all in the NGMRES method? 
The two function evaluations are related to computing the quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian approximation for the minimization problem (2.14). - Peter Thanks for the clarification. -Greg Or, perhaps a better question to ask is: are there other ways of reducing the computing burden associated with estimating the Jacobian? Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: anderson_log_summary.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ngmres_log_summary.txt URL: From bsmith at mcs.anl.gov Tue Apr 22 11:08:45 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Apr 2014 11:08:45 -0500 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: <53568954.7030604@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <53565CB3.8030604@niklasfi.de> <535667BC.10607@niklasfi.de> <53568954.7030604@niklasfi.de> Message-ID: On Apr 22, 2014, at 10:23 AM, Niklas Fischer wrote: > I have tracked down the problem further and it basically boils down to the following question: Is it possible to use MatSetValue(s) to set values which are owned by other processes? Yes it is certainly possible. You should not set a large percent of the values on the ?wrong? process but setting some is fine. The values will also be added together if you use ADD_VALUES. Below have you called the MatAssemblyBegin/End after setting all the values? > > If I create a matrix with > > for(int i = 0; i < 4 * size; ++i){ > CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); > } > > for n=m=4, on four processes, one would expect each entry to be 1 + 2 + 3 + 4 = 10, however, PETSc prints > Matrix Object: 1 MPI processes > type: mpiaij > row 0: (0, 1) > row 1: (1, 1) > row 2: (2, 1) > row 3: (3, 1) > row 4: (4, 2) > row 5: (5, 2) > row 6: (6, 2) > row 7: (7, 2) > row 8: (8, 3) > row 9: (9, 3) > row 10: (10, 3) > row 11: (11, 3) > row 12: (12, 4) > row 13: (13, 4) > row 14: (14, 4) > row 15: (15, 4) > which is exactly, what > > CHKERRXX(VecGetOwnershipRange(x, &ownership_start, &ownership_end)); > for(int i = ownership_start; i < ownership_end; ++i){ > CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); > } > would give us. > > Kind regards, > Niklas Fischer > > Am 22.04.2014 14:59, schrieb Niklas Fischer: >> I should probably note that everything is fine if I run the serial version of this (with the exact same matrix + right hand side). >> >> PETSc KSPSolve done, residual norm: 3.13459e-13, it took 6 iterations. >> >> Am 22.04.2014 14:12, schrieb Niklas Fischer: >>> >>> Am 22.04.2014 13:57, schrieb Matthew Knepley: >>>> On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer wrote: >>>> Am 22.04.2014 13:08, schrieb Jed Brown: >>>> Niklas Fischer writes: >>>> >>>> Hello, >>>> >>>> I have attached a small test case for a problem I am experiencing. What >>>> this dummy program does is it reads a vector and a matrix from a text >>>> file and then solves Ax=b. 
The same data is available in two forms: >>>> - everything is in one file (matops.s.0 and vops.s.0) >>>> - the matrix and vector are split between processes (matops.0, >>>> matops.1, vops.0, vops.1) >>>> >>>> The serial version of the program works perfectly fine but unfortunately >>>> errors occure, when running the parallel version: >>>> >>>> make && mpirun -n 2 a.out matops vops >>>> >>>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem >>>> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall >>>> -Wpedantic -std=c++11 -L >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc >>>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>> may conflict with libmpi_cxx.so.1 >>>> /usr/bin/ld: warning: libmpi.so.0, needed by >>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>> may conflict with libmpi.so.1 >>>> librdmacm: couldn't read ABI version. >>>> librdmacm: assuming: 4 >>>> CMA: unable to get RDMA device list >>>> -------------------------------------------------------------------------- >>>> [[43019,1],0]: A high-performance Open MPI point-to-point messaging module >>>> was unable to find any relevant network interfaces: >>>> >>>> Module: OpenFabrics (openib) >>>> Host: dornroeschen.igpm.rwth-aachen.de >>>> CMA: unable to get RDMA device list >>>> It looks like your MPI is either broken or some of the code linked into >>>> your application was compiled with a different MPI or different version. >>>> Make sure you can compile and run simple MPI programs in parallel. >>>> Hello Jed, >>>> >>>> thank you for your inputs. Unfortunately MPI does not seem to be the issue here. The attachment contains a simple MPI hello world program which runs flawlessly (I will append the output to this mail) and I have not encountered any problems with other MPI programs. My question still stands. >>>> >>>> This is a simple error. You created the matrix A using PETSC_COMM_WORLD, but you try to view it >>>> using PETSC_VIEWER_STDOUT_SELF. You need to use PETSC_VIEWER_STDOUT_WORLD in >>>> order to match. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> Greetings, >>>> Niklas Fischer >>>> >>>> mpirun -np 2 ./mpitest >>>> >>>> librdmacm: couldn't read ABI version. >>>> librdmacm: assuming: 4 >>>> CMA: unable to get RDMA device list >>>> -------------------------------------------------------------------------- >>>> [[44086,1],0]: A high-performance Open MPI point-to-point messaging module >>>> was unable to find any relevant network interfaces: >>>> >>>> Module: OpenFabrics (openib) >>>> Host: dornroeschen.igpm.rwth-aachen.de >>>> >>>> Another transport will be used instead, although this may result in >>>> lower performance. >>>> -------------------------------------------------------------------------- >>>> librdmacm: couldn't read ABI version. >>>> librdmacm: assuming: 4 >>>> CMA: unable to get RDMA device list >>>> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out of 2 processors >>>> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out of 2 processors >>>> [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics >>>> [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>> >>> Thank you, Matthew, this solves my viewing problem. 
Am I doing something wrong when initializing the matrices as well? The matrix' viewing output starts with "Matrix Object: 1 MPI processes" and the Krylov solver does not converge. >>> >>> Your help is really appreciated, >>> Niklas Fischer >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >> > From bsmith at mcs.anl.gov Tue Apr 22 11:11:46 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Apr 2014 11:11:46 -0500 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: References: Message-ID: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> On Apr 22, 2014, at 10:44 AM, Sudheer Kumar wrote: > I request for advice on the following: > > I have a MATLAB code that uses some Linear algebra (LA) and Linear equation solver (LS) routines. I have to speed up this MATLAB code by using the i) MKL routines for LA calls and ii) Petsc routines for LS calls. > > I was able to achieve the step (i) using the following commands > > icc -c -I/work/apps/matlab/2013a/extern/include -I/work/apps/matlab/2013a/simulink/include -DMATLAB_MEX_FILE -ansi -D_GNU_SOURCE -fexceptions -fPIC -fno-omit-frame-pointer -pthread -DMX_COMPAT_64 -O -DNDEBUG "matrixMultiply.c" > > icc -O -pthread -shared -Wl,--version-script,/work/apps/matlab/2013a/extern/lib/glnxa64/mexFunction.map -Wl,--no-undefined -o "matrixMultiply.mexa64" matrixMultiply.o -Wl,-rpath-link,/work/apps/matlab/2013a/bin/glnxa64 -L/work/apps/matlab/2013a/bin/glnxa64 -lmx -lmex -lmat -mkl -lm -lstdc++ > > > > The code with dgemm call runs fine. However, how to achieve the step (ii), i.e., calling the PetSc routing from MATLAB executables compiled with mex. Please provide if you are aware of any sample codes. This is a great deal of work and probably not worth it. Note that MATLAB already uses MKL internally. How much Matlab code do you have that computes the Jacobian and functions and is it parallel? Recommend just starting from scratch with a parallel PETSc code in C or FORTRAN than you can run problems as big as any machine you can access. Barry > > > > I wish to eventually achieve all the above functionality for the Octave code as well. I thought of trying first with MATLAB as I thought the interface and support for external library support is better with it. Please do let me know your suggestions. Thanks. > > Regards > Sudheer From niklas at niklasfi.de Tue Apr 22 11:32:40 2014 From: niklas at niklasfi.de (Niklas Fischer) Date: Tue, 22 Apr 2014 18:32:40 +0200 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! In-Reply-To: References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <53565CB3.8030604@niklasfi.de> <535667BC.10607@niklasfi.de> <53568954.7030604@niklasfi.de> Message-ID: <535699A8.2090808@niklasfi.de> Hello Barry, Am 22.04.2014 18:08, schrieb Barry Smith: > On Apr 22, 2014, at 10:23 AM, Niklas Fischer wrote: > >> I have tracked down the problem further and it basically boils down to the following question: Is it possible to use MatSetValue(s) to set values which are owned by other processes? > Yes it is certainly possible. You should not set a large percent of the values on the ?wrong? process but setting some is fine. The values will also be added together if you use ADD_VALUES. > > Below have you called the MatAssemblyBegin/End after setting all the values? 
It certainly is AssemblyBegin first, then set values, then AssemblyEnd CHKERRXX(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); CHKERRXX(VecAssemblyBegin(b)); for(int i = 0; i < 4 * size; ++i){ CHKERRXX(VecSetValue(b, i, 1, ADD_VALUES)); } for(int i = 0; i < 4 * size; ++i){ CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); } CHKERRXX(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); CHKERRXX(VecAssemblyEnd(b)); My observation, setting the values does not work, also ties in with the solution given by the solver which is the result of Diag [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4]x=Constant(1,16) Process [0] 1 1 1 1 Process [1] 0.5 0.5 0.5 0.5 Process [2] 0.333333 0.333333 0.333333 0.333333 Process [3] 0.25 0.25 0.25 0.25 > >> If I create a matrix with >> >> for(int i = 0; i < 4 * size; ++i){ >> CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); >> } >> >> for n=m=4, on four processes, one would expect each entry to be 1 + 2 + 3 + 4 = 10, however, PETSc prints >> Matrix Object: 1 MPI processes >> type: mpiaij >> row 0: (0, 1) >> row 1: (1, 1) >> row 2: (2, 1) >> row 3: (3, 1) >> row 4: (4, 2) >> row 5: (5, 2) >> row 6: (6, 2) >> row 7: (7, 2) >> row 8: (8, 3) >> row 9: (9, 3) >> row 10: (10, 3) >> row 11: (11, 3) >> row 12: (12, 4) >> row 13: (13, 4) >> row 14: (14, 4) >> row 15: (15, 4) >> which is exactly, what >> >> CHKERRXX(VecGetOwnershipRange(x, &ownership_start, &ownership_end)); >> for(int i = ownership_start; i < ownership_end; ++i){ >> CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); >> } >> would give us. >> >> Kind regards, >> Niklas Fischer >> >> Am 22.04.2014 14:59, schrieb Niklas Fischer: >>> I should probably note that everything is fine if I run the serial version of this (with the exact same matrix + right hand side). >>> >>> PETSc KSPSolve done, residual norm: 3.13459e-13, it took 6 iterations. >>> >>> Am 22.04.2014 14:12, schrieb Niklas Fischer: >>>> Am 22.04.2014 13:57, schrieb Matthew Knepley: >>>>> On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer wrote: >>>>> Am 22.04.2014 13:08, schrieb Jed Brown: >>>>> Niklas Fischer writes: >>>>> >>>>> Hello, >>>>> >>>>> I have attached a small test case for a problem I am experiencing. What >>>>> this dummy program does is it reads a vector and a matrix from a text >>>>> file and then solves Ax=b. The same data is available in two forms: >>>>> - everything is in one file (matops.s.0 and vops.s.0) >>>>> - the matrix and vector are split between processes (matops.0, >>>>> matops.1, vops.0, vops.1) >>>>> >>>>> The serial version of the program works perfectly fine but unfortunately >>>>> errors occure, when running the parallel version: >>>>> >>>>> make && mpirun -n 2 a.out matops vops >>>>> >>>>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem >>>>> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall >>>>> -Wpedantic -std=c++11 -L >>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc >>>>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>>> may conflict with libmpi_cxx.so.1 >>>>> /usr/bin/ld: warning: libmpi.so.0, needed by >>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>>> may conflict with libmpi.so.1 >>>>> librdmacm: couldn't read ABI version. 
>>>>> librdmacm: assuming: 4 >>>>> CMA: unable to get RDMA device list >>>>> -------------------------------------------------------------------------- >>>>> [[43019,1],0]: A high-performance Open MPI point-to-point messaging module >>>>> was unable to find any relevant network interfaces: >>>>> >>>>> Module: OpenFabrics (openib) >>>>> Host: dornroeschen.igpm.rwth-aachen.de >>>>> CMA: unable to get RDMA device list >>>>> It looks like your MPI is either broken or some of the code linked into >>>>> your application was compiled with a different MPI or different version. >>>>> Make sure you can compile and run simple MPI programs in parallel. >>>>> Hello Jed, >>>>> >>>>> thank you for your inputs. Unfortunately MPI does not seem to be the issue here. The attachment contains a simple MPI hello world program which runs flawlessly (I will append the output to this mail) and I have not encountered any problems with other MPI programs. My question still stands. >>>>> >>>>> This is a simple error. You created the matrix A using PETSC_COMM_WORLD, but you try to view it >>>>> using PETSC_VIEWER_STDOUT_SELF. You need to use PETSC_VIEWER_STDOUT_WORLD in >>>>> order to match. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> Greetings, >>>>> Niklas Fischer >>>>> >>>>> mpirun -np 2 ./mpitest >>>>> >>>>> librdmacm: couldn't read ABI version. >>>>> librdmacm: assuming: 4 >>>>> CMA: unable to get RDMA device list >>>>> -------------------------------------------------------------------------- >>>>> [[44086,1],0]: A high-performance Open MPI point-to-point messaging module >>>>> was unable to find any relevant network interfaces: >>>>> >>>>> Module: OpenFabrics (openib) >>>>> Host: dornroeschen.igpm.rwth-aachen.de >>>>> >>>>> Another transport will be used instead, although this may result in >>>>> lower performance. >>>>> -------------------------------------------------------------------------- >>>>> librdmacm: couldn't read ABI version. >>>>> librdmacm: assuming: 4 >>>>> CMA: unable to get RDMA device list >>>>> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out of 2 processors >>>>> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out of 2 processors >>>>> [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics >>>>> [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>> >>>> Thank you, Matthew, this solves my viewing problem. Am I doing something wrong when initializing the matrices as well? The matrix' viewing output starts with "Matrix Object: 1 MPI processes" and the Krylov solver does not converge. >>>> >>>> Your help is really appreciated, >>>> Niklas Fischer >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener From bsmith at mcs.anl.gov Tue Apr 22 11:35:48 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Apr 2014 11:35:48 -0500 Subject: [petsc-users] [beginner question] Different communicators in the two objects: Argument # 1 and 2 flag 3! 
In-Reply-To: <535699A8.2090808@niklasfi.de> References: <5356367A.3090906@niklasfi.de> <87k3ahboz9.fsf@jedbrown.org> <535656FC.5010507@niklasfi.de> <53565CB3.8030604@niklasfi.de> <535667BC.10607@niklasfi.de> <53568954.7030604@niklasfi.de> <535699A8.2090808@niklasfi.de> Message-ID: Call BOTH the AssemblyBegin() and End AFTER you have called the set values routines. Barry Yes, you could argue it is confusing terminology. On Apr 22, 2014, at 11:32 AM, Niklas Fischer wrote: > Hello Barry, > > Am 22.04.2014 18:08, schrieb Barry Smith: >> On Apr 22, 2014, at 10:23 AM, Niklas Fischer wrote: >> >>> I have tracked down the problem further and it basically boils down to the following question: Is it possible to use MatSetValue(s) to set values which are owned by other processes? >> Yes it is certainly possible. You should not set a large percent of the values on the ?wrong? process but setting some is fine. The values will also be added together if you use ADD_VALUES. >> >> Below have you called the MatAssemblyBegin/End after setting all the values? > It certainly is AssemblyBegin first, then set values, then AssemblyEnd > > CHKERRXX(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > CHKERRXX(VecAssemblyBegin(b)); > > > for(int i = 0; i < 4 * size; ++i){ > CHKERRXX(VecSetValue(b, i, 1, ADD_VALUES)); > } > > for(int i = 0; i < 4 * size; ++i){ > CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); > } > > CHKERRXX(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > CHKERRXX(VecAssemblyEnd(b)); > > My observation, setting the values does not work, also ties in with the solution given by the solver which is the result of > > Diag [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4]x=Constant(1,16) > > Process [0] > 1 > 1 > 1 > 1 > Process [1] > 0.5 > 0.5 > 0.5 > 0.5 > Process [2] > 0.333333 > 0.333333 > 0.333333 > 0.333333 > Process [3] > 0.25 > 0.25 > 0.25 > 0.25 >> >>> If I create a matrix with >>> >>> for(int i = 0; i < 4 * size; ++i){ >>> CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); >>> } >>> >>> for n=m=4, on four processes, one would expect each entry to be 1 + 2 + 3 + 4 = 10, however, PETSc prints >>> Matrix Object: 1 MPI processes >>> type: mpiaij >>> row 0: (0, 1) >>> row 1: (1, 1) >>> row 2: (2, 1) >>> row 3: (3, 1) >>> row 4: (4, 2) >>> row 5: (5, 2) >>> row 6: (6, 2) >>> row 7: (7, 2) >>> row 8: (8, 3) >>> row 9: (9, 3) >>> row 10: (10, 3) >>> row 11: (11, 3) >>> row 12: (12, 4) >>> row 13: (13, 4) >>> row 14: (14, 4) >>> row 15: (15, 4) >>> which is exactly, what >>> >>> CHKERRXX(VecGetOwnershipRange(x, &ownership_start, &ownership_end)); >>> for(int i = ownership_start; i < ownership_end; ++i){ >>> CHKERRXX(MatSetValue(A, i, i, rank+1, ADD_VALUES)); >>> } >>> would give us. >>> >>> Kind regards, >>> Niklas Fischer >>> >>> Am 22.04.2014 14:59, schrieb Niklas Fischer: >>>> I should probably note that everything is fine if I run the serial version of this (with the exact same matrix + right hand side). >>>> >>>> PETSc KSPSolve done, residual norm: 3.13459e-13, it took 6 iterations. >>>> >>>> Am 22.04.2014 14:12, schrieb Niklas Fischer: >>>>> Am 22.04.2014 13:57, schrieb Matthew Knepley: >>>>>> On Tue, Apr 22, 2014 at 6:48 AM, Niklas Fischer wrote: >>>>>> Am 22.04.2014 13:08, schrieb Jed Brown: >>>>>> Niklas Fischer writes: >>>>>> >>>>>> Hello, >>>>>> >>>>>> I have attached a small test case for a problem I am experiencing. What >>>>>> this dummy program does is it reads a vector and a matrix from a text >>>>>> file and then solves Ax=b. 
The same data is available in two forms: >>>>>> - everything is in one file (matops.s.0 and vops.s.0) >>>>>> - the matrix and vector are split between processes (matops.0, >>>>>> matops.1, vops.0, vops.1) >>>>>> >>>>>> The serial version of the program works perfectly fine but unfortunately >>>>>> errors occure, when running the parallel version: >>>>>> >>>>>> make && mpirun -n 2 a.out matops vops >>>>>> >>>>>> mpic++ -DPETSC_CLANGUAGE_CXX -isystem >>>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/include -isystem >>>>>> /home/data/fischer/libs/petsc-3.4.3/include petsctest.cpp -Werror -Wall >>>>>> -Wpedantic -std=c++11 -L >>>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib -lpetsc >>>>>> /usr/bin/ld: warning: libmpi_cxx.so.0, needed by >>>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>>>> may conflict with libmpi_cxx.so.1 >>>>>> /usr/bin/ld: warning: libmpi.so.0, needed by >>>>>> /home/data/fischer/libs/petsc-3.4.3/arch-linux2-c-debug/lib/libpetsc.so, >>>>>> may conflict with libmpi.so.1 >>>>>> librdmacm: couldn't read ABI version. >>>>>> librdmacm: assuming: 4 >>>>>> CMA: unable to get RDMA device list >>>>>> -------------------------------------------------------------------------- >>>>>> [[43019,1],0]: A high-performance Open MPI point-to-point messaging module >>>>>> was unable to find any relevant network interfaces: >>>>>> >>>>>> Module: OpenFabrics (openib) >>>>>> Host: dornroeschen.igpm.rwth-aachen.de >>>>>> CMA: unable to get RDMA device list >>>>>> It looks like your MPI is either broken or some of the code linked into >>>>>> your application was compiled with a different MPI or different version. >>>>>> Make sure you can compile and run simple MPI programs in parallel. >>>>>> Hello Jed, >>>>>> >>>>>> thank you for your inputs. Unfortunately MPI does not seem to be the issue here. The attachment contains a simple MPI hello world program which runs flawlessly (I will append the output to this mail) and I have not encountered any problems with other MPI programs. My question still stands. >>>>>> >>>>>> This is a simple error. You created the matrix A using PETSC_COMM_WORLD, but you try to view it >>>>>> using PETSC_VIEWER_STDOUT_SELF. You need to use PETSC_VIEWER_STDOUT_WORLD in >>>>>> order to match. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> Greetings, >>>>>> Niklas Fischer >>>>>> >>>>>> mpirun -np 2 ./mpitest >>>>>> >>>>>> librdmacm: couldn't read ABI version. >>>>>> librdmacm: assuming: 4 >>>>>> CMA: unable to get RDMA device list >>>>>> -------------------------------------------------------------------------- >>>>>> [[44086,1],0]: A high-performance Open MPI point-to-point messaging module >>>>>> was unable to find any relevant network interfaces: >>>>>> >>>>>> Module: OpenFabrics (openib) >>>>>> Host: dornroeschen.igpm.rwth-aachen.de >>>>>> >>>>>> Another transport will be used instead, although this may result in >>>>>> lower performance. >>>>>> -------------------------------------------------------------------------- >>>>>> librdmacm: couldn't read ABI version. 
>>>>>> librdmacm: assuming: 4 >>>>>> CMA: unable to get RDMA device list >>>>>> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 0 out of 2 processors >>>>>> Hello world from processor dornroeschen.igpm.rwth-aachen.de, rank 1 out of 2 processors >>>>>> [dornroeschen.igpm.rwth-aachen.de:128141] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics >>>>>> [dornroeschen.igpm.rwth-aachen.de:128141] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>> >>>>> Thank you, Matthew, this solves my viewing problem. Am I doing something wrong when initializing the matrices as well? The matrix' viewing output starts with "Matrix Object: 1 MPI processes" and the Krylov solver does not converge. >>>>> >>>>> Your help is really appreciated, >>>>> Niklas Fischer >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>> -- Norbert Wiener > From prbrune at gmail.com Tue Apr 22 11:43:51 2014 From: prbrune at gmail.com (Peter Brune) Date: Tue, 22 Apr 2014 11:43:51 -0500 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: On Tue, Apr 22, 2014 at 10:56 AM, Fischer, Greg A. < fischega at westinghouse.com> wrote: > > > *From:* Peter Brune [mailto:prbrune at gmail.com] > *Sent:* Tuesday, April 22, 2014 10:16 AM > *To:* Fischer, Greg A. > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] SNES: approximating the Jacobian with > computed residuals? > > > > On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. < > fischega at westinghouse.com> wrote: > > Hello PETSc-users, > > I'm using the SNES component with the NGMRES method in my application. I'm > using a matrix-free context for the Jacobian and the > MatMFFDComputeJacobian() function in my FormJacobian routine. My > understanding is that this effectively approximates the Jacobian using the > equation at the bottom of Page 103 in the PETSc User's Manual. This works, > but the expense of computing two function evaluations in each SNES > iteration nearly wipes out the performance improvements over Picard > iteration. > > > > Try -snes_type anderson. It's less stable than NGMRES, but requires one > function evaluation per iteration. The manual is out of date. I guess > it's time to fix that. It's interesting that the cost of matrix assembly > and a linear solve is around the same as that of a function evaluation. > Output from -log_summary would help in the diagnosis. > > > > I tried the ?snes_type anderson option, and it seems to be requiring even > more function evaluations than the Picard iterations. I?ve attached > ?log_summary output. This seems strange, because I can use the NLKAIN code ( > http://nlkain.sourceforge.net/) to fairly good effect, and I?ve read that > it?s related to Anderson mixing. Would it be useful to adjust the > parameters? > If I recall correctly, NLKAIN is yet another improvement on Anderson Mixing. Our NGMRES is what's in O/W and is built largely around being nonlinearly preconditionable with something strong like FAS. What is the perceived difference in convergence? (what does -snes_monitor say?) Any multitude of tolerances may be different between the two methods, and it's hard to judge without knowing much, much more. Seeing what happens when one changes the parameters is of course important if you're looking at performance. 
By Picard, you mean simple fixed-point iteration, right? What constitutes a Picard iteration is a longstanding argument on this list and therefore requires clarification, unfortunately. :) This (without linesearch) can be duplicated in PETSc with -snes_type nrichardson -snes_linesearch_type basic. For a typical problem one must damp this with -snes_linesearch_damping That's what the linesearch is there to avoid, but this takes more function evaluations. > > > I?ve also attached ?log_summary output for NGMRES. Does anything jump out > as being amiss? > ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## Timing comparisons aren't reasonable with debugging on. > > > > > Based on my (limited) understanding of the Oosterlee/Washio SIAM paper > ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to > suggest that it's possible to approximate the Jacobian with a series of > previously-computed residuals (eq 2.14), rather than additional function > evaluations in each iteration. Is this correct? If so, could someone point > me to a reference that demonstrates how to do this with PETSc? > > > > What indication do you have that the Jacobian is calculated at all in the > NGMRES method? The two function evaluations are related to computing the > quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian > approximation for the minimization problem (2.14). > > > > - Peter > > > > Thanks for the clarification. > > > > -Greg > > > > > Or, perhaps a better question to ask is: are there other ways of reducing > the computing burden associated with estimating the Jacobian? > > Thanks, > Greg > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prbrune at gmail.com Tue Apr 22 11:47:39 2014 From: prbrune at gmail.com (Peter Brune) Date: Tue, 22 Apr 2014 11:47:39 -0500 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: -snes_view would also be useful for basic santity checks. On Tue, Apr 22, 2014 at 11:43 AM, Peter Brune wrote: > > > > On Tue, Apr 22, 2014 at 10:56 AM, Fischer, Greg A. < > fischega at westinghouse.com> wrote: > >> >> >> *From:* Peter Brune [mailto:prbrune at gmail.com] >> *Sent:* Tuesday, April 22, 2014 10:16 AM >> *To:* Fischer, Greg A. >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] SNES: approximating the Jacobian with >> computed residuals? >> >> >> >> On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. < >> fischega at westinghouse.com> wrote: >> >> Hello PETSc-users, >> >> I'm using the SNES component with the NGMRES method in my application. >> I'm using a matrix-free context for the Jacobian and the >> MatMFFDComputeJacobian() function in my FormJacobian routine. My >> understanding is that this effectively approximates the Jacobian using the >> equation at the bottom of Page 103 in the PETSc User's Manual. This works, >> but the expense of computing two function evaluations in each SNES >> iteration nearly wipes out the performance improvements over Picard >> iteration. >> >> >> >> Try -snes_type anderson. It's less stable than NGMRES, but requires one >> function evaluation per iteration. The manual is out of date. I guess >> it's time to fix that. 
It's interesting that the cost of matrix assembly >> and a linear solve is around the same as that of a function evaluation. >> Output from -log_summary would help in the diagnosis. >> >> >> >> I tried the ?snes_type anderson option, and it seems to be requiring even >> more function evaluations than the Picard iterations. I?ve attached >> ?log_summary output. This seems strange, because I can use the NLKAIN code ( >> http://nlkain.sourceforge.net/) to fairly good effect, and I?ve read >> that it?s related to Anderson mixing. Would it be useful to adjust the >> parameters? >> > > If I recall correctly, NLKAIN is yet another improvement on Anderson > Mixing. Our NGMRES is what's in O/W and is built largely around being > nonlinearly preconditionable with something strong like FAS. What is the > perceived difference in convergence? (what does -snes_monitor say?) Any > multitude of tolerances may be different between the two methods, and it's > hard to judge without knowing much, much more. Seeing what happens when > one changes the parameters is of course important if you're looking at > performance. > > By Picard, you mean simple fixed-point iteration, right? What constitutes > a Picard iteration is a longstanding argument on this list and therefore > requires clarification, unfortunately. :) This (without linesearch) can be > duplicated in PETSc with -snes_type nrichardson -snes_linesearch_type > basic. For a typical problem one must damp this with > -snes_linesearch_damping That's what the linesearch is > there to avoid, but this takes more function evaluations. > > >> >> >> I?ve also attached ?log_summary output for NGMRES. Does anything jump out >> as being amiss? >> > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. # > # # > ########################################################## > > Timing comparisons aren't reasonable with debugging on. > > >> >> >> >> >> Based on my (limited) understanding of the Oosterlee/Washio SIAM paper >> ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to >> suggest that it's possible to approximate the Jacobian with a series of >> previously-computed residuals (eq 2.14), rather than additional function >> evaluations in each iteration. Is this correct? If so, could someone point >> me to a reference that demonstrates how to do this with PETSc? >> >> >> >> What indication do you have that the Jacobian is calculated at all in the >> NGMRES method? The two function evaluations are related to computing the >> quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian >> approximation for the minimization problem (2.14). >> >> >> >> - Peter >> >> >> >> Thanks for the clarification. >> >> >> >> -Greg >> >> >> >> >> Or, perhaps a better question to ask is: are there other ways of reducing >> the computing burden associated with estimating the Jacobian? >> >> Thanks, >> Greg >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sudheer3 at gmail.com Tue Apr 22 12:19:42 2014 From: sudheer3 at gmail.com (Sudheer Kumar) Date: Tue, 22 Apr 2014 22:49:42 +0530 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> References: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> Message-ID: On Tue, Apr 22, 2014 at 9:41 PM, Barry Smith wrote: > How much Matlab code do you have that computes the Jacobian and functions > and is it parallel? How much Matlab code do you have that computes the Jacobian and functions and is it parallel? The MATLAB code that I am working with is of 1000s lines of code (reservoir simulator, http://www.sintef.no/Projectweb/MRST/). The 90% of the time taken by the code on benchmarks is for linear solvers. And that's the reason to look for PetSc to speedup the entire code. >> This is a great deal of work and probably not worth it. Ok, unfortunately here I guess I need to attempt to do that work. Do let me know your suggestions. Thanks. Regards Sudheer -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 22 12:49:18 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Apr 2014 12:49:18 -0500 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: References: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> Message-ID: <15AAED25-B41B-434A-801E-E3435A235A38@mcs.anl.gov> PETSc does have a MATLAB interface in bin/matlab/classes that provides most of PETSc?s functionality directly to Matlab. Unfortunately we do not have the resources to maintain it, so it is a bit broken. If you are familiar with MATLAB and interfacing with C then git it a try and we can provide a small amount of support to help you use it, but as I said we cannot make it perfect ourselves. Barry On Apr 22, 2014, at 12:19 PM, Sudheer Kumar wrote: > > On Tue, Apr 22, 2014 at 9:41 PM, Barry Smith wrote: > How much Matlab code do you have that computes the Jacobian and functions and is it parallel? > > How much Matlab code do you have that computes the Jacobian and functions and is it parallel? > > The MATLAB code that I am working with is of 1000s lines of code (reservoir simulator, http://www.sintef.no/Projectweb/MRST/). The 90% of the time taken by the code on benchmarks is for linear solvers. And that's the reason to look for PetSc to speedup the entire code. > > >> This is a great deal of work and probably not worth it. > Ok, unfortunately here I guess I need to attempt to do that work. Do let me know your suggestions. Thanks. > > > Regards > Sudheer From bsmith at mcs.anl.gov Tue Apr 22 12:50:27 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Apr 2014 12:50:27 -0500 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: <8E2E5983-A900-4438-8581-A9414EC14868@mcs.anl.gov> On Apr 22, 2014, at 11:47 AM, Peter Brune wrote: > -snes_view would also be useful for basic santity checks. and -snes_monitor ? > > > On Tue, Apr 22, 2014 at 11:43 AM, Peter Brune wrote: > > > > On Tue, Apr 22, 2014 at 10:56 AM, Fischer, Greg A. wrote: > > > From: Peter Brune [mailto:prbrune at gmail.com] > Sent: Tuesday, April 22, 2014 10:16 AM > To: Fischer, Greg A. > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] SNES: approximating the Jacobian with computed residuals? > > > > On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. wrote: > > Hello PETSc-users, > > I'm using the SNES component with the NGMRES method in my application. 
I'm using a matrix-free context for the Jacobian and the MatMFFDComputeJacobian() function in my FormJacobian routine. My understanding is that this effectively approximates the Jacobian using the equation at the bottom of Page 103 in the PETSc User's Manual. This works, but the expense of computing two function evaluations in each SNES iteration nearly wipes out the performance improvements over Picard iteration. > > > > Try -snes_type anderson. It's less stable than NGMRES, but requires one function evaluation per iteration. The manual is out of date. I guess it's time to fix that. It's interesting that the cost of matrix assembly and a linear solve is around the same as that of a function evaluation. Output from -log_summary would help in the diagnosis. > > > > I tried the ?snes_type anderson option, and it seems to be requiring even more function evaluations than the Picard iterations. I?ve attached ?log_summary output. This seems strange, because I can use the NLKAIN code (http://nlkain.sourceforge.net/) to fairly good effect, and I?ve read that it?s related to Anderson mixing. Would it be useful to adjust the parameters? > > > If I recall correctly, NLKAIN is yet another improvement on Anderson Mixing. Our NGMRES is what's in O/W and is built largely around being nonlinearly preconditionable with something strong like FAS. What is the perceived difference in convergence? (what does -snes_monitor say?) Any multitude of tolerances may be different between the two methods, and it's hard to judge without knowing much, much more. Seeing what happens when one changes the parameters is of course important if you're looking at performance. > > By Picard, you mean simple fixed-point iteration, right? What constitutes a Picard iteration is a longstanding argument on this list and therefore requires clarification, unfortunately. :) This (without linesearch) can be duplicated in PETSc with -snes_type nrichardson -snes_linesearch_type basic. For a typical problem one must damp this with -snes_linesearch_damping That's what the linesearch is there to avoid, but this takes more function evaluations. > > > > > I?ve also attached ?log_summary output for NGMRES. Does anything jump out as being amiss? > > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. # > # # > ########################################################## > > Timing comparisons aren't reasonable with debugging on. > > > > > > > > Based on my (limited) understanding of the Oosterlee/Washio SIAM paper ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to suggest that it's possible to approximate the Jacobian with a series of previously-computed residuals (eq 2.14), rather than additional function evaluations in each iteration. Is this correct? If so, could someone point me to a reference that demonstrates how to do this with PETSc? > > > > What indication do you have that the Jacobian is calculated at all in the NGMRES method? The two function evaluations are related to computing the quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian approximation for the minimization problem (2.14). > > > > - Peter > > > > Thanks for the clarification. 
> > > > -Greg > > > > > Or, perhaps a better question to ask is: are there other ways of reducing the computing burden associated with estimating the Jacobian? > > Thanks, > Greg > > > > > From fischega at westinghouse.com Tue Apr 22 16:40:25 2014 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Tue, 22 Apr 2014 17:40:25 -0400 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: From: Peter Brune [mailto:prbrune at gmail.com] Sent: Tuesday, April 22, 2014 12:44 PM To: Fischer, Greg A. Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SNES: approximating the Jacobian with computed residuals? On Tue, Apr 22, 2014 at 10:56 AM, Fischer, Greg A. > wrote: From: Peter Brune [mailto:prbrune at gmail.com] Sent: Tuesday, April 22, 2014 10:16 AM To: Fischer, Greg A. Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SNES: approximating the Jacobian with computed residuals? On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. > wrote: Hello PETSc-users, I'm using the SNES component with the NGMRES method in my application. I'm using a matrix-free context for the Jacobian and the MatMFFDComputeJacobian() function in my FormJacobian routine. My understanding is that this effectively approximates the Jacobian using the equation at the bottom of Page 103 in the PETSc User's Manual. This works, but the expense of computing two function evaluations in each SNES iteration nearly wipes out the performance improvements over Picard iteration. Try -snes_type anderson. It's less stable than NGMRES, but requires one function evaluation per iteration. The manual is out of date. I guess it's time to fix that. It's interesting that the cost of matrix assembly and a linear solve is around the same as that of a function evaluation. Output from -log_summary would help in the diagnosis. I tried the ?snes_type anderson option, and it seems to be requiring even more function evaluations than the Picard iterations. I?ve attached ?log_summary output. This seems strange, because I can use the NLKAIN code (http://nlkain.sourceforge.net/) to fairly good effect, and I?ve read that it?s related to Anderson mixing. Would it be useful to adjust the parameters? If I recall correctly, NLKAIN is yet another improvement on Anderson Mixing. Our NGMRES is what's in O/W and is built largely around being nonlinearly preconditionable with something strong like FAS. What is the perceived difference in convergence? (what does -snes_monitor say?) Any multitude of tolerances may be different between the two methods, and it's hard to judge without knowing much, much more. Seeing what happens when one changes the parameters is of course important if you're looking at performance. I?m not looking to apply a preconditioner, so it sounds like perhaps NGMRES isn?t a good choice for this application. I tried a different problem (one that is larger and more realistic), and found the SNESAnderson performance to be substantially better. NLKAIN still converges faster, though. (NLKAIN: ~1350 function calls; SNESAnderson: ~1800 function calls; fixed-point: ~2550 function calls). The ?snes_monitor seems to indicate steadily decreasing function norms. The ?snes_anderson_monitor option doesn?t seem to produce any output. I?ve tried passing ?-snes_anderson_monitor? and ?-snes_anderson_monitor true? as options, but don?t see any output analogous to ?-snes_gmres_monitor?. What?s the correct way to pass that option? By Picard, you mean simple fixed-point iteration, right? 
What constitutes a Picard iteration is a longstanding argument on this list and therefore requires clarification, unfortunately. :) This (without linesearch) can be duplicated in PETSc with -snes_type nrichardson -snes_linesearch_type basic. For a typical problem one must damp this with -snes_linesearch_damping That's what the linesearch is there to avoid, but this takes more function evaluations. Yes, I mean fixed-point iteration. I?ve also attached ?log_summary output for NGMRES. Does anything jump out as being amiss? ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## Timing comparisons aren't reasonable with debugging on. Yes, I understand. At this point, I?m just comparing numbers of function evalautions. Based on my (limited) understanding of the Oosterlee/Washio SIAM paper ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to suggest that it's possible to approximate the Jacobian with a series of previously-computed residuals (eq 2.14), rather than additional function evaluations in each iteration. Is this correct? If so, could someone point me to a reference that demonstrates how to do this with PETSc? What indication do you have that the Jacobian is calculated at all in the NGMRES method? The two function evaluations are related to computing the quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian approximation for the minimization problem (2.14). - Peter Thanks for the clarification. -Greg Or, perhaps a better question to ask is: are there other ways of reducing the computing burden associated with estimating the Jacobian? Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From prbrune at gmail.com Tue Apr 22 16:58:47 2014 From: prbrune at gmail.com (Peter Brune) Date: Tue, 22 Apr 2014 16:58:47 -0500 Subject: [petsc-users] SNES: approximating the Jacobian with computed residuals? In-Reply-To: References: Message-ID: I accidentally forgot to reply to the list; resending. On Tue, Apr 22, 2014 at 4:55 PM, Peter Brune wrote: > > > > On Tue, Apr 22, 2014 at 4:40 PM, Fischer, Greg A. < > fischega at westinghouse.com> wrote: > >> >> >> >> >> *From:* Peter Brune [mailto:prbrune at gmail.com] >> *Sent:* Tuesday, April 22, 2014 12:44 PM >> >> *To:* Fischer, Greg A. >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] SNES: approximating the Jacobian with >> computed residuals? >> >> >> >> >> >> >> >> On Tue, Apr 22, 2014 at 10:56 AM, Fischer, Greg A. < >> fischega at westinghouse.com> wrote: >> >> >> >> *From:* Peter Brune [mailto:prbrune at gmail.com] >> *Sent:* Tuesday, April 22, 2014 10:16 AM >> *To:* Fischer, Greg A. >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] SNES: approximating the Jacobian with >> computed residuals? >> >> >> >> On Tue, Apr 22, 2014 at 8:48 AM, Fischer, Greg A. < >> fischega at westinghouse.com> wrote: >> >> Hello PETSc-users, >> >> I'm using the SNES component with the NGMRES method in my application. >> I'm using a matrix-free context for the Jacobian and the >> MatMFFDComputeJacobian() function in my FormJacobian routine. My >> understanding is that this effectively approximates the Jacobian using the >> equation at the bottom of Page 103 in the PETSc User's Manual. 
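(For readers without the manual at hand: the equation referred to there is, in essence, the standard matrix-free finite-difference approximation F'(u) a ~= [F(u + h*a) - F(u)] / h for a small differencing parameter h, which is why every application of the approximate Jacobian costs an extra residual evaluation.)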
This works, >> but the expense of computing two function evaluations in each SNES >> iteration nearly wipes out the performance improvements over Picard >> iteration. >> >> >> >> Try -snes_type anderson. It's less stable than NGMRES, but requires one >> function evaluation per iteration. The manual is out of date. I guess >> it's time to fix that. It's interesting that the cost of matrix assembly >> and a linear solve is around the same as that of a function evaluation. >> Output from -log_summary would help in the diagnosis. >> >> >> >> I tried the ?snes_type anderson option, and it seems to be requiring even >> more function evaluations than the Picard iterations. I?ve attached >> ?log_summary output. This seems strange, because I can use the NLKAIN code ( >> http://nlkain.sourceforge.net/) to fairly good effect, and I?ve read >> that it?s related to Anderson mixing. Would it be useful to adjust the >> parameters? >> >> >> >> If I recall correctly, NLKAIN is yet another improvement on Anderson >> Mixing. Our NGMRES is what's in O/W and is built largely around being >> nonlinearly preconditionable with something strong like FAS. What is the >> perceived difference in convergence? (what does -snes_monitor say?) Any >> multitude of tolerances may be different between the two methods, and it's >> hard to judge without knowing much, much more. Seeing what happens when >> one changes the parameters is of course important if you're looking at >> performance. >> >> >> >> I?m not looking to apply a preconditioner, so it sounds like perhaps >> NGMRES isn?t a good choice for this application. >> >> >> > > It all depends on the characteristics of your problem. > > >> I tried a different problem (one that is larger and more realistic), >> and found the SNESAnderson performance to be substantially better. NLKAIN >> still converges faster, though. (NLKAIN: ~1350 function calls; >> SNESAnderson: ~1800 function calls; fixed-point: ~2550 function calls). >> The ?snes_monitor seems to indicate steadily decreasing function norms. >> >> >> > > Playing around with -snes_anderson_beta will in all likelihood yield some > performance improvements; perhaps significant. If I run > snes/examples/tutorials/ex5 with -snes_anderson_beta 1 and > snes_anderson_beta 0.1 the difference is 304 to 103 iterations. I may have > to go look and see what NLKAIN does for damping; most Anderson Mixing > implementations seem to do nothing complicated for this. Anything adaptive > would take more function evaluations. > > >> The ?snes_anderson_monitor option doesn?t seem to produce any output. >> I?ve tried passing ?-snes_anderson_monitor? and ?-snes_anderson_monitor >> true? as options, but don?t see any output analogous to >> ?-snes_gmres_monitor?. What?s the correct way to pass that option? >> >> > in the -snes_ngmres_monitor case, the monitor prints what happens with the > choice between the candidate and the minimized solutions AND if the restart > conditions are in play. In Anderson mixing, the first part doesn't apply, > so -snes_anderson_monitor only prints anything if the method is restarting; > the default is no restart, so nothing will show up. Changing to periodic > restart or difference restart (which takes more reductions) will yield > other output. > > >> >> By Picard, you mean simple fixed-point iteration, right? What >> constitutes a Picard iteration is a longstanding argument on this list and >> therefore requires clarification, unfortunately. 
:) This (without >> linesearch) can be duplicated in PETSc with -snes_type nrichardson >> -snes_linesearch_type basic. For a typical problem one must damp this with >> -snes_linesearch_damping That's what the linesearch is >> there to avoid, but this takes more function evaluations. >> >> >> >> Yes, I mean fixed-point iteration. >> > > Cool. > > >> >> >> I?ve also attached ?log_summary output for NGMRES. Does anything jump out >> as being amiss? >> >> >> >> ########################################################## >> # # >> # WARNING!!! # >> # # >> # This code was compiled with a debugging option, # >> # To get timing results run ./configure # >> # using --with-debugging=no, the performance will # >> # be generally two or three times faster. # >> # # >> ########################################################## >> >> >> >> Timing comparisons aren't reasonable with debugging on. >> >> >> >> Yes, I understand. At this point, I?m just comparing numbers of function >> evalautions. >> > > OK. > > - Peter > > >> >> > >> >> >> >> >> Based on my (limited) understanding of the Oosterlee/Washio SIAM paper >> ("Krylov Subspace Acceleration of Nonlinear Multigrid..."), they seem to >> suggest that it's possible to approximate the Jacobian with a series of >> previously-computed residuals (eq 2.14), rather than additional function >> evaluations in each iteration. Is this correct? If so, could someone point >> me to a reference that demonstrates how to do this with PETSc? >> >> >> >> What indication do you have that the Jacobian is calculated at all in the >> NGMRES method? The two function evaluations are related to computing the >> quantities labeled F(u_M) and F(u_A) in O/W. We already use the Jacobian >> approximation for the minimization problem (2.14). >> >> >> >> - Peter >> >> >> >> Thanks for the clarification. >> >> >> >> -Greg >> >> >> >> >> Or, perhaps a better question to ask is: are there other ways of reducing >> the computing burden associated with estimating the Jacobian? >> >> Thanks, >> Greg >> >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Apr 23 04:18:06 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 23 Apr 2014 17:18:06 +0800 Subject: [petsc-users] KSP breakdown in specific cluster Message-ID: <5357854E.10502@gmail.com> Hi, My code was found to be giving error answer in one of the cluster, even on single processor. No error msg was given. It used to be working fine. I run the debug version and it gives the error msg: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] VecDot_Seq line 62 src/vec/vec/impls/seq/bvec1.c [0]PETSC ERROR: [0] VecDot_MPI line 14 src/vec/vec/impls/mpi/pbvec.c [0]PETSC ERROR: [0] VecDot line 118 src/vec/vec/interface/rvector.c [0]PETSC ERROR: [0] KSPSolve_BCGS line 39 src/ksp/ksp/impls/bcgs/bcgs.c [0]PETSC ERROR: [0] KSPSolve line 356 src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ It happens after KSPSolve. There was no problem on other cluster. So how should I debug to find the error? I tried to compare the input matrix and vector between different cluster but there are too many values. -- Thank you Yours sincerely, TAY wee-beng From zonexo at gmail.com Wed Apr 23 04:55:43 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 23 Apr 2014 17:55:43 +0800 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: <5357854E.10502@gmail.com> References: <5357854E.10502@gmail.com> Message-ID: <53578E1F.4000709@gmail.com> Hi, Just to update that I managed to compare the values by reducing the problem size to hundred plus values. The matrix and vector are almost the same compared to my win7 output. Also tried valgrind but it aborts almost immediately: valgrind --leak-check=yes ./a.out ==17603== Memcheck, a memory error detector. ==17603== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward et al. ==17603== Using LibVEX rev 1658, a library for dynamic binary translation. ==17603== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP. ==17603== Using valgrind-3.2.1, a dynamic binary instrumentation framework. ==17603== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al. ==17603== For more details, rerun with: -v ==17603== --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 vex amd64->IR: unhandled instruction bytes: 0xF 0xAE 0x85 0xF0 ==17603== valgrind: Unrecognised instruction at address 0x5DD0F0E. ==17603== Your program just tried to execute an instruction that Valgrind ==17603== did not recognise. There are two possible reasons for this. ==17603== 1. Your program has a bug and erroneously jumped to a non-code ==17603== location. If you are running Memcheck and you just saw a ==17603== warning about a bad jump, it's probably your program's fault. ==17603== 2. The instruction is legitimate but Valgrind doesn't handle it, ==17603== i.e. it's Valgrind's fault. If you think this is the case or ==17603== you are not sure, please let us know and we'll try to fix it. ==17603== Either way, Valgrind will now raise a SIGILL signal which will ==17603== probably kill your program. forrtl: severe (168): Program Exception - illegal instruction Image PC Routine Line Source libifcore.so.5 0000000005DD0F0E Unknown Unknown Unknown libifcore.so.5 0000000005DD0DC7 Unknown Unknown Unknown a.out 0000000001CB4CBB Unknown Unknown Unknown a.out 00000000004093DC Unknown Unknown Unknown libc.so.6 000000369141D974 Unknown Unknown Unknown a.out 00000000004092E9 Unknown Unknown Unknown ==17603== ==17603== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 5 from 1) ==17603== malloc/free: in use at exit: 239 bytes in 8 blocks. ==17603== malloc/free: 31 allocs, 23 frees, 31,388 bytes allocated. ==17603== For counts of detected errors, rerun with: -v ==17603== searching for pointers to 8 not-freed blocks. ==17603== checked 2,340,280 bytes. 
==17603== ==17603== LEAK SUMMARY: ==17603== definitely lost: 0 bytes in 0 blocks. ==17603== possibly lost: 0 bytes in 0 blocks. ==17603== still reachable: 239 bytes in 8 blocks. ==17603== suppressed: 0 bytes in 0 blocks. ==17603== Reachable blocks (those to which a pointer was found) are not shown. ==17603== To see them, rerun with: --show-reachable=yes Thank you Yours sincerely, TAY wee-beng On 23/4/2014 5:18 PM, TAY wee-beng wrote: > Hi, > > My code was found to be giving error answer in one of the cluster, > even on single processor. No error msg was given. It used to be > working fine. > > I run the debug version and it gives the error msg: > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point > Exception,probably divide by zero > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] VecDot_Seq line 62 src/vec/vec/impls/seq/bvec1.c > [0]PETSC ERROR: [0] VecDot_MPI line 14 src/vec/vec/impls/mpi/pbvec.c > [0]PETSC ERROR: [0] VecDot line 118 src/vec/vec/interface/rvector.c > [0]PETSC ERROR: [0] KSPSolve_BCGS line 39 src/ksp/ksp/impls/bcgs/bcgs.c > [0]PETSC ERROR: [0] KSPSolve line 356 src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > It happens after KSPSolve. There was no problem on other cluster. So > how should I debug to find the error? > > I tried to compare the input matrix and vector between different > cluster but there are too many values. > From knepley at gmail.com Wed Apr 23 05:00:48 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 23 Apr 2014 06:00:48 -0400 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: <53578E1F.4000709@gmail.com> References: <5357854E.10502@gmail.com> <53578E1F.4000709@gmail.com> Message-ID: On Wed, Apr 23, 2014 at 5:55 AM, TAY wee-beng wrote: > Hi, > > Just to update that I managed to compare the values by reducing the > problem size to hundred plus values. The matrix and vector are almost the > same compared to my win7 output. > Run in the debugger and get a stack trace, Matt > Also tried valgrind but it aborts almost immediately: > > valgrind --leak-check=yes ./a.out > ==17603== Memcheck, a memory error detector. > ==17603== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward et al. > ==17603== Using LibVEX rev 1658, a library for dynamic binary translation. > ==17603== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP. > ==17603== Using valgrind-3.2.1, a dynamic binary instrumentation framework. > ==17603== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al. 
> ==17603== For more details, rerun with: -v > ==17603== > --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 > --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 > vex amd64->IR: unhandled instruction bytes: 0xF 0xAE 0x85 0xF0 > ==17603== valgrind: Unrecognised instruction at address 0x5DD0F0E. > ==17603== Your program just tried to execute an instruction that Valgrind > ==17603== did not recognise. There are two possible reasons for this. > ==17603== 1. Your program has a bug and erroneously jumped to a non-code > ==17603== location. If you are running Memcheck and you just saw a > ==17603== warning about a bad jump, it's probably your program's fault. > ==17603== 2. The instruction is legitimate but Valgrind doesn't handle it, > ==17603== i.e. it's Valgrind's fault. If you think this is the case or > ==17603== you are not sure, please let us know and we'll try to fix it. > ==17603== Either way, Valgrind will now raise a SIGILL signal which will > ==17603== probably kill your program. > forrtl: severe (168): Program Exception - illegal instruction > Image PC Routine Line Source > libifcore.so.5 0000000005DD0F0E Unknown Unknown Unknown > libifcore.so.5 0000000005DD0DC7 Unknown Unknown Unknown > a.out 0000000001CB4CBB Unknown Unknown Unknown > a.out 00000000004093DC Unknown Unknown Unknown > libc.so.6 000000369141D974 Unknown Unknown Unknown > a.out 00000000004092E9 Unknown Unknown Unknown > ==17603== > ==17603== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 5 from 1) > ==17603== malloc/free: in use at exit: 239 bytes in 8 blocks. > ==17603== malloc/free: 31 allocs, 23 frees, 31,388 bytes allocated. > ==17603== For counts of detected errors, rerun with: -v > ==17603== searching for pointers to 8 not-freed blocks. > ==17603== checked 2,340,280 bytes. > ==17603== > ==17603== LEAK SUMMARY: > ==17603== definitely lost: 0 bytes in 0 blocks. > ==17603== possibly lost: 0 bytes in 0 blocks. > ==17603== still reachable: 239 bytes in 8 blocks. > ==17603== suppressed: 0 bytes in 0 blocks. > ==17603== Reachable blocks (those to which a pointer was found) are not > shown. > ==17603== To see them, rerun with: --show-reachable=yes > > Thank you > > Yours sincerely, > > TAY wee-beng > > On 23/4/2014 5:18 PM, TAY wee-beng wrote: > >> Hi, >> >> My code was found to be giving error answer in one of the cluster, even >> on single processor. No error msg was given. It used to be working fine. >> >> I run the debug version and it gives the error msg: >> >> [0]PETSC ERROR: ------------------------------ >> ------------------------------------------ >> [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point >> Exception,probably divide by zero >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/ >> documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.orgon GNU/linux and Apple Mac OS X to find memory corruption errors >> [0]PETSC ERROR: likely location of problem given in stack below >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [0]PETSC ERROR: INSTEAD the line number of the start of the function >> [0]PETSC ERROR: is given. 
>> [0]PETSC ERROR: [0] VecDot_Seq line 62 src/vec/vec/impls/seq/bvec1.c >> [0]PETSC ERROR: [0] VecDot_MPI line 14 src/vec/vec/impls/mpi/pbvec.c >> [0]PETSC ERROR: [0] VecDot line 118 src/vec/vec/interface/rvector.c >> [0]PETSC ERROR: [0] KSPSolve_BCGS line 39 src/ksp/ksp/impls/bcgs/bcgs.c >> [0]PETSC ERROR: [0] KSPSolve line 356 src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Signal received! >> [0]PETSC ERROR: ------------------------------ >> ------------------------------------------ >> >> It happens after KSPSolve. There was no problem on other cluster. So how >> should I debug to find the error? >> >> I tried to compare the input matrix and vector between different cluster >> but there are too many values. >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sudheer3 at gmail.com Wed Apr 23 07:36:58 2014 From: sudheer3 at gmail.com (Sudheer Kumar) Date: Wed, 23 Apr 2014 18:06:58 +0530 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: <15AAED25-B41B-434A-801E-E3435A235A38@mcs.anl.gov> References: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> <15AAED25-B41B-434A-801E-E3435A235A38@mcs.anl.gov> Message-ID: Hi Barry, I have looked at code in bin/matlab/classes. It is not clear to me how to use the code. Great if you could help me how to use these codes. So, with these codes can we run the parallel version of Petsc code from MATLAB. Is mex interface involved in compiling the Petsc code to MATLAB executable dynamic library. Regards Sudheer On Tue, Apr 22, 2014 at 11:19 PM, Barry Smith wrote: > > PETSc does have a MATLAB interface in bin/matlab/classes that provides > most of PETSc's functionality directly to Matlab. Unfortunately we do not > have the resources to maintain it, so it is a bit broken. If you are > familiar with MATLAB and interfacing with C then git it a try and we can > provide a small amount of support to help you use it, but as I said we > cannot make it perfect ourselves. > > Barry > > > On Apr 22, 2014, at 12:19 PM, Sudheer Kumar wrote: > > > > > On Tue, Apr 22, 2014 at 9:41 PM, Barry Smith wrote: > > How much Matlab code do you have that computes the Jacobian and > functions and is it parallel? > > > > How much Matlab code do you have that computes the Jacobian and > functions and is it parallel? > > > > The MATLAB code that I am working with is of 1000s lines of code > (reservoir simulator, http://www.sintef.no/Projectweb/MRST/). The 90% of > the time taken by the code on benchmarks is for linear solvers. And that's > the reason to look for PetSc to speedup the entire code. > > > > >> This is a great deal of work and probably not worth it. > > Ok, unfortunately here I guess I need to attempt to do that work. Do let > me know your suggestions. Thanks. > > > > > > Regards > > Sudheer > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zanon at aices.rwth-aachen.de Wed Apr 23 10:02:13 2014 From: zanon at aices.rwth-aachen.de (Lorenzo Zanon) Date: Wed, 23 Apr 2014 17:02:13 +0200 Subject: [petsc-users] install Slepc wth Arpack Message-ID: <5357D5F5.1030908@aices.rwth-aachen.de> Hello, I would like to install Slepc with Arpack, but the procedure I'm using is not working. 
I downloaded arpack-ng and configured like this: ./configure --prefix=$HOME/Desktop/arpack_dir So the libraries can be found inside arpack_dir. Then I set the PETSC_DIR, PETSC_ARCH and SLEPC_DIR. Following the instructions on the website, inside the Slepc3.4.4 directory, I configure with: ./configure --with-arpack-dir=$HOME/Desktop/arpack_dir/lib --with-arpack-flags='-lparpack -larpack' which unfortunately doesn't work: Checking environment... Checking PETSc installation... Checking ARPACK library... ERROR: Unable to link with library ARPACK ERROR: In directories /home/lz474069/Desktop/arpack_dir/lib ERROR: With flags -L/home/lz474069/Desktop/arpack_dir/lib -lparpack -larpack ERROR: See "arch-linux2-c-debug/conf/configure.log" file for details Can anybody see where am I making a mistake? Thanks! Lorenzo -- ============================ Lorenzo Zanon, M.Sc. Doctoral Candidate Graduate School AICES RWTH Aachen University Schinkelstra?e 2 52062 Aachen Deutschland zanon at aices.rwth-aachen.de +49 (0)241 80 99 147 ============================ From jroman at dsic.upv.es Wed Apr 23 10:29:57 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 23 Apr 2014 17:29:57 +0200 Subject: [petsc-users] install Slepc wth Arpack In-Reply-To: <5357D5F5.1030908@aices.rwth-aachen.de> References: <5357D5F5.1030908@aices.rwth-aachen.de> Message-ID: <2BAA2B5A-7437-421A-A9E6-BC7938304C23@dsic.upv.es> El 23/04/2014, a las 17:02, Lorenzo Zanon escribi?: > Hello, > > I would like to install Slepc with Arpack, but the procedure I'm using is not working. > > I downloaded arpack-ng and configured like this: > > ./configure --prefix=$HOME/Desktop/arpack_dir > > So the libraries can be found inside arpack_dir. Then I set the PETSC_DIR, PETSC_ARCH and SLEPC_DIR. Following the instructions on the website, inside the Slepc3.4.4 directory, I configure with: > > ./configure --with-arpack-dir=$HOME/Desktop/arpack_dir/lib --with-arpack-flags='-lparpack -larpack' > > which unfortunately doesn't work: > > Checking environment... > Checking PETSc installation... > Checking ARPACK library... > ERROR: Unable to link with library ARPACK > ERROR: In directories /home/lz474069/Desktop/arpack_dir/lib > ERROR: With flags -L/home/lz474069/Desktop/arpack_dir/lib -lparpack -larpack > > ERROR: See "arch-linux2-c-debug/conf/configure.log" file for details > > Can anybody see where am I making a mistake? > > Thanks! > Lorenzo Send configure.log to slepc-maint Jose From romain.veltz at inria.fr Wed Apr 23 12:04:32 2014 From: romain.veltz at inria.fr (Veltz Romain) Date: Wed, 23 Apr 2014 19:04:32 +0200 Subject: [petsc-users] setting Sundials options within petsc4py Message-ID: Dear pets users, I have been trying to set some sundials options from petsc4py but I am unsuccessful. I hope somebody will have a good pointer. For example I can pass -ts_sundials_type bdf to python in the command line. I would like to call TSSundialsSetTolerance to set the tolerance. Hence I need to pass some options in the command line (I don't know if this is possible for this particular function) or call TSSundialsSetTolerance(?) from petsc4py. So far, I am clueless. Also, I would like to limit the step sizes. This is not documented in petsc_manual.pdf. But in sundial.c, there is PetscErrorCode TSSundialsSetMaxTimeStep_Sundials(TS ts,PetscReal maxdt) which I would like to call also from petsc4py. 
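For reference, the C-level calls in question look roughly like the sketch below; this is what a Python-level wrapper would ultimately forward to. It assumes PETSc was configured --with-sundials, and the tolerance argument order (absolute, then relative) is my reading of the headers:

    #include <petscts.h>

    /* sketch: switch an existing TS over to Sundials BDF and set its tolerances */
    PetscErrorCode ConfigureSundials(TS ts,PetscReal abstol,PetscReal reltol)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = TSSetType(ts,TSSUNDIALS);CHKERRQ(ierr);
      ierr = TSSundialsSetType(ts,SUNDIALS_BDF);CHKERRQ(ierr);   /* C equivalent of -ts_sundials_type bdf */
      ierr = TSSundialsSetTolerance(ts,abstol,reltol);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

A setter for the maximum time step would presumably be wrapped the same way once a public interface to the internal routine quoted above is available.
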
Thank you for your help, Best regards, Veltz Romain Neuromathcomp Project Team Inria Sophia Antipolis M?diterran?e 2004 Route des Lucioles-BP 93 FR-06902 Sophia Antipolis http://www-sop.inria.fr/members/Romain.Veltz/public_html/Home.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 23 13:59:13 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Apr 2014 13:59:13 -0500 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: References: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> <15AAED25-B41B-434A-801E-E3435A235A38@mcs.anl.gov> Message-ID: On Apr 23, 2014, at 7:36 AM, Sudheer Kumar wrote: > Hi Barry, > > I have looked at code in bin/matlab/classes. It is not clear to me how to use the code. Examples are in the examples/tutorials subdirectory of that directory. > Great if you could help me how to use these codes. So, with these codes can we run the parallel version of Petsc code from MATLAB. Is mex interface involved in compiling the Petsc code to MATLAB executable dynamic library. No mex is not used; instead the MATLAB function calllib() is used which is a million times easier than messing with mex. To use PETSc from Matlab you need to ./configure PETSc with the option ?with-matlab It does not work currently in parallel. If you put the classes directory in your matlab path than you can do help Petsc etc to see help for classes or just look in the top of those files. You will have to be proactive at using this stuff and figuring it out I cannot walk you through every little step. Barry Note that octave does not support calllib() or at least did not in the past. > > > > > Regards > Sudheer > > > On Tue, Apr 22, 2014 at 11:19 PM, Barry Smith wrote: > > PETSc does have a MATLAB interface in bin/matlab/classes that provides most of PETSc?s functionality directly to Matlab. Unfortunately we do not have the resources to maintain it, so it is a bit broken. If you are familiar with MATLAB and interfacing with C then git it a try and we can provide a small amount of support to help you use it, but as I said we cannot make it perfect ourselves. > > Barry > > > On Apr 22, 2014, at 12:19 PM, Sudheer Kumar wrote: > > > > > On Tue, Apr 22, 2014 at 9:41 PM, Barry Smith wrote: > > How much Matlab code do you have that computes the Jacobian and functions and is it parallel? > > > > How much Matlab code do you have that computes the Jacobian and functions and is it parallel? > > > > The MATLAB code that I am working with is of 1000s lines of code (reservoir simulator, http://www.sintef.no/Projectweb/MRST/). The 90% of the time taken by the code on benchmarks is for linear solvers. And that's the reason to look for PetSc to speedup the entire code. > > > > >> This is a great deal of work and probably not worth it. > > Ok, unfortunately here I guess I need to attempt to do that work. Do let me know your suggestions. Thanks. > > > > > > Regards > > Sudheer > > From u.tabak at tudelft.nl Wed Apr 23 14:06:04 2014 From: u.tabak at tudelft.nl (Umut Tabak) Date: Wed, 23 Apr 2014 21:06:04 +0200 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: References: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> <15AAED25-B41B-434A-801E-E3435A235A38@mcs.anl.gov> Message-ID: <53580F1C.1030505@tudelft.nl> On 04/23/2014 08:59 PM, Barry Smith wrote: > > No mex is not used; instead the MATLAB function calllib() is used which is a million times easier than messing with mex. 
To use PETSc from Matlab you need to ./configure PETSc with the option ?with-matlab It does not work currently in parallel. If you put the classes directory in your matlab path than you can do help Petsc etc to see help for classes or just look in the top of those files. You will have to be proactive at using this stuff and figuring it out I cannot walk you through every little step. Hi Barry, I am following this discussion at the moment, but there is no documentation at the moment for this part of PETSc, right? It is as is... BR, Umut From bsmith at mcs.anl.gov Wed Apr 23 14:15:02 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Apr 2014 14:15:02 -0500 Subject: [petsc-users] MATLAB and PetSc In-Reply-To: <53580F1C.1030505@tudelft.nl> References: <237DD811-AD05-4BD2-97FF-1915A8ACD694@mcs.anl.gov> <15AAED25-B41B-434A-801E-E3435A235A38@mcs.anl.gov> <53580F1C.1030505@tudelft.nl> Message-ID: <5723B8DE-6745-4AFE-B335-A1F2E803ECCE@mcs.anl.gov> On Apr 23, 2014, at 2:06 PM, Umut Tabak wrote: > On 04/23/2014 08:59 PM, Barry Smith wrote: >> >> No mex is not used; instead the MATLAB function calllib() is used which is a million times easier than messing with mex. To use PETSc from Matlab you need to ./configure PETSc with the option ?with-matlab It does not work currently in parallel. If you put the classes directory in your matlab path than you can do help Petsc etc to see help for classes or just look in the top of those files. You will have to be proactive at using this stuff and figuring it out I cannot walk you through every little step. > Hi Barry, > > I am following this discussion at the moment, but there is no documentation at the moment for this part of PETSc, right? It is as is? There is no comprehensive beginning to end tutorial that focuses on using PETSc exclusively from Matlab. You have to use a combination of the regular users manual, the regular manual pages, the matlab examples and the documentation at the top of the matlab classes (help Petsc etc). It would be great if someone put together some coordinated documentation. Barry > > BR, > Umut From jed at jedbrown.org Wed Apr 23 18:23:56 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 23 Apr 2014 18:23:56 -0500 Subject: [petsc-users] setting Sundials options within petsc4py In-Reply-To: References: Message-ID: <8738h38wab.fsf@jedbrown.org> Veltz Romain writes: > Dear pets users, > > I have been trying to set some sundials options from petsc4py but I am > unsuccessful. I hope somebody will have a good pointer. For example I > can pass -ts_sundials_type bdf to python in the command line. > > I would like to call TSSundialsSetTolerance to set the > tolerance. Hence I need to pass some options in the command line (I > don't know if this is possible for this particular function) or call > TSSundialsSetTolerance(?) from petsc4py. So far, I am clueless. These functions are not yet implemented in petsc4py. You could clone the repository (https://bitbucket.org/petsc/petsc4py) and add the interfaces you need. It should be straightforward based on similar interfaces that already exist. We can accept patches/pull requests, but don't always have time to ensure that every function is exposed via petsc4py. > Also, I would like to limit the step sizes. This is not documented in > petsc_manual.pdf. But in sundial.c, there is PetscErrorCode > TSSundialsSetMaxTimeStep_Sundials(TS ts,PetscReal maxdt) which I would > like to call also from petsc4py. 
> > Thank you for your help, > > Best regards, > > > Veltz Romain > > Neuromathcomp Project Team > Inria Sophia Antipolis M?diterran?e > 2004 Route des Lucioles-BP 93 > FR-06902 Sophia Antipolis > http://www-sop.inria.fr/members/Romain.Veltz/public_html/Home.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From past at blackboard.com Wed Apr 23 19:59:34 2014 From: past at blackboard.com (BlackBoard) Date: Wed, 23 Apr 2014 17:59:34 -0700 Subject: [petsc-users] Your email account Message-ID: You have 1 new message Sign In Blackboard | Technology Services -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Apr 23 20:12:25 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 24 Apr 2014 09:12:25 +0800 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: References: <5357854E.10502@gmail.com> <53578E1F.4000709@gmail.com> Message-ID: <535864F9.5010505@gmail.com> On 23/4/2014 6:00 PM, Matthew Knepley wrote: > On Wed, Apr 23, 2014 at 5:55 AM, TAY wee-beng > wrote: > > Hi, > > Just to update that I managed to compare the values by reducing > the problem size to hundred plus values. The matrix and vector are > almost the same compared to my win7 output. > > > Run in the debugger and get a stack trace, Hi, I use -start_in_debugger option and it hangs at this point: Program received signal SIGFPE, Arithmetic exception. VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 71 ierr = PetscLogFlops(2.0*xin->map->n-1);CHKERRQ(ierr); (gdb) where #0 VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 #1 0x0000000001f1d8b5 in VecDot_MPI (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd7f40) at pbvec.c:15 #2 0x0000000001edfa14 in VecDot (x=0x14ad3940, y=0x14ad8fb0, val=0x7fff24cd7f40) at rvector.c:128 #3 0x00000000025cf539 in KSPSolve_BCGS (ksp=0x1479d910) at bcgs.c:85 #4 0x0000000002576687 in KSPSolve (ksp=0x1479d910, b=0x1476b110, x=0x14771890) at itfunc.c:441 #5 0x0000000001d859d9 in kspsolve_ (ksp=0x395a548, b=0x395a650, x=0x3959f38, __ierr=0x384d8b8) at itfuncf.c:219 #6 0x0000000001c37def in petsc_solvers_mp_semi_momentum_simple_xyz_ () #7 0x0000000001c97c02 in fractional_initial_mp_fractional_steps_ () #8 0x0000000001cbc336 in ibm3d_high_re () at ibm3d_high_Re.F90:675 #9 0x00000000004093dc in main () (gdb) Is this what you mean by a stack trace? I have also used "bt full" and I have attached a more detailed output. > > Matt > > Also tried valgrind but it aborts almost immediately: > > valgrind --leak-check=yes ./a.out > ==17603== Memcheck, a memory error detector. > ==17603== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward > et al. > ==17603== Using LibVEX rev 1658, a library for dynamic binary > translation. > ==17603== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP. > ==17603== Using valgrind-3.2.1, a dynamic binary instrumentation > framework. > ==17603== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward > et al. > ==17603== For more details, rerun with: -v > ==17603== > --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 > --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 > vex amd64->IR: unhandled instruction bytes: 0xF 0xAE 0x85 0xF0 > ==17603== valgrind: Unrecognised instruction at address 0x5DD0F0E. 
> ==17603== Your program just tried to execute an instruction that > Valgrind > ==17603== did not recognise. There are two possible reasons for this. > ==17603== 1. Your program has a bug and erroneously jumped to a > non-code > ==17603== location. If you are running Memcheck and you just saw a > ==17603== warning about a bad jump, it's probably your > program's fault. > ==17603== 2. The instruction is legitimate but Valgrind doesn't > handle it, > ==17603== i.e. it's Valgrind's fault. If you think this is the > case or > ==17603== you are not sure, please let us know and we'll try to > fix it. > ==17603== Either way, Valgrind will now raise a SIGILL signal > which will > ==17603== probably kill your program. > forrtl: severe (168): Program Exception - illegal instruction > Image PC Routine Line Source > libifcore.so.5 0000000005DD0F0E Unknown Unknown Unknown > libifcore.so.5 0000000005DD0DC7 Unknown Unknown Unknown > a.out 0000000001CB4CBB Unknown Unknown Unknown > a.out 00000000004093DC Unknown Unknown Unknown > libc.so.6 000000369141D974 Unknown Unknown Unknown > a.out 00000000004092E9 Unknown Unknown Unknown > ==17603== > ==17603== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 5 > from 1) > ==17603== malloc/free: in use at exit: 239 bytes in 8 blocks. > ==17603== malloc/free: 31 allocs, 23 frees, 31,388 bytes allocated. > ==17603== For counts of detected errors, rerun with: -v > ==17603== searching for pointers to 8 not-freed blocks. > ==17603== checked 2,340,280 bytes. > ==17603== > ==17603== LEAK SUMMARY: > ==17603== definitely lost: 0 bytes in 0 blocks. > ==17603== possibly lost: 0 bytes in 0 blocks. > ==17603== still reachable: 239 bytes in 8 blocks. > ==17603== suppressed: 0 bytes in 0 blocks. > ==17603== Reachable blocks (those to which a pointer was found) > are not shown. > ==17603== To see them, rerun with: --show-reachable=yes > > Thank you > > Yours sincerely, > > TAY wee-beng > > On 23/4/2014 5:18 PM, TAY wee-beng wrote: > > Hi, > > My code was found to be giving error answer in one of the > cluster, even on single processor. No error msg was given. It > used to be working fine. > > I run the debug version and it gives the error msg: > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point > Exception,probably divide by zero > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are > not available, > [0]PETSC ERROR: INSTEAD the line number of the start of > the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] VecDot_Seq line 62 > src/vec/vec/impls/seq/bvec1.c > [0]PETSC ERROR: [0] VecDot_MPI line 14 > src/vec/vec/impls/mpi/pbvec.c > [0]PETSC ERROR: [0] VecDot line 118 > src/vec/vec/interface/rvector.c > [0]PETSC ERROR: [0] KSPSolve_BCGS line 39 > src/ksp/ksp/impls/bcgs/bcgs.c > [0]PETSC ERROR: [0] KSPSolve line 356 > src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > > It happens after KSPSolve. There was no problem on other > cluster. So how should I debug to find the error? > > I tried to compare the input matrix and vector between > different cluster but there are too many values. > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- (gdb) bt full #0 VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 ya = (const PetscScalar *) 0x0 xa = (const PetscScalar *) 0x0 one = 1 bn = 960 ierr = 0 #1 0x0000000001f1d8b5 in VecDot_MPI (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd7f40) at pbvec.c:15 sum = 1.9762625833649862e-323 work = 1.3431953209405154 ierr = 0 #2 0x0000000001edfa14 in VecDot (x=0x14ad3940, y=0x14ad8fb0, val=0x7fff24cd7f40) at rvector.c:128 ierr = 0 #3 0x00000000025cf539 in KSPSolve_BCGS (ksp=0x1479d910) at bcgs.c:85 ierr = 0 i = 0 rho = 1.9762625833649862e-323 rhoold = 1 alpha = 1 beta = 1.600807474747106e-316 omega = 1.6910452843641213e-315 omegaold = 1 ---Type to continue, or q to quit--- d1 = 0 X = (Vec) 0x14771890 B = (Vec) 0x1476b110 V = (Vec) 0x14ade620 P = (Vec) 0x14aee970 R = (Vec) 0x14ad3940 RP = (Vec) 0x14ad8fb0 T = (Vec) 0x14ae3c90 S = (Vec) 0x14ae9300 dp = 1.1589630369172761 d2 = 1.5718032521948665e-316 bcgs = (KSP_BCGS *) 0x14abdc40 #4 0x0000000002576687 in KSPSolve (ksp=0x1479d910, b=0x1476b110, x=0x14771890) at itfunc.c:441 ierr = 0 rank = 32767 flag1 = PETSC_FALSE flag2 = PETSC_FALSE flag3 = PETSC_FALSE flg = PETSC_FALSE inXisinB = PETSC_FALSE guess_zero = PETSC_TRUE viewer = (PetscViewer) 0x7fff0000000b ---Type to continue, or q to quit--- mat = (Mat) 0x0 premat = (Mat) 0x7fff24ce0cf8 format = PETSC_VIEWER_DEFAULT #5 0x0000000001d859d9 in kspsolve_ (ksp=0x395a548, b=0x395a650, x=0x3959f38, __ierr=0x384d8b8) at itfuncf.c:219 No locals. #6 0x0000000001c37def in petsc_solvers_mp_semi_momentum_simple_xyz_ () No symbol table info available. #7 0x0000000001c97c02 in fractional_initial_mp_fractional_steps_ () No symbol table info available. #8 0x0000000001cbc336 in ibm3d_high_re () at ibm3d_high_Re.F90:675 filename = file_write_no_char = del_t_tmp = 0 chord = 0 sum_w = 317.24634223818254 sum_v = -0.029308797839978005 sum_u = -0.26935975976548698 max_w = 1.1569511841851332 max_v = 0.15264060063132492 max_u = 0.23053472715538209 explode_check = 10 error_all = 0 ---Type to continue, or q to quit--- ijk = 1 escape_time = 99990000 error = 0 interval2 = -16000 openstatus = {0, 0, 0, 0, 0, 0} #9 0x00000000004093dc in main () No symbol table info available. From zonexo at gmail.com Wed Apr 23 20:30:09 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 24 Apr 2014 09:30:09 +0800 Subject: [petsc-users] Problem with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 In-Reply-To: References: <534C9A2C.5060404@gmail.com> <534C9DB5.9070407@gmail.com> <53514B8A.90901@gmail.com> <495519DB-D3A1-4190-AED2-4ABA885C2835@mcs.anl.gov> <5351E62B.6060201@gmail.com> <53520587.6010606@gmail.com> <62DF81C7-0C35-410C-8D4C-206FBB22576A@mcs.anl.gov> <535248E8.2070002@gmail.com> <535284E0.8010901@gmail.com> <5352934C.1010306@gmail.com> <53529B09.8040009@gmail.com> <5353173D.60609@gmail.com> <53546B03.1010407@gmail.com> Message-ID: <53586921.7030907@gmail.com> Hi, Ya that's better. 
I've been trying to remove the unnecessary codes (also some copyright issues with the university) and to present only the code which shows the error. But strangely, by restructuring and removing all the redundant stuffs, the error is no longer there. I'm now trying to add back and re-construct my code and see if it appears. I'll let you people know if there's update. Thank you. Yours sincerely, TAY wee-beng On 21/4/2014 8:58 AM, Barry Smith wrote: > Please send the entire code. If we can run it and reproduce the problem we can likely track down the issue much faster than through endless rounds of email. > > Barry > > On Apr 20, 2014, at 7:49 PM, TAY wee-beng wrote: > >> On 20/4/2014 8:39 AM, TAY wee-beng wrote: >>> On 20/4/2014 1:02 AM, Matthew Knepley wrote: >>>> On Sat, Apr 19, 2014 at 10:49 AM, TAY wee-beng wrote: >>>> On 19/4/2014 11:39 PM, Matthew Knepley wrote: >>>>> On Sat, Apr 19, 2014 at 10:16 AM, TAY wee-beng wrote: >>>>> On 19/4/2014 10:55 PM, Matthew Knepley wrote: >>>>>> On Sat, Apr 19, 2014 at 9:14 AM, TAY wee-beng wrote: >>>>>> On 19/4/2014 6:48 PM, Matthew Knepley wrote: >>>>>>> On Sat, Apr 19, 2014 at 4:59 AM, TAY wee-beng wrote: >>>>>>> On 19/4/2014 1:17 PM, Barry Smith wrote: >>>>>>> On Apr 19, 2014, at 12:11 AM, TAY wee-beng wrote: >>>>>>> >>>>>>> On 19/4/2014 12:10 PM, Barry Smith wrote: >>>>>>> On Apr 18, 2014, at 9:57 PM, TAY wee-beng wrote: >>>>>>> >>>>>>> On 19/4/2014 3:53 AM, Barry Smith wrote: >>>>>>> Hmm, >>>>>>> >>>>>>> Interface DMDAVecGetArrayF90 >>>>>>> Subroutine DMDAVecGetArrayF903(da1, v,d1,ierr) >>>>>>> USE_DM_HIDE >>>>>>> DM_HIDE da1 >>>>>>> VEC_HIDE v >>>>>>> PetscScalar,pointer :: d1(:,:,:) >>>>>>> PetscErrorCode ierr >>>>>>> End Subroutine >>>>>>> >>>>>>> So the d1 is a F90 POINTER. But your subroutine seems to be treating it as a ?plain old Fortran array?? >>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>> Hi, >>>>>>> >>>>>>> So d1 is a pointer, and it's different if I declare it as "plain old Fortran array"? Because I declare it as a Fortran array and it works w/o any problem if I only call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u". >>>>>>> >>>>>>> But if I call DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 with "u", "v" and "w", error starts to happen. I wonder why... >>>>>>> >>>>>>> Also, supposed I call: >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> u_array .... >>>>>>> >>>>>>> v_array .... etc >>>>>>> >>>>>>> Now to restore the array, does it matter the sequence they are restored? >>>>>>> No it should not matter. If it matters that is a sign that memory has been written to incorrectly earlier in the code. >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Hmm, I have been getting different results on different intel compilers. I'm not sure if MPI played a part but I'm only using a single processor. In the debug mode, things run without problem. 
In optimized mode, in some cases, the code aborts even doing simple initialization: >>>>>>> >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>>>>>> >>>>>>> u_array = 0.d0 >>>>>>> >>>>>>> v_array = 0.d0 >>>>>>> >>>>>>> w_array = 0.d0 >>>>>>> >>>>>>> p_array = 0.d0 >>>>>>> >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>>>>>> >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> The code aborts at call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr), giving segmentation error. But other version of intel compiler passes thru this part w/o error. Since the response is different among different compilers, is this PETSc or intel 's bug? Or mvapich or openmpi? >>>>>>> >>>>>>> We do this is a bunch of examples. Can you reproduce this different behavior in src/dm/examples/tutorials/ex11f90.F? >>>>>> Hi Matt, >>>>>> >>>>>> Do you mean putting the above lines into ex11f90.F and test? >>>>>> >>>>>> It already has DMDAVecGetArray(). Just run it. >>>>> Hi, >>>>> >>>>> It worked. The differences between mine and the code is the way the fortran modules are defined, and the ex11f90 only uses global vectors. Does it make a difference whether global or local vectors are used? Because the way it accesses x1 only touches the local region. >>>>> >>>>> No the global/local difference should not matter. >>>>> >>>>> Also, before using DMDAVecGetArrayF90, DMGetGlobalVector must be used 1st, is that so? I can't find the equivalent for local vector though. >>>>> >>>>> DMGetLocalVector() >>>> Ops, I do not have DMGetLocalVector and DMRestoreLocalVector in my code. Does it matter? >>>> >>>> If so, when should I call them? >>>> >>>> You just need a local vector from somewhere. >> Hi, >> >> Anyone can help with the questions below? Still trying to find why my code doesn't work. >> >> Thanks. >>> Hi, >>> >>> I insert part of my error region code into ex11f90: >>> >>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecGetArrayF90(da_p,p_local,p_array,ierr) >>> >>> u_array = 0.d0 >>> >>> v_array = 0.d0 >>> >>> w_array = 0.d0 >>> >>> p_array = 0.d0 >>> >>> call DMDAVecRestoreArrayF90(da_p,p_local,p_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>> >>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>> >>> It worked w/o error. I'm going to change the way the modules are defined in my code. >>> >>> My code contains a main program and a number of modules files, with subroutines inside e.g. >>> >>> module solve >>> <- add include file? >>> subroutine RRK >>> <- add include file? >>> end subroutine RRK >>> >>> end module solve >>> >>> So where should the include files (#include ) be placed? >>> >>> After the module or inside the subroutine? >>> >>> Thanks. >>>> Matt >>>> >>>> Thanks. >>>>> Matt >>>>> >>>>> Thanks. >>>>>> Matt >>>>>> >>>>>> Thanks >>>>>> >>>>>> Regards. >>>>>>> Matt >>>>>>> >>>>>>> As in w, then v and u? 
>>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> thanks >>>>>>> Note also that the beginning and end indices of the u,v,w, are different for each process see for examplehttp://www.mcs.anl.gov/petsc/petsc-3.4/src/dm/examples/tutorials/ex11f90.F (and they do not start at 1). This is how to get the loop bounds. >>>>>>> Hi, >>>>>>> >>>>>>> In my case, I fixed the u,v,w such that their indices are the same. I also checked using DMDAGetCorners and DMDAGetGhostCorners. Now the problem lies in my subroutine treating it as a ?plain old Fortran array?. >>>>>>> >>>>>>> If I declare them as pointers, their indices follow the C 0 start convention, is that so? >>>>>>> Not really. It is that in each process you need to access them from the indices indicated by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors. So really C or Fortran doesn?t make any difference. >>>>>>> >>>>>>> >>>>>>> So my problem now is that in my old MPI code, the u(i,j,k) follow the Fortran 1 start convention. Is there some way to manipulate such that I do not have to change my u(i,j,k) to u(i-1,j-1,k-1)? >>>>>>> If you code wishes to access them with indices plus one from the values returned by DMDAGetCorners() for global vectors and DMDAGetGhostCorners() for local vectors then you need to manually subtract off the 1. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> Thanks. >>>>>>> Barry >>>>>>> >>>>>>> On Apr 18, 2014, at 10:58 AM, TAY wee-beng wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I tried to pinpoint the problem. I reduced my job size and hence I can run on 1 processor. Tried using valgrind but perhaps I'm using the optimized version, it didn't catch the error, besides saying "Segmentation fault (core dumped)" >>>>>>> >>>>>>> However, by re-writing my code, I found out a few things: >>>>>>> >>>>>>> 1. if I write my code this way: >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> u_array = .... >>>>>>> >>>>>>> v_array = .... >>>>>>> >>>>>>> w_array = .... >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> The code runs fine. >>>>>>> >>>>>>> 2. if I write my code this way: >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> call uvw_array_change(u_array,v_array,w_array) -> this subroutine does the same modification as the above. >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) -> error >>>>>>> >>>>>>> where the subroutine is: >>>>>>> >>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>> >>>>>>> real(8), intent(inout) :: u(:,:,:),v(:,:,:),w(:,:,:) >>>>>>> >>>>>>> u ... >>>>>>> v... >>>>>>> w ... >>>>>>> >>>>>>> end subroutine uvw_array_change. >>>>>>> >>>>>>> The above will give an error at : >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> 3. 
Same as above, except I change the order of the last 3 lines to: >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) >>>>>>> >>>>>>> So they are now in reversed order. Now it works. >>>>>>> >>>>>>> 4. Same as 2 or 3, except the subroutine is changed to : >>>>>>> >>>>>>> subroutine uvw_array_change(u,v,w) >>>>>>> >>>>>>> real(8), intent(inout) :: u(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>> >>>>>>> real(8), intent(inout) :: v(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>> >>>>>>> real(8), intent(inout) :: w(start_indices(1):end_indices(1),start_indices(2):end_indices(2),start_indices(3):end_indices(3)) >>>>>>> >>>>>>> u ... >>>>>>> v... >>>>>>> w ... >>>>>>> >>>>>>> end subroutine uvw_array_change. >>>>>>> >>>>>>> The start_indices and end_indices are simply to shift the 0 indices of C convention to that of the 1 indices of the Fortran convention. This is necessary in my case because most of my codes start array counting at 1, hence the "trick". >>>>>>> >>>>>>> However, now no matter which order of the DMDAVecRestoreArrayF90 (as in 2 or 3), error will occur at "call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) " >>>>>>> >>>>>>> So did I violate and cause memory corruption due to the trick above? But I can't think of any way other than the "trick" to continue using the 1 indices convention. >>>>>>> >>>>>>> Thank you. >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> TAY wee-beng >>>>>>> >>>>>>> On 15/4/2014 8:00 PM, Barry Smith wrote: >>>>>>> Try running under valgrindhttp://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>>>>> >>>>>>> >>>>>>> On Apr 14, 2014, at 9:47 PM, TAY wee-beng wrote: >>>>>>> >>>>>>> Hi Barry, >>>>>>> >>>>>>> As I mentioned earlier, the code works fine in PETSc debug mode but fails in non-debug mode. >>>>>>> >>>>>>> I have attached my code. >>>>>>> >>>>>>> Thank you >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> TAY wee-beng >>>>>>> >>>>>>> On 15/4/2014 2:26 AM, Barry Smith wrote: >>>>>>> Please send the code that creates da_w and the declarations of w_array >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> On Apr 14, 2014, at 9:40 AM, TAY wee-beng >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> Hi Barry, >>>>>>> >>>>>>> I'm not too sure how to do it. I'm running mpi. So I run: >>>>>>> >>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>> >>>>>>> I got the msg below. Before the gdb windows appear (thru x11), the program aborts. >>>>>>> >>>>>>> Also I tried running in another cluster and it worked. Also tried in the current cluster in debug mode and it worked too. >>>>>>> >>>>>>> mpirun -n 4 ./a.out -start_in_debugger >>>>>>> -------------------------------------------------------------------------- >>>>>>> An MPI process has executed an operation involving a call to the >>>>>>> "fork()" system call to create a child process. Open MPI is currently >>>>>>> operating in a condition that could result in memory corruption or >>>>>>> other system errors; your MPI job may hang, crash, or produce silent >>>>>>> data corruption. The use of fork() (or system() or other calls that >>>>>>> create child processes) is strongly discouraged. 
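Coming back to the explicit-shape "trick" in point 4 above: one way to keep explicit bounds without hand-building start_indices/end_indices is to declare the dummy arrays directly from the corner information the DMDA reports, taking the values from DMDAGetCorners() when the arrays come from global vectors and from DMDAGetGhostCorners() when they come from local (ghosted) vectors, so the declared extents always match the array that was actually passed in. A minimal sketch under those assumptions; the module name grid_bounds and the variables xs, ys, zs, xm, ym, zm are illustrative, not taken from the attached code:

subroutine uvw_array_change(u,v,w)
  ! xs,ys,zs,xm,ym,zm are assumed to have been stored once after calling
  ! DMDAGetCorners(da_u,xs,ys,zs,xm,ym,zm,ierr) for global vectors, or
  ! DMDAGetGhostCorners(...) for local vectors, so that the extents below
  ! match the array actually handed out by DMDAVecGetArrayF90.
  use grid_bounds, only: xs,ys,zs,xm,ym,zm   ! hypothetical module
  implicit none
  real(8), intent(inout) :: u(xs:xs+xm-1, ys:ys+ym-1, zs:zs+zm-1)
  real(8), intent(inout) :: v(xs:xs+xm-1, ys:ys+ym-1, zs:zs+zm-1)
  real(8), intent(inout) :: w(xs:xs+xm-1, ys:ys+ym-1, zs:zs+zm-1)
  integer :: i, j, k
  do k = zs, zs+zm-1
     do j = ys, ys+ym-1
        do i = xs, xs+xm-1
           u(i,j,k) = 0.d0      ! touch only the indices this process was given
           v(i,j,k) = 0.d0
           w(i,j,k) = 0.d0
        end do
     end do
  end do
end subroutine uvw_array_change

If the crash persists even with matching extents, that points back at memory corruption elsewhere, which a debug build under valgrind is the most reliable way to catch.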
>>>>>>> >>>>>>> The process that invoked fork was: >>>>>>> >>>>>>> Local host: n12-76 (PID 20235) >>>>>>> MPI_COMM_WORLD rank: 2 >>>>>>> >>>>>>> If you are *absolutely sure* that your application will successfully >>>>>>> and correctly survive a call to fork(), you may disable this warning >>>>>>> by setting the mpi_warn_on_fork MCA parameter to 0. >>>>>>> -------------------------------------------------------------------------- >>>>>>> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76 >>>>>>> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76 >>>>>>> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76 >>>>>>> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76 >>>>>>> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork >>>>>>> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>>>>>> >>>>>>> .... >>>>>>> >>>>>>> 1 >>>>>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>> [1]PETSC ERROR: or see >>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or tryhttp://valgrind.org >>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>> [1]PETSC ERROR: to get more information on the crash. >>>>>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>> [3]PETSC ERROR: ------------------------------------------------------------------------ >>>>>>> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>>>> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>>>> [3]PETSC ERROR: or see >>>>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or tryhttp://valgrind.org >>>>>>> on GNU/linux and Apple Mac OS X to find memory corruption errors >>>>>>> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >>>>>>> [3]PETSC ERROR: to get more information on the crash. >>>>>>> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null) >>>>>>> >>>>>>> ... >>>>>>> Thank you. >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> TAY wee-beng >>>>>>> >>>>>>> On 14/4/2014 9:05 PM, Barry Smith wrote: >>>>>>> >>>>>>> Because IO doesn?t always get flushed immediately it may not be hanging at this point. It is better to use the option -start_in_debugger then type cont in each debugger window and then when you think it is ?hanging? do a control C in each debugger window and type where to see where each process is you can also look around in the debugger at variables to see why it is ?hanging? at that point. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> This routines don?t have any parallel communication in them so are unlikely to hang. >>>>>>> >>>>>>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng >>>>>>> >>>>>>> >>>>>>> >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> My code hangs and I added in mpi_barrier and print to catch the bug. 
I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90. >>>>>>> >>>>>>> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3" >>>>>>> call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4" >>>>>>> call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5" >>>>>>> call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6" >>>>>>> call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7" >>>>>>> call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr) >>>>>>> call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8" >>>>>>> call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr) >>>>>>> -- >>>>>>> Thank you. >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> TAY wee-beng >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>>> -- Norbert Wiener >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>> -- Norbert Wiener >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener From bsmith at mcs.anl.gov Wed Apr 23 20:41:43 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Apr 2014 20:41:43 -0500 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: <535864F9.5010505@gmail.com> References: <5357854E.10502@gmail.com> <53578E1F.4000709@gmail.com> <535864F9.5010505@gmail.com> Message-ID: The numbers like sum = 1.9762625833649862e-323 rho = 1.9762625833649862e-323 beta = 1.600807474747106e-316 omega = 1.6910452843641213e-315 d2 = 1.5718032521948665e-316 are nonsense. They would generally indicate that something is wrong, but unfortunately don?t point to exactly what is wrong. Barry On Apr 23, 2014, at 8:12 PM, TAY wee-beng wrote: > On 23/4/2014 6:00 PM, Matthew Knepley wrote: >> On Wed, Apr 23, 2014 at 5:55 AM, TAY wee-beng wrote: >> Hi, >> >> Just to update that I managed to compare the values by reducing the problem size to hundred plus values. The matrix and vector are almost the same compared to my win7 output. >> >> Run in the debugger and get a stack trace, > Hi, > > I use -start_in_debugger option and it hangs at this point: > > Program received signal SIGFPE, Arithmetic exception. 
> VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 > 71 ierr = PetscLogFlops(2.0*xin->map->n-1);CHKERRQ(ierr); > (gdb) where > #0 VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 > #1 0x0000000001f1d8b5 in VecDot_MPI (xin=0x14ad3940, yin=0x14ad8fb0, > z=0x7fff24cd7f40) at pbvec.c:15 > #2 0x0000000001edfa14 in VecDot (x=0x14ad3940, y=0x14ad8fb0, > val=0x7fff24cd7f40) at rvector.c:128 > #3 0x00000000025cf539 in KSPSolve_BCGS (ksp=0x1479d910) at bcgs.c:85 > #4 0x0000000002576687 in KSPSolve (ksp=0x1479d910, b=0x1476b110, x=0x14771890) > at itfunc.c:441 > #5 0x0000000001d859d9 in kspsolve_ (ksp=0x395a548, b=0x395a650, x=0x3959f38, > __ierr=0x384d8b8) at itfuncf.c:219 > #6 0x0000000001c37def in petsc_solvers_mp_semi_momentum_simple_xyz_ () > #7 0x0000000001c97c02 in fractional_initial_mp_fractional_steps_ () > #8 0x0000000001cbc336 in ibm3d_high_re () at ibm3d_high_Re.F90:675 > #9 0x00000000004093dc in main () > (gdb) > > Is this what you mean by a stack trace? > > I have also used "bt full" and I have attached a more detailed output. >> >> Matt >> >> Also tried valgrind but it aborts almost immediately: >> >> valgrind --leak-check=yes ./a.out >> ==17603== Memcheck, a memory error detector. >> ==17603== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward et al. >> ==17603== Using LibVEX rev 1658, a library for dynamic binary translation. >> ==17603== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP. >> ==17603== Using valgrind-3.2.1, a dynamic binary instrumentation framework. >> ==17603== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al. >> ==17603== For more details, rerun with: -v >> ==17603== >> --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 >> --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 >> vex amd64->IR: unhandled instruction bytes: 0xF 0xAE 0x85 0xF0 >> ==17603== valgrind: Unrecognised instruction at address 0x5DD0F0E. >> ==17603== Your program just tried to execute an instruction that Valgrind >> ==17603== did not recognise. There are two possible reasons for this. >> ==17603== 1. Your program has a bug and erroneously jumped to a non-code >> ==17603== location. If you are running Memcheck and you just saw a >> ==17603== warning about a bad jump, it's probably your program's fault. >> ==17603== 2. The instruction is legitimate but Valgrind doesn't handle it, >> ==17603== i.e. it's Valgrind's fault. If you think this is the case or >> ==17603== you are not sure, please let us know and we'll try to fix it. >> ==17603== Either way, Valgrind will now raise a SIGILL signal which will >> ==17603== probably kill your program. >> forrtl: severe (168): Program Exception - illegal instruction >> Image PC Routine Line Source >> libifcore.so.5 0000000005DD0F0E Unknown Unknown Unknown >> libifcore.so.5 0000000005DD0DC7 Unknown Unknown Unknown >> a.out 0000000001CB4CBB Unknown Unknown Unknown >> a.out 00000000004093DC Unknown Unknown Unknown >> libc.so.6 000000369141D974 Unknown Unknown Unknown >> a.out 00000000004092E9 Unknown Unknown Unknown >> ==17603== >> ==17603== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 5 from 1) >> ==17603== malloc/free: in use at exit: 239 bytes in 8 blocks. >> ==17603== malloc/free: 31 allocs, 23 frees, 31,388 bytes allocated. >> ==17603== For counts of detected errors, rerun with: -v >> ==17603== searching for pointers to 8 not-freed blocks. >> ==17603== checked 2,340,280 bytes. 
>> ==17603== >> ==17603== LEAK SUMMARY: >> ==17603== definitely lost: 0 bytes in 0 blocks. >> ==17603== possibly lost: 0 bytes in 0 blocks. >> ==17603== still reachable: 239 bytes in 8 blocks. >> ==17603== suppressed: 0 bytes in 0 blocks. >> ==17603== Reachable blocks (those to which a pointer was found) are not shown. >> ==17603== To see them, rerun with: --show-reachable=yes >> >> Thank you >> >> Yours sincerely, >> >> TAY wee-beng >> >> On 23/4/2014 5:18 PM, TAY wee-beng wrote: >> Hi, >> >> My code was found to be giving error answer in one of the cluster, even on single processor. No error msg was given. It used to be working fine. >> >> I run the debug version and it gives the error msg: >> >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [0]PETSC ERROR: likely location of problem given in stack below >> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, >> [0]PETSC ERROR: INSTEAD the line number of the start of the function >> [0]PETSC ERROR: is given. >> [0]PETSC ERROR: [0] VecDot_Seq line 62 src/vec/vec/impls/seq/bvec1.c >> [0]PETSC ERROR: [0] VecDot_MPI line 14 src/vec/vec/impls/mpi/pbvec.c >> [0]PETSC ERROR: [0] VecDot line 118 src/vec/vec/interface/rvector.c >> [0]PETSC ERROR: [0] KSPSolve_BCGS line 39 src/ksp/ksp/impls/bcgs/bcgs.c >> [0]PETSC ERROR: [0] KSPSolve line 356 src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [0]PETSC ERROR: Signal received! >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> >> It happens after KSPSolve. There was no problem on other cluster. So how should I debug to find the error? >> >> I tried to compare the input matrix and vector between different cluster but there are too many values. >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > From zonexo at gmail.com Wed Apr 23 20:44:31 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 24 Apr 2014 09:44:31 +0800 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: References: <5357854E.10502@gmail.com> <53578E1F.4000709@gmail.com> <535864F9.5010505@gmail.com> Message-ID: <53586C7F.8010909@gmail.com> On 24/4/2014 9:41 AM, Barry Smith wrote: > The numbers like > > sum = 1.9762625833649862e-323 > > rho = 1.9762625833649862e-323 > > beta = 1.600807474747106e-316 > omega = 1.6910452843641213e-315 > d2 = 1.5718032521948665e-316 > > are nonsense. They would generally indicate that something is wrong, but unfortunately don?t point to exactly what is wrong. Hi, In that case, how do I troubleshoot? Any suggestions? Thanks. 
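Comparing every entry of the matrix and right-hand side across clusters is impractical, but a few norms are cheap and should agree to roundoff if the inputs really are identical; a NaN, an Inf, or a clearly different value printed just before KSPSolve points at a problem upstream of the solver (assembly, input data, or memory corruption). A minimal Fortran sketch; A_mat, b_rhs and myid are placeholders for whatever the actual code uses:

      PetscReal anorm, bnorm
      PetscErrorCode ierr

      ! Frobenius norm of the assembled operator and 2-norm of the right-hand side;
      ! these should match across machines if the inputs are really the same
      call MatNorm(A_mat, NORM_FROBENIUS, anorm, ierr)
      call VecNorm(b_rhs, NORM_2, bnorm, ierr)
      if (myid == 0) print *, 'matrix Frobenius norm =', anorm, ', rhs 2-norm =', bnorm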
> > > Barry > > On Apr 23, 2014, at 8:12 PM, TAY wee-beng wrote: > >> On 23/4/2014 6:00 PM, Matthew Knepley wrote: >>> On Wed, Apr 23, 2014 at 5:55 AM, TAY wee-beng wrote: >>> Hi, >>> >>> Just to update that I managed to compare the values by reducing the problem size to hundred plus values. The matrix and vector are almost the same compared to my win7 output. >>> >>> Run in the debugger and get a stack trace, >> Hi, >> >> I use -start_in_debugger option and it hangs at this point: >> >> Program received signal SIGFPE, Arithmetic exception. >> VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 >> 71 ierr = PetscLogFlops(2.0*xin->map->n-1);CHKERRQ(ierr); >> (gdb) where >> #0 VecDot_Seq (xin=0x14ad3940, yin=0x14ad8fb0, z=0x7fff24cd79b8) at bvec1.c:71 >> #1 0x0000000001f1d8b5 in VecDot_MPI (xin=0x14ad3940, yin=0x14ad8fb0, >> z=0x7fff24cd7f40) at pbvec.c:15 >> #2 0x0000000001edfa14 in VecDot (x=0x14ad3940, y=0x14ad8fb0, >> val=0x7fff24cd7f40) at rvector.c:128 >> #3 0x00000000025cf539 in KSPSolve_BCGS (ksp=0x1479d910) at bcgs.c:85 >> #4 0x0000000002576687 in KSPSolve (ksp=0x1479d910, b=0x1476b110, x=0x14771890) >> at itfunc.c:441 >> #5 0x0000000001d859d9 in kspsolve_ (ksp=0x395a548, b=0x395a650, x=0x3959f38, >> __ierr=0x384d8b8) at itfuncf.c:219 >> #6 0x0000000001c37def in petsc_solvers_mp_semi_momentum_simple_xyz_ () >> #7 0x0000000001c97c02 in fractional_initial_mp_fractional_steps_ () >> #8 0x0000000001cbc336 in ibm3d_high_re () at ibm3d_high_Re.F90:675 >> #9 0x00000000004093dc in main () >> (gdb) >> >> Is this what you mean by a stack trace? >> >> I have also used "bt full" and I have attached a more detailed output. >>> Matt >>> >>> Also tried valgrind but it aborts almost immediately: >>> >>> valgrind --leak-check=yes ./a.out >>> ==17603== Memcheck, a memory error detector. >>> ==17603== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward et al. >>> ==17603== Using LibVEX rev 1658, a library for dynamic binary translation. >>> ==17603== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP. >>> ==17603== Using valgrind-3.2.1, a dynamic binary instrumentation framework. >>> ==17603== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al. >>> ==17603== For more details, rerun with: -v >>> ==17603== >>> --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 >>> --17603-- DWARF2 CFI reader: unhandled CFI instruction 0:10 >>> vex amd64->IR: unhandled instruction bytes: 0xF 0xAE 0x85 0xF0 >>> ==17603== valgrind: Unrecognised instruction at address 0x5DD0F0E. >>> ==17603== Your program just tried to execute an instruction that Valgrind >>> ==17603== did not recognise. There are two possible reasons for this. >>> ==17603== 1. Your program has a bug and erroneously jumped to a non-code >>> ==17603== location. If you are running Memcheck and you just saw a >>> ==17603== warning about a bad jump, it's probably your program's fault. >>> ==17603== 2. The instruction is legitimate but Valgrind doesn't handle it, >>> ==17603== i.e. it's Valgrind's fault. If you think this is the case or >>> ==17603== you are not sure, please let us know and we'll try to fix it. >>> ==17603== Either way, Valgrind will now raise a SIGILL signal which will >>> ==17603== probably kill your program. 
>>> forrtl: severe (168): Program Exception - illegal instruction >>> Image PC Routine Line Source >>> libifcore.so.5 0000000005DD0F0E Unknown Unknown Unknown >>> libifcore.so.5 0000000005DD0DC7 Unknown Unknown Unknown >>> a.out 0000000001CB4CBB Unknown Unknown Unknown >>> a.out 00000000004093DC Unknown Unknown Unknown >>> libc.so.6 000000369141D974 Unknown Unknown Unknown >>> a.out 00000000004092E9 Unknown Unknown Unknown >>> ==17603== >>> ==17603== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 5 from 1) >>> ==17603== malloc/free: in use at exit: 239 bytes in 8 blocks. >>> ==17603== malloc/free: 31 allocs, 23 frees, 31,388 bytes allocated. >>> ==17603== For counts of detected errors, rerun with: -v >>> ==17603== searching for pointers to 8 not-freed blocks. >>> ==17603== checked 2,340,280 bytes. >>> ==17603== >>> ==17603== LEAK SUMMARY: >>> ==17603== definitely lost: 0 bytes in 0 blocks. >>> ==17603== possibly lost: 0 bytes in 0 blocks. >>> ==17603== still reachable: 239 bytes in 8 blocks. >>> ==17603== suppressed: 0 bytes in 0 blocks. >>> ==17603== Reachable blocks (those to which a pointer was found) are not shown. >>> ==17603== To see them, rerun with: --show-reachable=yes >>> >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> On 23/4/2014 5:18 PM, TAY wee-beng wrote: >>> Hi, >>> >>> My code was found to be giving error answer in one of the cluster, even on single processor. No error msg was given. It used to be working fine. >>> >>> I run the debug version and it gives the error msg: >>> >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero >>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >>> [0]PETSC ERROR: likely location of problem given in stack below >>> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, >>> [0]PETSC ERROR: INSTEAD the line number of the start of the function >>> [0]PETSC ERROR: is given. >>> [0]PETSC ERROR: [0] VecDot_Seq line 62 src/vec/vec/impls/seq/bvec1.c >>> [0]PETSC ERROR: [0] VecDot_MPI line 14 src/vec/vec/impls/mpi/pbvec.c >>> [0]PETSC ERROR: [0] VecDot line 118 src/vec/vec/interface/rvector.c >>> [0]PETSC ERROR: [0] KSPSolve_BCGS line 39 src/ksp/ksp/impls/bcgs/bcgs.c >>> [0]PETSC ERROR: [0] KSPSolve line 356 src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>> [0]PETSC ERROR: Signal received! >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> >>> It happens after KSPSolve. There was no problem on other cluster. So how should I debug to find the error? >>> >>> I tried to compare the input matrix and vector between different cluster but there are too many values. >>> >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
>>> -- Norbert Wiener >> From rlmackie862 at gmail.com Wed Apr 23 20:48:10 2014 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 23 Apr 2014 18:48:10 -0700 Subject: [petsc-users] inserting vector into row of dense matrix Message-ID: <8463838D-27E1-43E7-B33D-4906A39D912F@gmail.com> I have a 3D rectangular grid over which I am doing some computations. I have created a 3D DA for that grid, and I can put the results of the computations into a global vector obtained via DMGetGlobalVector. Now, I need to do these computations for several realizations, and I want to keep all the results in order to do matrix-vector multiplies later. So, I have created a dense parallel matrix using MatCreateDense, but specifying the same layout for the columns as for the global vector on the DA (using VecGetLocalSize). My question is what is the best way to insert the global vector obtained from the DA into a row of the dense parallel matrix? My first attempt was to simply use MatSetValues on the values on each processor as I computed them (rather than putting into a global vector), then using MatAssemblyBegin/End and that worked fine, but I thought maybe that wasn't the most efficient way to do it. My next thought was to use VecGetArrayF90 to access the elements of the global vector, and MatDenseGetArrayF90 on the dense matrix, and then just copy the values over (as I read on some email in the archives), but I suspect I am not using MatDenseGetArrayF90 correctly because I am getting a memory corruption error. I wonder if MatDenseGetArrayF90 can be used in the way I am thinking, and if it can, then it must mean that I am not doing something correctly. Or is there a better way to insert an entire vector into each row of a dense parallel matrix? Thanks, Randy From bsmith at mcs.anl.gov Wed Apr 23 21:04:06 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Apr 2014 21:04:06 -0500 Subject: [petsc-users] inserting vector into row of dense matrix In-Reply-To: <8463838D-27E1-43E7-B33D-4906A39D912F@gmail.com> References: <8463838D-27E1-43E7-B33D-4906A39D912F@gmail.com> Message-ID: Randy, The problem is that MatMPIDense stores each row on a single process; while the solution in DMGetGlobalVector() will be spread over all processes. Thus you could use MatSetValues() as you have done to put the values in but it will be terribly inefficient. What I suggest is to instead store the solutions in the COLUMNS of the dense matrix and use MatMultTranspose() to do the matrix-vector products you wish to do later. Then transferring from the DMGetGlobalVector to a column of the MatMPIDense will be a completely local copy. You can use MatDenseGetF90() to give you access to the matrix and VecGetArrayF90 to get access to the global vector and then just copy the values from the vec to the particular column of the dense matrix you want to put them. Each process will simply copy over its part of the global vector to its part of the column of the dense matrix. Good luck, Barry On Apr 23, 2014, at 8:48 PM, Randall Mackie wrote: > I have a 3D rectangular grid over which I am doing some computations. I have created a 3D DA for that grid, and I can put the results of the computations into a global vector obtained via DMGetGlobalVector. > > Now, I need to do these computations for several realizations, and I want to keep all the results in order to do matrix-vector multiplies later. 
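To make the column-wise layout described above concrete, here is a minimal Fortran sketch of the local copy from the DMDA global vector into one column of the dense matrix; the names sol_mat, gvec, nlocal and the 1-based column index jcol are illustrative, and error checking is omitted:

      PetscScalar, pointer :: vec_arr(:)     ! local piece of the DMDA global vector
      PetscScalar, pointer :: mat_arr(:,:)   ! local rows x all columns of the dense matrix
      PetscInt nlocal, jcol
      PetscErrorCode ierr

      call VecGetLocalSize(gvec, nlocal, ierr)
      call VecGetArrayF90(gvec, vec_arr, ierr)
      call MatDenseGetArrayF90(sol_mat, mat_arr, ierr)

      ! purely local: this process's part of the vector becomes its part of column jcol
      mat_arr(1:nlocal, jcol) = vec_arr(1:nlocal)

      call MatDenseRestoreArrayF90(sol_mat, mat_arr, ierr)
      call VecRestoreArrayF90(gvec, vec_arr, ierr)

With one realization stored per column, MatMultTranspose() then supplies the same products that storing the realizations as rows was meant to provide.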
> > So, I have created a dense parallel matrix using MatCreateDense, but specifying the same layout for the columns as for the global vector on the DA (using VecGetLocalSize). > > My question is what is the best way to insert the global vector obtained from the DA into a row of the dense parallel matrix? > > My first attempt was to simply use MatSetValues on the values on each processor as I computed them (rather than putting into a global vector), then using MatAssemblyBegin/End and that worked fine, but I thought maybe that wasn't the most efficient way to do it. > > My next thought was to use VecGetArrayF90 to access the elements of the global vector, and MatDenseGetArrayF90 on the dense matrix, and then just copy the values over (as I read on some email in the archives), but I suspect I am not using MatDenseGetArrayF90 correctly because I am getting a memory corruption error. > > I wonder if MatDenseGetArrayF90 can be used in the way I am thinking, and if it can, then it must mean that I am not doing something correctly. Or is there a better way to insert an entire vector into each row of a dense parallel matrix? > > Thanks, Randy > > From jed at jedbrown.org Thu Apr 24 01:19:38 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 24 Apr 2014 00:19:38 -0600 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: <53586C7F.8010909@gmail.com> References: <5357854E.10502@gmail.com> <53578E1F.4000709@gmail.com> <535864F9.5010505@gmail.com> <53586C7F.8010909@gmail.com> Message-ID: <877g6f6yh1.fsf@jedbrown.org> TAY wee-beng writes: > On 24/4/2014 9:41 AM, Barry Smith wrote: >> The numbers like >> >> sum = 1.9762625833649862e-323 >> >> rho = 1.9762625833649862e-323 >> >> beta = 1.600807474747106e-316 >> omega = 1.6910452843641213e-315 >> d2 = 1.5718032521948665e-316 >> >> are nonsense. They would generally indicate that something is wrong, but unfortunately don?t point to exactly what is wrong. > > Hi, > > In that case, how do I troubleshoot? Any suggestions? You have memory corruption. Use Valgrind, perhaps compiler checks like -fmudflap, and location watchpoints in the debugger. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From romain.veltz at inria.fr Thu Apr 24 01:28:12 2014 From: romain.veltz at inria.fr (Veltz Romain) Date: Thu, 24 Apr 2014 08:28:12 +0200 Subject: [petsc-users] setting Sundials options within petsc4py In-Reply-To: <8738h38wab.fsf@jedbrown.org> References: <8738h38wab.fsf@jedbrown.org> Message-ID: On Apr 24, 2014, at 1:23 AM, Jed Brown wrote: > Veltz Romain writes: > >> Dear pets users, >> >> I have been trying to set some sundials options from petsc4py but I am >> unsuccessful. I hope somebody will have a good pointer. For example I >> can pass -ts_sundials_type bdf to python in the command line. >> >> I would like to call TSSundialsSetTolerance to set the >> tolerance. Hence I need to pass some options in the command line (I >> don't know if this is possible for this particular function) or call >> TSSundialsSetTolerance(?) from petsc4py. So far, I am clueless. > > These functions are not yet implemented in petsc4py. You could clone > the repository (https://bitbucket.org/petsc/petsc4py) and add the > interfaces you need. It should be straightforward based on similar > interfaces that already exist. We can accept patches/pull requests, but > don't always have time to ensure that every function is exposed via > petsc4py. 
OK will try, hopefully I have that python level. That would be adding changes mainly to petsc4py / src / PETSc / TS.pyx right?

>
>> Also, I would like to limit the step sizes. This is not documented in petsc_manual.pdf. But in sundial.c, there is PetscErrorCode TSSundialsSetMaxTimeStep_Sundials(TS ts,PetscReal maxdt) which I would like to call also from petsc4py.
>>
>> Thank you for your help,
>>
>> Best regards,
>>
>>
>> Veltz Romain
>>
>> Neuromathcomp Project Team
>> Inria Sophia Antipolis Méditerranée
>> 2004 Route des Lucioles-BP 93
>> FR-06902 Sophia Antipolis
>> http://www-sop.inria.fr/members/Romain.Veltz/public_html/Home.html

Veltz Romain

Neuromathcomp Project Team
Inria Sophia Antipolis Méditerranée
2004 Route des Lucioles-BP 93
FR-06902 Sophia Antipolis
http://www-sop.inria.fr/members/Romain.Veltz/public_html/Home.html

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jed at jedbrown.org Thu Apr 24 01:35:08 2014
From: jed at jedbrown.org (Jed Brown)
Date: Thu, 24 Apr 2014 00:35:08 -0600
Subject: [petsc-users] setting Sundials options within petsc4py
In-Reply-To: 
References: <8738h38wab.fsf@jedbrown.org>
Message-ID: <87y4yv5j6r.fsf@jedbrown.org>

Veltz Romain writes:
> OK will try, hopefully I have that python level. That would be adding changes mainly to
> petsc4py / src / PETSc / TS.pyx right?

Yes.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From wumeng07maths at qq.com Thu Apr 24 05:26:19 2014
From: wumeng07maths at qq.com (=?ISO-8859-1?B?T28gICAgICA=?=)
Date: Thu, 24 Apr 2014 18:26:19 +0800
Subject: [petsc-users] PETSC ERROR: KSPComputeEigenvalues()
Message-ID: 

Hi,

I met a problem when I tried to use "KSPComputeEigenvalues()". The following is the solving part of my code. When this programme runs to the line " KSPComputeEigenvalues(ksp, numberEigen, r, c,neig);", PETSc errors appear.

Do you have any suggestions?

////////////////////////////////////////////
//Solving the Linear System Part======================================================
cout<<"solve the Linear system"<<endl;
t0=(double)clock();
KSP ksp;
PC pc;
KSPCreate(PETSC_COMM_WORLD, &ksp);
// KSPSetType(ksp, KSPCG);
KSPSetType(ksp, KSPGMRES);
KSPSetOperators(ksp, coeff_matrix, coeff_matrix, DIFFERENT_NONZERO_PATTERN);

PetscReal rtol=(1.e-4)*pow(h,4.0);
PetscReal stol=(1.e-3)*pow(h,4.0);

KSPSetTolerances(ksp, rtol, stol, PETSC_DEFAULT, PETSC_DEFAULT);

KSPGetPC(ksp, &pc);
PCSetType(pc, PCGAMG);
KSPSetFromOptions(ksp);

//====KSPSetComputeEigenvalues()
KSPSetComputeEigenvalues(ksp, PETSC_TRUE);
//KSPSetComputeEigenvalues(ksp, PETSC_FALSE);
//=============================

KSPSetUp(ksp);

KSPSolve(ksp, load_vect, m_solut);

cout<<"Before KSPComputeEignvalues()"<<endl;
//======KSPComputeEigenvalues()
numberEigen=1;//4*P_FEMMesh->P_Vertices->size();
KSPComputeEigenvalues(ksp,numberEigen,r,c,neig);
cout<<"After KSPComputeEignvalues()"<<endl;

From bsmith at mcs.anl.gov Thu Apr 24 06:32:46 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 24 Apr 2014 06:32:46 -0500
Subject: [petsc-users] PETSC ERROR: KSPComputeEigenvalues()
In-Reply-To: 
References: 
Message-ID: <876AC8F3-037F-487A-B06C-A9D161BE8C5E@mcs.anl.gov>

You must allocate r and c with a length of at least numberEigen before calling the routine.

Barry

BTW: I wouldn't use a numberEigen of 1; even if you really care about only one eigenvalue use a numberEigen of say 10.

BTW 2: for accurate computations of eigenvalues you should use the package SLEPc which has many high quality eigenvalue routines. The ones in PETSc are only for crude estimates to compute the conditioning of operators and preconditioned operators.

On Apr 24, 2014, at 5:26 AM, Oo wrote:

>
> Hi,
>
> I met a problem when I tried to use "KSPComputeEigenvalues()".
>
> The following is the solving part of my code.
> When this programme runs to the line
> " KSPComputeEigenvalues(ksp, numberEigen, r, c,neig);"
> Then, PETSc errors appear.
>
> Do you have any suggestions?
> > //////////////////////////////////////////// > //Solving the Linear System Part====================================================== > cout<<"solve the Linear system"< t0=(double)clock(); > KSP ksp; > PC pc; > KSPCreate(PETSC_COMM_WORLD, &ksp); > // KSPSetType(ksp, KSPCG); > KSPSetType(ksp, KSPGMRES); > KSPSetOperators(ksp, coeff_matrix, coeff_matrix, DIFFERENT_NONZERO_PATTERN); > > PetscReal rtol=(1.e-4)*pow(h,4.0); > PetscReal stol=(1.e-3)*pow(h,4.0); > > KSPSetTolerances(ksp, rtol, stol, PETSC_DEFAULT, PETSC_DEFAULT); > > KSPGetPC(ksp, &pc); > PCSetType(pc, PCGAMG); > KSPSetFromOptions(ksp); > > > //====KSPSetComputeEigenvalues() > KSPSetComputeEigenvalues(ksp, PETSC_TRUE); > //KSPSetComputeEigenvalues(ksp, PETSC_FALSE); > //============================= > > > KSPSetUp(ksp); > > KSPSolve(ksp, load_vect, m_solut); > > > cout<<"Before KSPComputeEignvalues()"< //======KSPComputeEigenvalues() > numberEigen=1;//4*P_FEMMesh->P_Vertices->size(); > KSPComputeEigenvalues(ksp,numberEigen,r,c,neig); > cout<<"After KSPComputeEignvalues()"< > cout<<"EignValues:"< for(unsigned int j=0; j { > cout< } > //======================== > > > > Here is Petsc errors. > when the programme runs to > "KSPComputeEigenvalues(ksp,numberEigen,r,c,neig);" > These errors appear. > > PETSC ERRORs > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Null argument, when expecting valid pointer! > [0]PETSC ERROR: Null Pointer: Parameter # 3! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: /Users/wumeng/MyWork/BuildMSplineTools/bin/msplinePDE_PFEM_2 on a arch-darwin-c-debug named vis032b.sophia.inria.fr by wumeng Thu Apr 24 11:50:16 2014 > [0]PETSC ERROR: Libraries linked from /Users/wumeng/MyWork/PETSc/arch-darwin-c-debug/lib > [0]PETSC ERROR: Configure run at Wed Dec 18 15:31:31 2013 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: KSPComputeEigenvalues() line 118 in /Users/wumeng/MyWork/PETSc/src/ksp/ksp/interface/itfunc.c > After KSPComputeEignvalues() > EignValues: > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. 
> [0]PETSC ERROR: [0] KSPComputeEigenvalues line 116 /Users/wumeng/MyWork/PETSc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: /Users/wumeng/MyWork/BuildMSplineTools/bin/msplinePDE_PFEM_2 on a arch-darwin-c-debug named vis032b.sophia.inria.fr by wumeng Thu Apr 24 11:50:16 2014 > [0]PETSC ERROR: Libraries linked from /Users/wumeng/MyWork/PETSc/arch-darwin-c-debug/lib > [0]PETSC ERROR: Configure run at Wed Dec 18 15:31:31 2013 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > Thanks, > > M. From wumeng07maths at qq.com Thu Apr 24 11:57:01 2014 From: wumeng07maths at qq.com (=?ISO-8859-1?B?T28gICAgICA=?=) Date: Fri, 25 Apr 2014 00:57:01 +0800 Subject: [petsc-users] Convergence_Eigenvalues_k=3 Message-ID: Hi, For analysis the convergence of linear solver, I meet a problem. One is the list of Eigenvalues whose linear system which has a convergence solution. The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3). Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case? Thanks, Meng -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Convergence_Eigenvalues_k=3.txt Type: application/octet-stream Size: 20302 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NotConvergence_Eigenvalues.txt Type: application/octet-stream Size: 52053 bytes Desc: not available URL: From knepley at gmail.com Thu Apr 24 16:53:42 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Apr 2014 17:53:42 -0400 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: Message-ID: On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > Hi, > > For analysis the convergence of linear solver, > I meet a problem. > > One is the list of Eigenvalues whose linear system which has a convergence > solution. > The other is the list of Eigenvalues whose linear system whose solution > does not converge (convergenceReason=-3). > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having? Matt > Do you know what kind of method can be used to obtain a convergence > solution for our non-convergence case? > > Thanks, > > Meng > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov Thu Apr 24 17:05:16 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 24 Apr 2014 17:05:16 -0500
Subject: [petsc-users] Convergence_Eigenvalues_k=3
In-Reply-To: 
References: 
Message-ID: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov>

There are also a great many "bogus" numbers that have no meaning, and many zeros. Most of these are not the eigenvalues of anything.

Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output

Barry

On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote:

> On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote:
>
> Hi,
>
> For analysis the convergence of linear solver,
> I meet a problem.
>
> One is the list of Eigenvalues whose linear system which has a convergence solution.
> The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3).
>
> These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having?
>
> Matt
>
> Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case?
>
> Thanks,
>
> Meng
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener

From wumeng07maths at qq.com Thu Apr 24 17:09:59 2014
From: wumeng07maths at qq.com (=?ISO-8859-1?B?T28gICAgICA=?=)
Date: Fri, 25 Apr 2014 06:09:59 +0800
Subject: [petsc-users] Convergence_Eigenvalues_k=3
In-Reply-To: 
References: 
Message-ID: 

I tried to solve linear systems with PETSc. There are two linear systems. We got a solution for one of them. For the other one, the PETSc solution does not converge. Thus, I output the eigenvalues of these two coefficient matrices (in the attachments) in order to see whether there is a difference between them. Maybe these eigenvalues can help me understand why that system doesn't converge. However, I cannot find a difference.

Do you have any suggestions for solving this non-converging linear system?

Here is my code:

PetscReal rtol=(1.e-4)*pow(h,4.0);
PetscReal stol=(1.e-3)*pow(h,4.0);
KSPSetTolerances(ksp, rtol, stol, PETSC_DEFAULT, PETSC_DEFAULT);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCGAMG);
KSPSetFromOptions(ksp);
KSPSetUp(ksp);
KSPSolve(ksp, load_vect, m_solut);
KSPGetConvergedReason(ksp, &reason);

Thanks,

Meng

------------------ Original ------------------
From: "Matthew Knepley";;
Send time: Friday, Apr 25, 2014 5:53 AM
To: "Oo ";
Cc: "petsc-users";
Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3

On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote:

Hi,

For analysis the convergence of linear solver, I meet a problem.

One is the list of Eigenvalues whose linear system which has a convergence solution.
The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3).

These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having?

Matt

Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case?

Thanks,

Meng

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

-------------- next part --------------
An HTML attachment was scrubbed...
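Since the failing case reports convergenceReason=-3, which is KSP_DIVERGED_ITS in the KSPConvergedReason enum (the maximum iteration count was reached before the tolerances were met), it helps to print the reason, the iteration count and the final residual norm right after KSPSolve; note also that rtol=(1.e-4)*pow(h,4.0) tightens very quickly as h shrinks, which by itself can push a GMRES/GAMG solve past the default iteration limit. A minimal sketch (shown in Fortran; the C calls have the same names, and ksp, load_vect, m_solut mirror the snippet above):

      KSPConvergedReason reason
      PetscInt its
      PetscReal rnorm
      PetscErrorCode ierr

      call KSPSolve(ksp, load_vect, m_solut, ierr)
      call KSPGetConvergedReason(ksp, reason, ierr)   ! -3 corresponds to KSP_DIVERGED_ITS
      call KSPGetIterationNumber(ksp, its, ierr)
      call KSPGetResidualNorm(ksp, rnorm, ierr)
      print *, 'converged reason =', reason, ', iterations =', its, ', final residual norm =', rnorm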
URL: From wumeng07maths at qq.com Thu Apr 24 17:11:31 2014 From: wumeng07maths at qq.com (=?gb18030?B?T28gICAgICA=?=) Date: Fri, 25 Apr 2014 06:11:31 +0800 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> Message-ID: Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value " ? Thanks, Meng ------------------ Original ------------------ From: "Barry Smith";; Date: Apr 25, 2014 To: "Matthew Knepley"; Cc: "Oo "; "petsc-users"; Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 There are also a great deal of ?bogus? numbers that have no meaning and many zeros. Most of these are not the eigenvalues of anything. Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output Barry On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > Hi, > > For analysis the convergence of linear solver, > I meet a problem. > > One is the list of Eigenvalues whose linear system which has a convergence solution. > The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3). > > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having? > > Matt > > Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case? > > Thanks, > > Meng > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener . -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Apr 24 17:20:34 2014 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 25 Apr 2014 00:20:34 +0200 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> Message-ID: On the command line On 25 April 2014 00:11, Oo wrote: > > Where should I put " > -ksp_monitor_true_residual -ksp_monitor_singular_value " ? > > Thanks, > > Meng > > > ------------------ Original ------------------ > *From: * "Barry Smith";; > *Date: * Apr 25, 2014 > *To: * "Matthew Knepley"; > *Cc: * "Oo "; "petsc-users"; > > *Subject: * Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > There are also a great deal of ?bogus? numbers that have no meaning and > many zeros. Most of these are not the eigenvalues of anything. > > Run the two cases with -ksp_monitor_true_residual > -ksp_monitor_singular_value and send the output > > Barry > > > On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > > > Hi, > > > > For analysis the convergence of linear solver, > > I meet a problem. > > > > One is the list of Eigenvalues whose linear system which has a > convergence solution. > > The other is the list of Eigenvalues whose linear system whose solution > does not converge (convergenceReason=-3). > > > > These are just lists of numbers. It does not tell us anything about the > computation. What is the problem you are having? > > > > Matt > > > > Do you know what kind of method can be used to obtain a convergence > solution for our non-convergence case? 
> > > > Thanks, > > > > Meng > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wumeng07maths at qq.com Thu Apr 24 17:24:46 2014 From: wumeng07maths at qq.com (=?gb18030?B?T28gICAgICA=?=) Date: Fri, 25 Apr 2014 06:24:46 +0800 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> Message-ID: Configure PETSC again? ------------------ Original ------------------ From: "Dave May";; Send time: Friday, Apr 25, 2014 6:20 AM To: "Oo "; Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"; Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 On the command line On 25 April 2014 00:11, Oo wrote: Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value " ? Thanks, Meng ------------------ Original ------------------ From: "Barry Smith";; Date: Apr 25, 2014 To: "Matthew Knepley"; Cc: "Oo "; "petsc-users"; Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 There are also a great deal of ?bogus? numbers that have no meaning and many zeros. Most of these are not the eigenvalues of anything. Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output Barry On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > Hi, > > For analysis the convergence of linear solver, > I meet a problem. > > One is the list of Eigenvalues whose linear system which has a convergence solution. > The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3). > > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having? > > Matt > > Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case? > > Thanks, > > Meng > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener . -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Apr 24 17:27:17 2014 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 25 Apr 2014 00:27:17 +0200 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> Message-ID: Absolutely not! Run your code again with the command line args Barry suggested. ./my.app -ksp_monitor_true_residual -ksp_monitor_singular_value As long as you have called KSPSetFromOptions() (which from your code snippet it seems you have) these options will take effect. On 25 April 2014 00:24, Oo wrote: > > Configure PETSC again? > > ------------------ Original ------------------ > *From: * "Dave May";; > *Send time:* Friday, Apr 25, 2014 6:20 AM > *To:* "Oo "; > *Cc:* "Barry Smith"; "Matthew Knepley"< > knepley at gmail.com>; "petsc-users"; > *Subject: * Re: [petsc-users] Convergence_Eigenvalues_k=3 > > On the command line > > > On 25 April 2014 00:11, Oo wrote: > >> >> Where should I put " >> -ksp_monitor_true_residual -ksp_monitor_singular_value " ? 
>> >> Thanks, >> >> Meng >> >> >> ------------------ Original ------------------ >> *From: * "Barry Smith";; >> *Date: * Apr 25, 2014 >> *To: * "Matthew Knepley"; >> *Cc: * "Oo "; "petsc-users"; >> >> *Subject: * Re: [petsc-users] Convergence_Eigenvalues_k=3 >> >> >> There are also a great deal of ?bogus? numbers that have no meaning >> and many zeros. Most of these are not the eigenvalues of anything. >> >> Run the two cases with -ksp_monitor_true_residual >> -ksp_monitor_singular_value and send the output >> >> Barry >> >> >> On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: >> >> > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: >> > >> > Hi, >> > >> > For analysis the convergence of linear solver, >> > I meet a problem. >> > >> > One is the list of Eigenvalues whose linear system which has a >> convergence solution. >> > The other is the list of Eigenvalues whose linear system whose solution >> does not converge (convergenceReason=-3). >> > >> > These are just lists of numbers. It does not tell us anything about the >> computation. What is the problem you are having? >> > >> > Matt >> > >> > Do you know what kind of method can be used to obtain a convergence >> solution for our non-convergence case? >> > >> > Thanks, >> > >> > Meng >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> >> . >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 24 17:27:22 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 24 Apr 2014 17:27:22 -0500 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> Message-ID: On Apr 24, 2014, at 5:24 PM, Oo wrote: > > Configure PETSC again? No, the command line when you run the program. Barry > > ------------------ Original ------------------ > From: "Dave May";; > Send time: Friday, Apr 25, 2014 6:20 AM > To: "Oo "; > Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"; > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > On the command line > > > On 25 April 2014 00:11, Oo wrote: > > Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value " ? > > Thanks, > > Meng > > > ------------------ Original ------------------ > From: "Barry Smith";; > Date: Apr 25, 2014 > To: "Matthew Knepley"; > Cc: "Oo "; "petsc-users"; > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > There are also a great deal of ?bogus? numbers that have no meaning and many zeros. Most of these are not the eigenvalues of anything. > > Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output > > Barry > > > On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > > > Hi, > > > > For analysis the convergence of linear solver, > > I meet a problem. > > > > One is the list of Eigenvalues whose linear system which has a convergence solution. > > The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3). > > > > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having? > > > > Matt > > > > Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case? 
> > > > Thanks, > > > > Meng > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > . > From wumeng07maths at qq.com Thu Apr 24 17:46:03 2014 From: wumeng07maths at qq.com (=?gb18030?B?T28gICAgICA=?=) Date: Fri, 25 Apr 2014 06:46:03 +0800 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> Message-ID: Thanks, The following is the output at the beginning. 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00 When solving the linear system: Output: ......... ........ ........ 
+00 max/min 7.343521316293e+01 9636 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064989678e-01 ||r(i)||/||b|| 8.917260288210e-03 9636 KSP Residual norm 1.080641720588e+00 % max 9.802496207537e+01 min 1.168945135768e+00 max/min 8.385762434515e+01 9637 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064987918e-01 ||r(i)||/||b|| 8.917260278361e-03 9637 KSP Residual norm 1.080641720588e+00 % max 1.122401280488e+02 min 1.141681830513e+00 max/min 9.831121512938e+01 9638 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064986859e-01 ||r(i)||/||b|| 8.917260272440e-03 9638 KSP Residual norm 1.080641720588e+00 % max 1.134941067042e+02 min 1.090790142559e+00 max/min 1.040476094127e+02 9639 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064442988e-01 ||r(i)||/||b|| 8.917257230005e-03 9639 KSP Residual norm 1.080641720588e+00 % max 1.139914662925e+02 min 4.119649156568e-01 max/min 2.767018791170e+02 9640 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063809183e-01 ||r(i)||/||b|| 8.917253684473e-03 9640 KSP Residual norm 1.080641720586e+00 % max 1.140011421526e+02 min 2.894486589274e-01 max/min 3.938561766878e+02 9641 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063839202e-01 ||r(i)||/||b|| 8.917253852403e-03 9641 KSP Residual norm 1.080641720586e+00 % max 1.140392299942e+02 min 2.880532973299e-01 max/min 3.958962839563e+02 9642 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594064091676e-01 ||r(i)||/||b|| 8.917255264750e-03 9642 KSP Residual norm 1.080641720586e+00 % max 1.140392728591e+02 min 2.501717295613e-01 max/min 4.558439639005e+02 9643 KSP preconditioned resid norm 1.080641720583e+00 true resid norm 1.594064099334e-01 ||r(i)||/||b|| 8.917255307591e-03 9643 KSP Residual norm 1.080641720583e+00 % max 1.141360429432e+02 min 2.500714638111e-01 max/min 4.564137035220e+02 9644 KSP preconditioned resid norm 1.080641720582e+00 true resid norm 1.594064169337e-01 ||r(i)||/||b|| 8.917255699186e-03 9644 KSP Residual norm 1.080641720582e+00 % max 1.141719168213e+02 min 2.470526471293e-01 max/min 4.621359784969e+02 9645 KSP preconditioned resid norm 1.080641720554e+00 true resid norm 1.594064833602e-01 ||r(i)||/||b|| 8.917259415111e-03 9645 KSP Residual norm 1.080641720554e+00 % max 1.141770017757e+02 min 2.461729098264e-01 max/min 4.638081495493e+02 9646 KSP preconditioned resid norm 1.080641720550e+00 true resid norm 1.594066163854e-01 ||r(i)||/||b|| 8.917266856592e-03 9646 KSP Residual norm 1.080641720550e+00 % max 1.150251695783e+02 min 1.817293289064e-01 max/min 6.329477485583e+02 9647 KSP preconditioned resid norm 1.080641720425e+00 true resid norm 1.594070759575e-01 ||r(i)||/||b|| 8.917292565231e-03 9647 KSP Residual norm 1.080641720425e+00 % max 1.153670774825e+02 min 1.757825842976e-01 max/min 6.563055034347e+02 9648 KSP preconditioned resid norm 1.080641720405e+00 true resid norm 1.594072309986e-01 ||r(i)||/||b|| 8.917301238287e-03 9648 KSP Residual norm 1.080641720405e+00 % max 1.154419449950e+02 min 1.682003217110e-01 max/min 6.863360534671e+02 9649 KSP preconditioned resid norm 1.080641719971e+00 true resid norm 1.594088666650e-01 ||r(i)||/||b|| 8.917392738093e-03 9649 KSP Residual norm 1.080641719971e+00 % max 1.154420890958e+02 min 1.254364806923e-01 max/min 9.203230867027e+02 9650 KSP preconditioned resid norm 1.080641719766e+00 true resid norm 1.594089470619e-01 ||r(i)||/||b|| 8.917397235527e-03 9650 KSP Residual norm 
1.080641719766e+00 % max 1.155791388935e+02 min 1.115280748954e-01 max/min 1.036323266603e+03 9651 KSP preconditioned resid norm 1.080641719668e+00 true resid norm 1.594099325489e-01 ||r(i)||/||b|| 8.917452364041e-03 9651 KSP Residual norm 1.080641719668e+00 % max 1.156952656131e+02 min 9.753165869338e-02 max/min 1.186232933624e+03 9652 KSP preconditioned resid norm 1.080641719560e+00 true resid norm 1.594104650490e-01 ||r(i)||/||b|| 8.917482152303e-03 9652 KSP Residual norm 1.080641719560e+00 % max 1.157175173166e+02 min 8.164906465197e-02 max/min 1.417254659437e+03 9653 KSP preconditioned resid norm 1.080641719545e+00 true resid norm 1.594102433389e-01 ||r(i)||/||b|| 8.917469749751e-03 9653 KSP Residual norm 1.080641719545e+00 % max 1.157284977956e+02 min 8.043379142473e-02 max/min 1.438804459490e+03 9654 KSP preconditioned resid norm 1.080641719502e+00 true resid norm 1.594103748106e-01 ||r(i)||/||b|| 8.917477104328e-03 9654 KSP Residual norm 1.080641719502e+00 % max 1.158252103352e+02 min 8.042977537341e-02 max/min 1.440078749412e+03 9655 KSP preconditioned resid norm 1.080641719500e+00 true resid norm 1.594103839160e-01 ||r(i)||/||b|| 8.917477613692e-03 9655 KSP Residual norm 1.080641719500e+00 % max 1.158319413225e+02 min 7.912584859399e-02 max/min 1.463895090931e+03 9656 KSP preconditioned resid norm 1.080641719298e+00 true resid norm 1.594103559180e-01 ||r(i)||/||b|| 8.917476047469e-03 9656 KSP Residual norm 1.080641719298e+00 % max 1.164752277567e+02 min 7.459488142962e-02 max/min 1.561437266532e+03 9657 KSP preconditioned resid norm 1.080641719265e+00 true resid norm 1.594102184171e-01 ||r(i)||/||b|| 8.917468355617e-03 9657 KSP Residual norm 1.080641719265e+00 % max 1.166579038733e+02 min 7.458594814570e-02 max/min 1.564073485335e+03 9658 KSP preconditioned resid norm 1.080641717458e+00 true resid norm 1.594091333285e-01 ||r(i)||/||b|| 8.917407655346e-03 9658 KSP Residual norm 1.080641717458e+00 % max 1.166903646829e+02 min 6.207842503132e-02 max/min 1.879724954749e+03 9659 KSP preconditioned resid norm 1.080641710951e+00 true resid norm 1.594084758922e-01 ||r(i)||/||b|| 8.917370878110e-03 9659 KSP Residual norm 1.080641710951e+00 % max 1.166911765130e+02 min 4.511759655558e-02 max/min 2.586378384966e+03 9660 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892570e-01 ||r(i)||/||b|| 8.917310091324e-03 9660 KSP Residual norm 1.080641710473e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9661 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892624e-01 ||r(i)||/||b|| 8.917310091626e-03 9661 KSP Residual norm 1.080641710473e+00 % max 3.063497860835e+01 min 3.063497860835e+01 max/min 1.000000000000e+00 9662 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892715e-01 ||r(i)||/||b|| 8.917310092135e-03 9662 KSP Residual norm 1.080641710473e+00 % max 3.066567116490e+01 min 3.845920135843e+00 max/min 7.973559013643e+00 9663 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893803e-01 ||r(i)||/||b|| 8.917310098220e-03 9663 KSP Residual norm 1.080641710473e+00 % max 3.713314039929e+01 min 1.336313376350e+00 max/min 2.778774878443e+01 9664 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893840e-01 ||r(i)||/||b|| 8.917310098430e-03 9664 KSP Residual norm 1.080641710473e+00 % max 4.496286107838e+01 min 1.226793755688e+00 max/min 3.665070911057e+01 9665 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893901e-01 
||r(i)||/||b|| 8.917310098770e-03 9665 KSP Residual norm 1.080641710473e+00 % max 8.684753794468e+01 min 1.183106109633e+00 max/min 7.340638108242e+01 9666 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073897030e-01 ||r(i)||/||b|| 8.917310116272e-03 9666 KSP Residual norm 1.080641710473e+00 % max 9.802657279239e+01 min 1.168685545918e+00 max/min 8.387762913199e+01 9667 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073898787e-01 ||r(i)||/||b|| 8.917310126104e-03 9667 KSP Residual norm 1.080641710473e+00 % max 1.123342619847e+02 min 1.141262650540e+00 max/min 9.842980661068e+01 9668 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073899832e-01 ||r(i)||/||b|| 8.917310131950e-03 9668 KSP Residual norm 1.080641710473e+00 % max 1.134978630027e+02 min 1.090790451862e+00 max/min 1.040510235573e+02 9669 KSP preconditioned resid norm 1.080641710472e+00 true resid norm 1.594074437274e-01 ||r(i)||/||b|| 8.917313138421e-03 9669 KSP Residual norm 1.080641710472e+00 % max 1.139911467053e+02 min 4.122424185123e-01 max/min 2.765148407498e+02 9670 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075064381e-01 ||r(i)||/||b|| 8.917316646478e-03 9670 KSP Residual norm 1.080641710471e+00 % max 1.140007007543e+02 min 2.895766957825e-01 max/min 3.936805081855e+02 9671 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075034738e-01 ||r(i)||/||b|| 8.917316480653e-03 9671 KSP Residual norm 1.080641710471e+00 % max 1.140387089437e+02 min 2.881955202443e-01 max/min 3.956991033277e+02 9672 KSP preconditioned resid norm 1.080641710470e+00 true resid norm 1.594074784660e-01 ||r(i)||/||b|| 8.917315081711e-03 9672 KSP Residual norm 1.080641710470e+00 % max 1.140387584514e+02 min 2.501644562253e-01 max/min 4.558551609294e+02 9673 KSP preconditioned resid norm 1.080641710468e+00 true resid norm 1.594074777329e-01 ||r(i)||/||b|| 8.917315040700e-03 9673 KSP Residual norm 1.080641710468e+00 % max 1.141359894823e+02 min 2.500652210717e-01 max/min 4.564248838488e+02 9674 KSP preconditioned resid norm 1.080641710467e+00 true resid norm 1.594074707419e-01 ||r(i)||/||b|| 8.917314649619e-03 9674 KSP Residual norm 1.080641710467e+00 % max 1.141719424825e+02 min 2.470352404045e-01 max/min 4.621686456374e+02 9675 KSP preconditioned resid norm 1.080641710439e+00 true resid norm 1.594074050132e-01 ||r(i)||/||b|| 8.917310972734e-03 9675 KSP Residual norm 1.080641710439e+00 % max 1.141769957478e+02 min 2.461583334135e-01 max/min 4.638355897383e+02 9676 KSP preconditioned resid norm 1.080641710435e+00 true resid norm 1.594072732317e-01 ||r(i)||/||b|| 8.917303600825e-03 9676 KSP Residual norm 1.080641710435e+00 % max 1.150247840524e+02 min 1.817432478135e-01 max/min 6.328971526384e+02 9677 KSP preconditioned resid norm 1.080641710313e+00 true resid norm 1.594068192028e-01 ||r(i)||/||b|| 8.917278202275e-03 9677 KSP Residual norm 1.080641710313e+00 % max 1.153658698688e+02 min 1.758229849082e-01 max/min 6.561478291873e+02 9678 KSP preconditioned resid norm 1.080641710294e+00 true resid norm 1.594066656020e-01 ||r(i)||/||b|| 8.917269609788e-03 9678 KSP Residual norm 1.080641710294e+00 % max 1.154409214869e+02 min 1.682261493961e-01 max/min 6.862245964808e+02 9679 KSP preconditioned resid norm 1.080641709867e+00 true resid norm 1.594050456081e-01 ||r(i)||/||b|| 8.917178986714e-03 9679 KSP Residual norm 1.080641709867e+00 % max 1.154410567101e+02 min 1.254614849745e-01 max/min 9.201314390116e+02 9680 KSP preconditioned resid norm 
1.080641709664e+00 true resid norm 1.594049650069e-01 ||r(i)||/||b|| 8.917174477851e-03 9680 KSP Residual norm 1.080641709664e+00 % max 1.155780217907e+02 min 1.115227545913e-01 max/min 1.036362688622e+03 9681 KSP preconditioned resid norm 1.080641709568e+00 true resid norm 1.594039822189e-01 ||r(i)||/||b|| 8.917119500317e-03 9681 KSP Residual norm 1.080641709568e+00 % max 1.156949294099e+02 min 9.748012982669e-02 max/min 1.186856538000e+03 9682 KSP preconditioned resid norm 1.080641709462e+00 true resid norm 1.594034585197e-01 ||r(i)||/||b|| 8.917090204385e-03 9682 KSP Residual norm 1.080641709462e+00 % max 1.157178049396e+02 min 8.161660044005e-02 max/min 1.417821917547e+03 9683 KSP preconditioned resid norm 1.080641709447e+00 true resid norm 1.594036816693e-01 ||r(i)||/||b|| 8.917102687458e-03 9683 KSP Residual norm 1.080641709447e+00 % max 1.157298376396e+02 min 8.041659530019e-02 max/min 1.439128791856e+03 9684 KSP preconditioned resid norm 1.080641709407e+00 true resid norm 1.594035571975e-01 ||r(i)||/||b|| 8.917095724454e-03 9684 KSP Residual norm 1.080641709407e+00 % max 1.158244705264e+02 min 8.041209608100e-02 max/min 1.440386162919e+03 9685 KSP preconditioned resid norm 1.080641709405e+00 true resid norm 1.594035489927e-01 ||r(i)||/||b|| 8.917095265477e-03 9685 KSP Residual norm 1.080641709405e+00 % max 1.158301451854e+02 min 7.912026880308e-02 max/min 1.463975627707e+03 9686 KSP preconditioned resid norm 1.080641709207e+00 true resid norm 1.594035774591e-01 ||r(i)||/||b|| 8.917096857899e-03 9686 KSP Residual norm 1.080641709207e+00 % max 1.164678839370e+02 min 7.460755571674e-02 max/min 1.561073577845e+03 9687 KSP preconditioned resid norm 1.080641709174e+00 true resid norm 1.594037147171e-01 ||r(i)||/||b|| 8.917104536162e-03 9687 KSP Residual norm 1.080641709174e+00 % max 1.166488052404e+02 min 7.459794478802e-02 max/min 1.563699986264e+03 9688 KSP preconditioned resid norm 1.080641707398e+00 true resid norm 1.594047860513e-01 ||r(i)||/||b|| 8.917164467006e-03 9688 KSP Residual norm 1.080641707398e+00 % max 1.166807852257e+02 min 6.210503072987e-02 max/min 1.878765437428e+03 9689 KSP preconditioned resid norm 1.080641701010e+00 true resid norm 1.594054414016e-01 ||r(i)||/||b|| 8.917201127549e-03 9689 KSP Residual norm 1.080641701010e+00 % max 1.166815140099e+02 min 4.513527468187e-02 max/min 2.585151299782e+03 9690 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078534e-01 ||r(i)||/||b|| 8.917260785273e-03 9690 KSP Residual norm 1.080641700548e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9691 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078481e-01 ||r(i)||/||b|| 8.917260784977e-03 9691 KSP Residual norm 1.080641700548e+00 % max 3.063462923862e+01 min 3.063462923862e+01 max/min 1.000000000000e+00 9692 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078391e-01 ||r(i)||/||b|| 8.917260784472e-03 9692 KSP Residual norm 1.080641700548e+00 % max 3.066527639129e+01 min 3.844401168431e+00 max/min 7.976606771193e+00 9693 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077314e-01 ||r(i)||/||b|| 8.917260778448e-03 9693 KSP Residual norm 1.080641700548e+00 % max 3.714560719346e+01 min 1.336559823443e+00 max/min 2.779195255007e+01 9694 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077277e-01 ||r(i)||/||b|| 8.917260778237e-03 9694 KSP Residual norm 1.080641700548e+00 % max 4.500980776068e+01 min 1.226798344943e+00 max/min 
3.668883965014e+01 9695 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077215e-01 ||r(i)||/||b|| 8.917260777894e-03 9695 KSP Residual norm 1.080641700548e+00 % max 8.688252795875e+01 min 1.183121330492e+00 max/min 7.343501103356e+01 9696 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065074170e-01 ||r(i)||/||b|| 8.917260760857e-03 9696 KSP Residual norm 1.080641700548e+00 % max 9.802496799855e+01 min 1.168942571066e+00 max/min 8.385781339895e+01 9697 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065072444e-01 ||r(i)||/||b|| 8.917260751202e-03 9697 KSP Residual norm 1.080641700548e+00 % max 1.122410641497e+02 min 1.141677885656e+00 max/min 9.831237475989e+01 9698 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065071406e-01 ||r(i)||/||b|| 8.917260745398e-03 9698 KSP Residual norm 1.080641700548e+00 % max 1.134941426413e+02 min 1.090790153651e+00 max/min 1.040476413006e+02 9699 KSP preconditioned resid norm 1.080641700547e+00 true resid norm 1.594064538413e-01 ||r(i)||/||b|| 8.917257763815e-03 9699 KSP Residual norm 1.080641700547e+00 % max 1.139914632588e+02 min 4.119681054781e-01 max/min 2.766997292824e+02 9700 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063917269e-01 ||r(i)||/||b|| 8.917254289111e-03 9700 KSP Residual norm 1.080641700546e+00 % max 1.140011377578e+02 min 2.894500209686e-01 max/min 3.938543081679e+02 9701 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063946692e-01 ||r(i)||/||b|| 8.917254453705e-03 9701 KSP Residual norm 1.080641700546e+00 % max 1.140392249545e+02 min 2.880548133071e-01 max/min 3.958941829341e+02 9702 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594064194159e-01 ||r(i)||/||b|| 8.917255838042e-03 9702 KSP Residual norm 1.080641700546e+00 % max 1.140392678944e+02 min 2.501716607446e-01 max/min 4.558440694479e+02 9703 KSP preconditioned resid norm 1.080641700543e+00 true resid norm 1.594064201661e-01 ||r(i)||/||b|| 8.917255880013e-03 9703 KSP Residual norm 1.080641700543e+00 % max 1.141360426034e+02 min 2.500714053220e-01 max/min 4.564138089137e+02 9704 KSP preconditioned resid norm 1.080641700542e+00 true resid norm 1.594064270279e-01 ||r(i)||/||b|| 8.917256263859e-03 9704 KSP Residual norm 1.080641700542e+00 % max 1.141719170495e+02 min 2.470524869868e-01 max/min 4.621362789824e+02 9705 KSP preconditioned resid norm 1.080641700515e+00 true resid norm 1.594064921269e-01 ||r(i)||/||b|| 8.917259905523e-03 9705 KSP Residual norm 1.080641700515e+00 % max 1.141770017414e+02 min 2.461727640845e-01 max/min 4.638084239987e+02 9706 KSP preconditioned resid norm 1.080641700511e+00 true resid norm 1.594066225181e-01 ||r(i)||/||b|| 8.917267199657e-03 9706 KSP Residual norm 1.080641700511e+00 % max 1.150251667711e+02 min 1.817294170466e-01 max/min 6.329474261263e+02 9707 KSP preconditioned resid norm 1.080641700391e+00 true resid norm 1.594070728983e-01 ||r(i)||/||b|| 8.917292394097e-03 9707 KSP Residual norm 1.080641700391e+00 % max 1.153670687494e+02 min 1.757829178698e-01 max/min 6.563042083238e+02 9708 KSP preconditioned resid norm 1.080641700372e+00 true resid norm 1.594072248425e-01 ||r(i)||/||b|| 8.917300893916e-03 9708 KSP Residual norm 1.080641700372e+00 % max 1.154419380718e+02 min 1.682005223292e-01 max/min 6.863351936912e+02 9709 KSP preconditioned resid norm 1.080641699955e+00 true resid norm 1.594088278507e-01 ||r(i)||/||b|| 8.917390566803e-03 9709 KSP Residual norm 1.080641699955e+00 % max 
1.154420821193e+02 min 1.254367014624e-01 max/min 9.203214113045e+02 9710 KSP preconditioned resid norm 1.080641699758e+00 true resid norm 1.594089066545e-01 ||r(i)||/||b|| 8.917394975116e-03 9710 KSP Residual norm 1.080641699758e+00 % max 1.155791306030e+02 min 1.115278858784e-01 max/min 1.036324948623e+03 9711 KSP preconditioned resid norm 1.080641699664e+00 true resid norm 1.594098725388e-01 ||r(i)||/||b|| 8.917449007053e-03 9711 KSP Residual norm 1.080641699664e+00 % max 1.156952614563e+02 min 9.753133694719e-02 max/min 1.186236804269e+03 9712 KSP preconditioned resid norm 1.080641699560e+00 true resid norm 1.594103943811e-01 ||r(i)||/||b|| 8.917478199111e-03 9712 KSP Residual norm 1.080641699560e+00 % max 1.157175157006e+02 min 8.164872572382e-02 max/min 1.417260522743e+03 9713 KSP preconditioned resid norm 1.080641699546e+00 true resid norm 1.594101771105e-01 ||r(i)||/||b|| 8.917466044914e-03 9713 KSP Residual norm 1.080641699546e+00 % max 1.157285025175e+02 min 8.043358651149e-02 max/min 1.438808183705e+03 9714 KSP preconditioned resid norm 1.080641699505e+00 true resid norm 1.594103059104e-01 ||r(i)||/||b|| 8.917473250025e-03 9714 KSP Residual norm 1.080641699505e+00 % max 1.158251990486e+02 min 8.042956633721e-02 max/min 1.440082351843e+03 9715 KSP preconditioned resid norm 1.080641699503e+00 true resid norm 1.594103148215e-01 ||r(i)||/||b|| 8.917473748516e-03 9715 KSP Residual norm 1.080641699503e+00 % max 1.158319218998e+02 min 7.912575668150e-02 max/min 1.463896545925e+03 9716 KSP preconditioned resid norm 1.080641699309e+00 true resid norm 1.594102873744e-01 ||r(i)||/||b|| 8.917472213115e-03 9716 KSP Residual norm 1.080641699309e+00 % max 1.164751648212e+02 min 7.459498920670e-02 max/min 1.561434166824e+03 9717 KSP preconditioned resid norm 1.080641699277e+00 true resid norm 1.594101525703e-01 ||r(i)||/||b|| 8.917464672122e-03 9717 KSP Residual norm 1.080641699277e+00 % max 1.166578264103e+02 min 7.458604738274e-02 max/min 1.564070365758e+03 9718 KSP preconditioned resid norm 1.080641697541e+00 true resid norm 1.594090891807e-01 ||r(i)||/||b|| 8.917405185702e-03 9718 KSP Residual norm 1.080641697541e+00 % max 1.166902828397e+02 min 6.207863457218e-02 max/min 1.879717291527e+03 9719 KSP preconditioned resid norm 1.080641691292e+00 true resid norm 1.594084448306e-01 ||r(i)||/||b|| 8.917369140517e-03 9719 KSP Residual norm 1.080641691292e+00 % max 1.166910940162e+02 min 4.511779816158e-02 max/min 2.586364999424e+03 9720 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798840e-01 ||r(i)||/||b|| 8.917309566993e-03 9720 KSP Residual norm 1.080641690833e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9721 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798892e-01 ||r(i)||/||b|| 8.917309567288e-03 9721 KSP Residual norm 1.080641690833e+00 % max 3.063497720423e+01 min 3.063497720423e+01 max/min 1.000000000000e+00 9722 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798982e-01 ||r(i)||/||b|| 8.917309567787e-03 9722 KSP Residual norm 1.080641690833e+00 % max 3.066566915002e+01 min 3.845904218039e+00 max/min 7.973591491484e+00 9723 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800048e-01 ||r(i)||/||b|| 8.917309573750e-03 9723 KSP Residual norm 1.080641690833e+00 % max 3.713330064708e+01 min 1.336316867286e+00 max/min 2.778779611043e+01 9724 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800084e-01 ||r(i)||/||b|| 8.917309573955e-03 
9724 KSP Residual norm 1.080641690833e+00 % max 4.496345800250e+01 min 1.226793989785e+00 max/min 3.665118868930e+01 9725 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800144e-01 ||r(i)||/||b|| 8.917309574290e-03 9725 KSP Residual norm 1.080641690833e+00 % max 8.684799302076e+01 min 1.183106260116e+00 max/min 7.340675639080e+01 9726 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073803210e-01 ||r(i)||/||b|| 8.917309591441e-03 9726 KSP Residual norm 1.080641690833e+00 % max 9.802654643741e+01 min 1.168688159857e+00 max/min 8.387741897664e+01 9727 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073804933e-01 ||r(i)||/||b|| 8.917309601077e-03 9727 KSP Residual norm 1.080641690833e+00 % max 1.123333202151e+02 min 1.141267080659e+00 max/min 9.842859933384e+01 9728 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073805957e-01 ||r(i)||/||b|| 8.917309606809e-03 9728 KSP Residual norm 1.080641690833e+00 % max 1.134978238790e+02 min 1.090790456842e+00 max/min 1.040509872149e+02 9729 KSP preconditioned resid norm 1.080641690832e+00 true resid norm 1.594074332665e-01 ||r(i)||/||b|| 8.917312553232e-03 9729 KSP Residual norm 1.080641690832e+00 % max 1.139911500486e+02 min 4.122400558994e-01 max/min 2.765164336102e+02 9730 KSP preconditioned resid norm 1.080641690831e+00 true resid norm 1.594074947247e-01 ||r(i)||/||b|| 8.917315991226e-03 9730 KSP Residual norm 1.080641690831e+00 % max 1.140007051733e+02 min 2.895754967285e-01 max/min 3.936821535706e+02 9731 KSP preconditioned resid norm 1.080641690831e+00 true resid norm 1.594074918191e-01 ||r(i)||/||b|| 8.917315828690e-03 9731 KSP Residual norm 1.080641690831e+00 % max 1.140387143094e+02 min 2.881941911453e-01 max/min 3.957009468380e+02 9732 KSP preconditioned resid norm 1.080641690830e+00 true resid norm 1.594074673079e-01 ||r(i)||/||b|| 8.917314457521e-03 9732 KSP Residual norm 1.080641690830e+00 % max 1.140387637599e+02 min 2.501645326450e-01 max/min 4.558550428958e+02 9733 KSP preconditioned resid norm 1.080641690828e+00 true resid norm 1.594074665892e-01 ||r(i)||/||b|| 8.917314417314e-03 9733 KSP Residual norm 1.080641690828e+00 % max 1.141359902062e+02 min 2.500652873669e-01 max/min 4.564247657404e+02 9734 KSP preconditioned resid norm 1.080641690827e+00 true resid norm 1.594074597377e-01 ||r(i)||/||b|| 8.917314034043e-03 9734 KSP Residual norm 1.080641690827e+00 % max 1.141719421992e+02 min 2.470354276866e-01 max/min 4.621682941123e+02 9735 KSP preconditioned resid norm 1.080641690800e+00 true resid norm 1.594073953224e-01 ||r(i)||/||b|| 8.917310430625e-03 9735 KSP Residual norm 1.080641690800e+00 % max 1.141769958343e+02 min 2.461584786890e-01 max/min 4.638353163473e+02 9736 KSP preconditioned resid norm 1.080641690797e+00 true resid norm 1.594072661531e-01 ||r(i)||/||b|| 8.917303204847e-03 9736 KSP Residual norm 1.080641690797e+00 % max 1.150247889475e+02 min 1.817430598855e-01 max/min 6.328978340080e+02 9737 KSP preconditioned resid norm 1.080641690679e+00 true resid norm 1.594068211899e-01 ||r(i)||/||b|| 8.917278313432e-03 9737 KSP Residual norm 1.080641690679e+00 % max 1.153658852526e+02 min 1.758225138542e-01 max/min 6.561496745986e+02 9738 KSP preconditioned resid norm 1.080641690661e+00 true resid norm 1.594066706603e-01 ||r(i)||/||b|| 8.917269892752e-03 9738 KSP Residual norm 1.080641690661e+00 % max 1.154409350014e+02 min 1.682258367468e-01 max/min 6.862259521714e+02 9739 KSP preconditioned resid norm 1.080641690251e+00 true resid norm 
1.594050830368e-01 ||r(i)||/||b|| 8.917181080485e-03 9739 KSP Residual norm 1.080641690251e+00 % max 1.154410703479e+02 min 1.254612102469e-01 max/min 9.201335625627e+02 9740 KSP preconditioned resid norm 1.080641690057e+00 true resid norm 1.594050040532e-01 ||r(i)||/||b|| 8.917176662114e-03 9740 KSP Residual norm 1.080641690057e+00 % max 1.155780358111e+02 min 1.115226752201e-01 max/min 1.036363551923e+03 9741 KSP preconditioned resid norm 1.080641689964e+00 true resid norm 1.594040409622e-01 ||r(i)||/||b|| 8.917122786435e-03 9741 KSP Residual norm 1.080641689964e+00 % max 1.156949319460e+02 min 9.748084084270e-02 max/min 1.186847907198e+03 9742 KSP preconditioned resid norm 1.080641689862e+00 true resid norm 1.594035276798e-01 ||r(i)||/||b|| 8.917094073223e-03 9742 KSP Residual norm 1.080641689862e+00 % max 1.157177974822e+02 min 8.161690928512e-02 max/min 1.417816461022e+03 9743 KSP preconditioned resid norm 1.080641689848e+00 true resid norm 1.594037462881e-01 ||r(i)||/||b|| 8.917106302257e-03 9743 KSP Residual norm 1.080641689848e+00 % max 1.157298152734e+02 min 8.041673363017e-02 max/min 1.439126038190e+03 9744 KSP preconditioned resid norm 1.080641689809e+00 true resid norm 1.594036242374e-01 ||r(i)||/||b|| 8.917099474695e-03 9744 KSP Residual norm 1.080641689809e+00 % max 1.158244737331e+02 min 8.041224000025e-02 max/min 1.440383624841e+03 9745 KSP preconditioned resid norm 1.080641689808e+00 true resid norm 1.594036161930e-01 ||r(i)||/||b|| 8.917099024685e-03 9745 KSP Residual norm 1.080641689808e+00 % max 1.158301611517e+02 min 7.912028815054e-02 max/min 1.463975471516e+03 9746 KSP preconditioned resid norm 1.080641689617e+00 true resid norm 1.594036440841e-01 ||r(i)||/||b|| 8.917100584928e-03 9746 KSP Residual norm 1.080641689617e+00 % max 1.164679674294e+02 min 7.460741052548e-02 max/min 1.561077734894e+03 9747 KSP preconditioned resid norm 1.080641689585e+00 true resid norm 1.594037786262e-01 ||r(i)||/||b|| 8.917108111263e-03 9747 KSP Residual norm 1.080641689585e+00 % max 1.166489093034e+02 min 7.459780450955e-02 max/min 1.563704321734e+03 9748 KSP preconditioned resid norm 1.080641687880e+00 true resid norm 1.594048285858e-01 ||r(i)||/||b|| 8.917166846404e-03 9748 KSP Residual norm 1.080641687880e+00 % max 1.166808944816e+02 min 6.210471238399e-02 max/min 1.878776827114e+03 9749 KSP preconditioned resid norm 1.080641681745e+00 true resid norm 1.594054707973e-01 ||r(i)||/||b|| 8.917202771956e-03 9749 KSP Residual norm 1.080641681745e+00 % max 1.166816242497e+02 min 4.513512286802e-02 max/min 2.585162437485e+03 9750 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161384e-01 ||r(i)||/||b|| 8.917261248737e-03 9750 KSP Residual norm 1.080641681302e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9751 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161332e-01 ||r(i)||/||b|| 8.917261248446e-03 9751 KSP Residual norm 1.080641681302e+00 % max 3.063463477969e+01 min 3.063463477969e+01 max/min 1.000000000000e+00 9752 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161243e-01 ||r(i)||/||b|| 8.917261247951e-03 9752 KSP Residual norm 1.080641681302e+00 % max 3.066528223339e+01 min 3.844415595957e+00 max/min 7.976578355796e+00 9753 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160188e-01 ||r(i)||/||b|| 8.917261242049e-03 9753 KSP Residual norm 1.080641681302e+00 % max 3.714551741771e+01 min 1.336558367241e+00 max/min 2.779191566050e+01 9754 KSP preconditioned 
resid norm 1.080641681302e+00 true resid norm 1.594065160151e-01 ||r(i)||/||b|| 8.917261241840e-03 9754 KSP Residual norm 1.080641681302e+00 % max 4.500946346995e+01 min 1.226798482639e+00 max/min 3.668855489054e+01 9755 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160091e-01 ||r(i)||/||b|| 8.917261241506e-03 9755 KSP Residual norm 1.080641681302e+00 % max 8.688228012809e+01 min 1.183121177118e+00 max/min 7.343481108144e+01 9756 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065157105e-01 ||r(i)||/||b|| 8.917261224803e-03 9756 KSP Residual norm 1.080641681302e+00 % max 9.802497401249e+01 min 1.168940055038e+00 max/min 8.385799903943e+01 9757 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065155414e-01 ||r(i)||/||b|| 8.917261215340e-03 9757 KSP Residual norm 1.080641681302e+00 % max 1.122419823631e+02 min 1.141674012007e+00 max/min 9.831351259875e+01 9758 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065154397e-01 ||r(i)||/||b|| 8.917261209650e-03 9758 KSP Residual norm 1.080641681302e+00 % max 1.134941779165e+02 min 1.090790164362e+00 max/min 1.040476726180e+02 9759 KSP preconditioned resid norm 1.080641681301e+00 true resid norm 1.594064632071e-01 ||r(i)||/||b|| 8.917258287741e-03 9759 KSP Residual norm 1.080641681301e+00 % max 1.139914602803e+02 min 4.119712251563e-01 max/min 2.766976267263e+02 9760 KSP preconditioned resid norm 1.080641681300e+00 true resid norm 1.594064023343e-01 ||r(i)||/||b|| 8.917254882493e-03 9760 KSP Residual norm 1.080641681300e+00 % max 1.140011334474e+02 min 2.894513550128e-01 max/min 3.938524780524e+02 9761 KSP preconditioned resid norm 1.080641681299e+00 true resid norm 1.594064052181e-01 ||r(i)||/||b|| 8.917255043816e-03 9761 KSP Residual norm 1.080641681299e+00 % max 1.140392200085e+02 min 2.880562980628e-01 max/min 3.958921251693e+02 9762 KSP preconditioned resid norm 1.080641681299e+00 true resid norm 1.594064294735e-01 ||r(i)||/||b|| 8.917256400669e-03 9762 KSP Residual norm 1.080641681299e+00 % max 1.140392630219e+02 min 2.501715931763e-01 max/min 4.558441730892e+02 9763 KSP preconditioned resid norm 1.080641681297e+00 true resid norm 1.594064302085e-01 ||r(i)||/||b|| 8.917256441789e-03 9763 KSP Residual norm 1.080641681297e+00 % max 1.141360422663e+02 min 2.500713478839e-01 max/min 4.564139123977e+02 9764 KSP preconditioned resid norm 1.080641681296e+00 true resid norm 1.594064369343e-01 ||r(i)||/||b|| 8.917256818032e-03 9764 KSP Residual norm 1.080641681296e+00 % max 1.141719172739e+02 min 2.470523296471e-01 max/min 4.621365742109e+02 9765 KSP preconditioned resid norm 1.080641681269e+00 true resid norm 1.594065007316e-01 ||r(i)||/||b|| 8.917260386877e-03 9765 KSP Residual norm 1.080641681269e+00 % max 1.141770017074e+02 min 2.461726211496e-01 max/min 4.638086931610e+02 9766 KSP preconditioned resid norm 1.080641681266e+00 true resid norm 1.594066285385e-01 ||r(i)||/||b|| 8.917267536440e-03 9766 KSP Residual norm 1.080641681266e+00 % max 1.150251639969e+02 min 1.817295045513e-01 max/min 6.329471060897e+02 9767 KSP preconditioned resid norm 1.080641681150e+00 true resid norm 1.594070699054e-01 ||r(i)||/||b|| 8.917292226672e-03 9767 KSP Residual norm 1.080641681150e+00 % max 1.153670601170e+02 min 1.757832464778e-01 max/min 6.563029323251e+02 9768 KSP preconditioned resid norm 1.080641681133e+00 true resid norm 1.594072188128e-01 ||r(i)||/||b|| 8.917300556609e-03 9768 KSP Residual norm 1.080641681133e+00 % max 1.154419312149e+02 min 1.682007202940e-01 
max/min 6.863343451393e+02 9769 KSP preconditioned resid norm 1.080641680732e+00 true resid norm 1.594087897943e-01 ||r(i)||/||b|| 8.917388437917e-03 9769 KSP Residual norm 1.080641680732e+00 % max 1.154420752094e+02 min 1.254369186538e-01 max/min 9.203197627011e+02 9770 KSP preconditioned resid norm 1.080641680543e+00 true resid norm 1.594088670353e-01 ||r(i)||/||b|| 8.917392758804e-03 9770 KSP Residual norm 1.080641680543e+00 % max 1.155791224140e+02 min 1.115277033208e-01 max/min 1.036326571538e+03 9771 KSP preconditioned resid norm 1.080641680452e+00 true resid norm 1.594098136932e-01 ||r(i)||/||b|| 8.917445715209e-03 9771 KSP Residual norm 1.080641680452e+00 % max 1.156952573953e+02 min 9.753101753312e-02 max/min 1.186240647556e+03 9772 KSP preconditioned resid norm 1.080641680352e+00 true resid norm 1.594103250846e-01 ||r(i)||/||b|| 8.917474322641e-03 9772 KSP Residual norm 1.080641680352e+00 % max 1.157175142052e+02 min 8.164839358709e-02 max/min 1.417266269688e+03 9773 KSP preconditioned resid norm 1.080641680339e+00 true resid norm 1.594101121664e-01 ||r(i)||/||b|| 8.917462411912e-03 9773 KSP Residual norm 1.080641680339e+00 % max 1.157285073186e+02 min 8.043338619163e-02 max/min 1.438811826757e+03 9774 KSP preconditioned resid norm 1.080641680299e+00 true resid norm 1.594102383476e-01 ||r(i)||/||b|| 8.917469470538e-03 9774 KSP Residual norm 1.080641680299e+00 % max 1.158251880541e+02 min 8.042936196082e-02 max/min 1.440085874492e+03 9775 KSP preconditioned resid norm 1.080641680298e+00 true resid norm 1.594102470687e-01 ||r(i)||/||b|| 8.917469958400e-03 9775 KSP Residual norm 1.080641680298e+00 % max 1.158319028736e+02 min 7.912566724984e-02 max/min 1.463897960037e+03 9776 KSP preconditioned resid norm 1.080641680112e+00 true resid norm 1.594102201623e-01 ||r(i)||/||b|| 8.917468453243e-03 9776 KSP Residual norm 1.080641680112e+00 % max 1.164751028861e+02 min 7.459509526345e-02 max/min 1.561431116546e+03 9777 KSP preconditioned resid norm 1.080641680081e+00 true resid norm 1.594100880055e-01 ||r(i)||/||b|| 8.917461060344e-03 9777 KSP Residual norm 1.080641680081e+00 % max 1.166577501679e+02 min 7.458614509876e-02 max/min 1.564067294448e+03 9778 KSP preconditioned resid norm 1.080641678414e+00 true resid norm 1.594090458938e-01 ||r(i)||/||b|| 8.917402764217e-03 9778 KSP Residual norm 1.080641678414e+00 % max 1.166902022924e+02 min 6.207884124039e-02 max/min 1.879709736213e+03 9779 KSP preconditioned resid norm 1.080641672412e+00 true resid norm 1.594084143785e-01 ||r(i)||/||b|| 8.917367437011e-03 9779 KSP Residual norm 1.080641672412e+00 % max 1.166910128239e+02 min 4.511799534103e-02 max/min 2.586351896663e+03 9780 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707031e-01 ||r(i)||/||b|| 8.917309053412e-03 9780 KSP Residual norm 1.080641671971e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9781 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707083e-01 ||r(i)||/||b|| 8.917309053702e-03 9781 KSP Residual norm 1.080641671971e+00 % max 3.063497578379e+01 min 3.063497578379e+01 max/min 1.000000000000e+00 9782 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707170e-01 ||r(i)||/||b|| 8.917309054190e-03 9782 KSP Residual norm 1.080641671971e+00 % max 3.066566713389e+01 min 3.845888622043e+00 max/min 7.973623302070e+00 9783 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708215e-01 ||r(i)||/||b|| 8.917309060034e-03 9783 KSP Residual norm 1.080641671971e+00 % 
max 3.713345706068e+01 min 1.336320269525e+00 max/min 2.778784241137e+01 9784 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708251e-01 ||r(i)||/||b|| 8.917309060235e-03 9784 KSP Residual norm 1.080641671971e+00 % max 4.496404075363e+01 min 1.226794215442e+00 max/min 3.665165696713e+01 9785 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708309e-01 ||r(i)||/||b|| 8.917309060564e-03 9785 KSP Residual norm 1.080641671971e+00 % max 8.684843710054e+01 min 1.183106407741e+00 max/min 7.340712258198e+01 9786 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073711314e-01 ||r(i)||/||b|| 8.917309077372e-03 9786 KSP Residual norm 1.080641671971e+00 % max 9.802652080694e+01 min 1.168690722569e+00 max/min 8.387721311884e+01 9787 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073713003e-01 ||r(i)||/||b|| 8.917309086817e-03 9787 KSP Residual norm 1.080641671971e+00 % max 1.123323967807e+02 min 1.141271419633e+00 max/min 9.842741599265e+01 9788 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073714007e-01 ||r(i)||/||b|| 8.917309092435e-03 9788 KSP Residual norm 1.080641671971e+00 % max 1.134977855491e+02 min 1.090790461557e+00 max/min 1.040509516255e+02 9789 KSP preconditioned resid norm 1.080641671970e+00 true resid norm 1.594074230188e-01 ||r(i)||/||b|| 8.917311979972e-03 9789 KSP Residual norm 1.080641671970e+00 % max 1.139911533240e+02 min 4.122377311421e-01 max/min 2.765180009315e+02 9790 KSP preconditioned resid norm 1.080641671969e+00 true resid norm 1.594074832488e-01 ||r(i)||/||b|| 8.917315349259e-03 9790 KSP Residual norm 1.080641671969e+00 % max 1.140007095063e+02 min 2.895743194700e-01 max/min 3.936837690403e+02 9791 KSP preconditioned resid norm 1.080641671969e+00 true resid norm 1.594074804009e-01 ||r(i)||/||b|| 8.917315189946e-03 9791 KSP Residual norm 1.080641671969e+00 % max 1.140387195674e+02 min 2.881928861513e-01 max/min 3.957027568944e+02 9792 KSP preconditioned resid norm 1.080641671968e+00 true resid norm 1.594074563767e-01 ||r(i)||/||b|| 8.917313846024e-03 9792 KSP Residual norm 1.080641671968e+00 % max 1.140387689617e+02 min 2.501646075030e-01 max/min 4.558549272813e+02 9793 KSP preconditioned resid norm 1.080641671966e+00 true resid norm 1.594074556720e-01 ||r(i)||/||b|| 8.917313806607e-03 9793 KSP Residual norm 1.080641671966e+00 % max 1.141359909124e+02 min 2.500653522913e-01 max/min 4.564246500628e+02 9794 KSP preconditioned resid norm 1.080641671965e+00 true resid norm 1.594074489575e-01 ||r(i)||/||b|| 8.917313430994e-03 9794 KSP Residual norm 1.080641671965e+00 % max 1.141719419220e+02 min 2.470356110582e-01 max/min 4.621679499282e+02 9795 KSP preconditioned resid norm 1.080641671939e+00 true resid norm 1.594073858301e-01 ||r(i)||/||b|| 8.917309899620e-03 9795 KSP Residual norm 1.080641671939e+00 % max 1.141769959186e+02 min 2.461586211459e-01 max/min 4.638350482589e+02 9796 KSP preconditioned resid norm 1.080641671936e+00 true resid norm 1.594072592237e-01 ||r(i)||/||b|| 8.917302817214e-03 9796 KSP Residual norm 1.080641671936e+00 % max 1.150247937261e+02 min 1.817428765765e-01 max/min 6.328984986528e+02 9797 KSP preconditioned resid norm 1.080641671823e+00 true resid norm 1.594068231505e-01 ||r(i)||/||b|| 8.917278423111e-03 9797 KSP Residual norm 1.080641671823e+00 % max 1.153659002697e+02 min 1.758220533048e-01 max/min 6.561514787326e+02 9798 KSP preconditioned resid norm 1.080641671805e+00 true resid norm 1.594066756326e-01 ||r(i)||/||b|| 
8.917270170903e-03 9798 KSP Residual norm 1.080641671805e+00 % max 1.154409481864e+02 min 1.682255312531e-01 max/min 6.862272767188e+02 9799 KSP preconditioned resid norm 1.080641671412e+00 true resid norm 1.594051197519e-01 ||r(i)||/||b|| 8.917183134347e-03 9799 KSP Residual norm 1.080641671412e+00 % max 1.154410836529e+02 min 1.254609413083e-01 max/min 9.201356410140e+02 9800 KSP preconditioned resid norm 1.080641671225e+00 true resid norm 1.594050423544e-01 ||r(i)||/||b|| 8.917178804700e-03 9800 KSP Residual norm 1.080641671225e+00 % max 1.155780495006e+02 min 1.115226000164e-01 max/min 1.036364373532e+03 9801 KSP preconditioned resid norm 1.080641671136e+00 true resid norm 1.594040985764e-01 ||r(i)||/||b|| 8.917126009400e-03 9801 KSP Residual norm 1.080641671136e+00 % max 1.156949344498e+02 min 9.748153395641e-02 max/min 1.186839494150e+03 9802 KSP preconditioned resid norm 1.080641671038e+00 true resid norm 1.594035955111e-01 ||r(i)||/||b|| 8.917097867731e-03 9802 KSP Residual norm 1.080641671038e+00 % max 1.157177902650e+02 min 8.161721244731e-02 max/min 1.417811106202e+03 9803 KSP preconditioned resid norm 1.080641671024e+00 true resid norm 1.594038096704e-01 ||r(i)||/||b|| 8.917109847884e-03 9803 KSP Residual norm 1.080641671024e+00 % max 1.157297935320e+02 min 8.041686995864e-02 max/min 1.439123328121e+03 9804 KSP preconditioned resid norm 1.080641670988e+00 true resid norm 1.594036899968e-01 ||r(i)||/||b|| 8.917103153299e-03 9804 KSP Residual norm 1.080641670988e+00 % max 1.158244769690e+02 min 8.041238179269e-02 max/min 1.440381125230e+03 9805 KSP preconditioned resid norm 1.080641670986e+00 true resid norm 1.594036821095e-01 ||r(i)||/||b|| 8.917102712083e-03 9805 KSP Residual norm 1.080641670986e+00 % max 1.158301768585e+02 min 7.912030787936e-02 max/min 1.463975304989e+03 9806 KSP preconditioned resid norm 1.080641670803e+00 true resid norm 1.594037094369e-01 ||r(i)||/||b|| 8.917104240783e-03 9806 KSP Residual norm 1.080641670803e+00 % max 1.164680491000e+02 min 7.460726855236e-02 max/min 1.561081800203e+03 9807 KSP preconditioned resid norm 1.080641670773e+00 true resid norm 1.594038413140e-01 ||r(i)||/||b|| 8.917111618039e-03 9807 KSP Residual norm 1.080641670773e+00 % max 1.166490110820e+02 min 7.459766739249e-02 max/min 1.563708560326e+03 9808 KSP preconditioned resid norm 1.080641669134e+00 true resid norm 1.594048703119e-01 ||r(i)||/||b|| 8.917169180576e-03 9808 KSP Residual norm 1.080641669134e+00 % max 1.166810013448e+02 min 6.210440124755e-02 max/min 1.878787960288e+03 9809 KSP preconditioned resid norm 1.080641663243e+00 true resid norm 1.594054996410e-01 ||r(i)||/||b|| 8.917204385483e-03 9809 KSP Residual norm 1.080641663243e+00 % max 1.166817320749e+02 min 4.513497352470e-02 max/min 2.585173380262e+03 9810 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065242705e-01 ||r(i)||/||b|| 8.917261703653e-03 9810 KSP Residual norm 1.080641662817e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9811 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065242654e-01 ||r(i)||/||b|| 8.917261703367e-03 9811 KSP Residual norm 1.080641662817e+00 % max 3.063464017134e+01 min 3.063464017134e+01 max/min 1.000000000000e+00 9812 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065242568e-01 ||r(i)||/||b|| 8.917261702882e-03 9812 KSP Residual norm 1.080641662817e+00 % max 3.066528792324e+01 min 3.844429757599e+00 max/min 7.976550452671e+00 9813 KSP preconditioned resid norm 1.080641662817e+00 true 
resid norm 1.594065241534e-01 ||r(i)||/||b|| 8.917261697099e-03 9813 KSP Residual norm 1.080641662817e+00 % max 3.714542869717e+01 min 1.336556919260e+00 max/min 2.779187938941e+01 9814 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065241497e-01 ||r(i)||/||b|| 8.917261696894e-03 9814 KSP Residual norm 1.080641662817e+00 % max 4.500912339640e+01 min 1.226798613987e+00 max/min 3.668827375841e+01 9815 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065241439e-01 ||r(i)||/||b|| 8.917261696565e-03 9815 KSP Residual norm 1.080641662817e+00 % max 8.688203511576e+01 min 1.183121026759e+00 max/min 7.343461332421e+01 9816 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065238512e-01 ||r(i)||/||b|| 8.917261680193e-03 9816 KSP Residual norm 1.080641662817e+00 % max 9.802498010891e+01 min 1.168937586878e+00 max/min 8.385818131719e+01 9817 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065236853e-01 ||r(i)||/||b|| 8.917261670916e-03 9817 KSP Residual norm 1.080641662817e+00 % max 1.122428829897e+02 min 1.141670208530e+00 max/min 9.831462899798e+01 9818 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065235856e-01 ||r(i)||/||b|| 8.917261665338e-03 9818 KSP Residual norm 1.080641662817e+00 % max 1.134942125400e+02 min 1.090790174709e+00 max/min 1.040477033727e+02 9819 KSP preconditioned resid norm 1.080641662816e+00 true resid norm 1.594064723991e-01 ||r(i)||/||b|| 8.917258801942e-03 9819 KSP Residual norm 1.080641662816e+00 % max 1.139914573562e+02 min 4.119742762810e-01 max/min 2.766955703769e+02 9820 KSP preconditioned resid norm 1.080641662815e+00 true resid norm 1.594064127438e-01 ||r(i)||/||b|| 8.917255464802e-03 9820 KSP Residual norm 1.080641662815e+00 % max 1.140011292201e+02 min 2.894526616195e-01 max/min 3.938506855740e+02 9821 KSP preconditioned resid norm 1.080641662815e+00 true resid norm 1.594064155702e-01 ||r(i)||/||b|| 8.917255622914e-03 9821 KSP Residual norm 1.080641662815e+00 % max 1.140392151549e+02 min 2.880577522236e-01 max/min 3.958901097943e+02 9822 KSP preconditioned resid norm 1.080641662815e+00 true resid norm 1.594064393436e-01 ||r(i)||/||b|| 8.917256952808e-03 9822 KSP Residual norm 1.080641662815e+00 % max 1.140392582401e+02 min 2.501715268375e-01 max/min 4.558442748531e+02 9823 KSP preconditioned resid norm 1.080641662812e+00 true resid norm 1.594064400637e-01 ||r(i)||/||b|| 8.917256993092e-03 9823 KSP Residual norm 1.080641662812e+00 % max 1.141360419319e+02 min 2.500712914815e-01 max/min 4.564140140028e+02 9824 KSP preconditioned resid norm 1.080641662811e+00 true resid norm 1.594064466562e-01 ||r(i)||/||b|| 8.917257361878e-03 9824 KSP Residual norm 1.080641662811e+00 % max 1.141719174947e+02 min 2.470521750742e-01 max/min 4.621368642491e+02 9825 KSP preconditioned resid norm 1.080641662786e+00 true resid norm 1.594065091771e-01 ||r(i)||/||b|| 8.917260859317e-03 9825 KSP Residual norm 1.080641662786e+00 % max 1.141770016737e+02 min 2.461724809746e-01 max/min 4.638089571251e+02 9826 KSP preconditioned resid norm 1.080641662783e+00 true resid norm 1.594066344484e-01 ||r(i)||/||b|| 8.917267867042e-03 9826 KSP Residual norm 1.080641662783e+00 % max 1.150251612560e+02 min 1.817295914017e-01 max/min 6.329467885162e+02 9827 KSP preconditioned resid norm 1.080641662672e+00 true resid norm 1.594070669773e-01 ||r(i)||/||b|| 8.917292062876e-03 9827 KSP Residual norm 1.080641662672e+00 % max 1.153670515865e+02 min 1.757835701538e-01 max/min 6.563016753248e+02 9828 KSP 
preconditioned resid norm 1.080641662655e+00 true resid norm 1.594072129068e-01 ||r(i)||/||b|| 8.917300226227e-03 9828 KSP Residual norm 1.080641662655e+00 % max 1.154419244262e+02 min 1.682009156098e-01 max/min 6.863335078032e+02 9829 KSP preconditioned resid norm 1.080641662270e+00 true resid norm 1.594087524825e-01 ||r(i)||/||b|| 8.917386350680e-03 9829 KSP Residual norm 1.080641662270e+00 % max 1.154420683680e+02 min 1.254371322814e-01 max/min 9.203181407961e+02 9830 KSP preconditioned resid norm 1.080641662088e+00 true resid norm 1.594088281904e-01 ||r(i)||/||b|| 8.917390585807e-03 9830 KSP Residual norm 1.080641662088e+00 % max 1.155791143272e+02 min 1.115275270000e-01 max/min 1.036328137421e+03 9831 KSP preconditioned resid norm 1.080641662001e+00 true resid norm 1.594097559916e-01 ||r(i)||/||b|| 8.917442487360e-03 9831 KSP Residual norm 1.080641662001e+00 % max 1.156952534277e+02 min 9.753070058662e-02 max/min 1.186244461814e+03 9832 KSP preconditioned resid norm 1.080641661905e+00 true resid norm 1.594102571355e-01 ||r(i)||/||b|| 8.917470521541e-03 9832 KSP Residual norm 1.080641661905e+00 % max 1.157175128244e+02 min 8.164806814555e-02 max/min 1.417271901866e+03 9833 KSP preconditioned resid norm 1.080641661892e+00 true resid norm 1.594100484838e-01 ||r(i)||/||b|| 8.917458849484e-03 9833 KSP Residual norm 1.080641661892e+00 % max 1.157285121903e+02 min 8.043319039381e-02 max/min 1.438815389813e+03 9834 KSP preconditioned resid norm 1.080641661854e+00 true resid norm 1.594101720988e-01 ||r(i)||/||b|| 8.917465764556e-03 9834 KSP Residual norm 1.080641661854e+00 % max 1.158251773437e+02 min 8.042916217223e-02 max/min 1.440089318544e+03 9835 KSP preconditioned resid norm 1.080641661853e+00 true resid norm 1.594101806342e-01 ||r(i)||/||b|| 8.917466242026e-03 9835 KSP Residual norm 1.080641661853e+00 % max 1.158318842362e+02 min 7.912558026309e-02 max/min 1.463899333832e+03 9836 KSP preconditioned resid norm 1.080641661674e+00 true resid norm 1.594101542582e-01 ||r(i)||/||b|| 8.917464766544e-03 9836 KSP Residual norm 1.080641661674e+00 % max 1.164750419425e+02 min 7.459519966941e-02 max/min 1.561428114124e+03 9837 KSP preconditioned resid norm 1.080641661644e+00 true resid norm 1.594100247000e-01 ||r(i)||/||b|| 8.917457519010e-03 9837 KSP Residual norm 1.080641661644e+00 % max 1.166576751361e+02 min 7.458624135723e-02 max/min 1.564064269942e+03 9838 KSP preconditioned resid norm 1.080641660043e+00 true resid norm 1.594090034525e-01 ||r(i)||/||b|| 8.917400390036e-03 9838 KSP Residual norm 1.080641660043e+00 % max 1.166901230302e+02 min 6.207904511461e-02 max/min 1.879702286251e+03 9839 KSP preconditioned resid norm 1.080641654279e+00 true resid norm 1.594083845247e-01 ||r(i)||/||b|| 8.917365766976e-03 9839 KSP Residual norm 1.080641654279e+00 % max 1.166909329256e+02 min 4.511818825181e-02 max/min 2.586339067391e+03 9840 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073617105e-01 ||r(i)||/||b|| 8.917308550363e-03 9840 KSP Residual norm 1.080641653856e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9841 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073617156e-01 ||r(i)||/||b|| 8.917308550646e-03 9841 KSP Residual norm 1.080641653856e+00 % max 3.063497434930e+01 min 3.063497434930e+01 max/min 1.000000000000e+00 9842 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073617242e-01 ||r(i)||/||b|| 8.917308551127e-03 9842 KSP Residual norm 1.080641653856e+00 % max 3.066566511835e+01 min 
3.845873341758e+00 max/min 7.973654458505e+00 9843 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073618265e-01 ||r(i)||/||b|| 8.917308556853e-03 9843 KSP Residual norm 1.080641653856e+00 % max 3.713360973998e+01 min 1.336323585560e+00 max/min 2.778788771015e+01 9844 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073618300e-01 ||r(i)||/||b|| 8.917308557050e-03 9844 KSP Residual norm 1.080641653856e+00 % max 4.496460969624e+01 min 1.226794432981e+00 max/min 3.665211423154e+01 9845 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073618358e-01 ||r(i)||/||b|| 8.917308557372e-03 9845 KSP Residual norm 1.080641653856e+00 % max 8.684887047460e+01 min 1.183106552555e+00 max/min 7.340747989862e+01 9846 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073621302e-01 ||r(i)||/||b|| 8.917308573843e-03 9846 KSP Residual norm 1.080641653856e+00 % max 9.802649587879e+01 min 1.168693234980e+00 max/min 8.387701147298e+01 9847 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073622957e-01 ||r(i)||/||b|| 8.917308583099e-03 9847 KSP Residual norm 1.080641653856e+00 % max 1.123314913547e+02 min 1.141275669292e+00 max/min 9.842625614229e+01 9848 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073623942e-01 ||r(i)||/||b|| 8.917308588608e-03 9848 KSP Residual norm 1.080641653856e+00 % max 1.134977479974e+02 min 1.090790466016e+00 max/min 1.040509167741e+02 9849 KSP preconditioned resid norm 1.080641653855e+00 true resid norm 1.594074129801e-01 ||r(i)||/||b|| 8.917311418404e-03 9849 KSP Residual norm 1.080641653855e+00 % max 1.139911565327e+02 min 4.122354439173e-01 max/min 2.765195429329e+02 9850 KSP preconditioned resid norm 1.080641653854e+00 true resid norm 1.594074720057e-01 ||r(i)||/||b|| 8.917314720319e-03 9850 KSP Residual norm 1.080641653854e+00 % max 1.140007137546e+02 min 2.895731636857e-01 max/min 3.936853550364e+02 9851 KSP preconditioned resid norm 1.080641653854e+00 true resid norm 1.594074692144e-01 ||r(i)||/||b|| 8.917314564169e-03 9851 KSP Residual norm 1.080641653854e+00 % max 1.140387247200e+02 min 2.881916049088e-01 max/min 3.957045339889e+02 9852 KSP preconditioned resid norm 1.080641653853e+00 true resid norm 1.594074456679e-01 ||r(i)||/||b|| 8.917313246972e-03 9852 KSP Residual norm 1.080641653853e+00 % max 1.140387740588e+02 min 2.501646808296e-01 max/min 4.558548140395e+02 9853 KSP preconditioned resid norm 1.080641653851e+00 true resid norm 1.594074449772e-01 ||r(i)||/||b|| 8.917313208332e-03 9853 KSP Residual norm 1.080641653851e+00 % max 1.141359916013e+02 min 2.500654158720e-01 max/min 4.564245367688e+02 9854 KSP preconditioned resid norm 1.080641653850e+00 true resid norm 1.594074383969e-01 ||r(i)||/||b|| 8.917312840228e-03 9854 KSP Residual norm 1.080641653850e+00 % max 1.141719416509e+02 min 2.470357905993e-01 max/min 4.621676129356e+02 9855 KSP preconditioned resid norm 1.080641653826e+00 true resid norm 1.594073765323e-01 ||r(i)||/||b|| 8.917309379499e-03 9855 KSP Residual norm 1.080641653826e+00 % max 1.141769960009e+02 min 2.461587608334e-01 max/min 4.638347853812e+02 9856 KSP preconditioned resid norm 1.080641653822e+00 true resid norm 1.594072524403e-01 ||r(i)||/||b|| 8.917302437746e-03 9856 KSP Residual norm 1.080641653822e+00 % max 1.150247983913e+02 min 1.817426977612e-01 max/min 6.328991470261e+02 9857 KSP preconditioned resid norm 1.080641653714e+00 true resid norm 1.594068250847e-01 ||r(i)||/||b|| 8.917278531310e-03 9857 KSP Residual norm 
1.080641653714e+00 % max 1.153659149296e+02 min 1.758216030104e-01 max/min 6.561532425728e+02 9858 KSP preconditioned resid norm 1.080641653697e+00 true resid norm 1.594066805198e-01 ||r(i)||/||b|| 8.917270444298e-03 9858 KSP Residual norm 1.080641653697e+00 % max 1.154409610504e+02 min 1.682252327368e-01 max/min 6.862285709010e+02 9859 KSP preconditioned resid norm 1.080641653319e+00 true resid norm 1.594051557657e-01 ||r(i)||/||b|| 8.917185148971e-03 9859 KSP Residual norm 1.080641653319e+00 % max 1.154410966341e+02 min 1.254606780252e-01 max/min 9.201376754146e+02 9860 KSP preconditioned resid norm 1.080641653140e+00 true resid norm 1.594050799233e-01 ||r(i)||/||b|| 8.917180906316e-03 9860 KSP Residual norm 1.080641653140e+00 % max 1.155780628675e+02 min 1.115225287992e-01 max/min 1.036365155202e+03 9861 KSP preconditioned resid norm 1.080641653054e+00 true resid norm 1.594041550812e-01 ||r(i)||/||b|| 8.917129170295e-03 9861 KSP Residual norm 1.080641653054e+00 % max 1.156949369211e+02 min 9.748220965886e-02 max/min 1.186831292869e+03 9862 KSP preconditioned resid norm 1.080641652960e+00 true resid norm 1.594036620365e-01 ||r(i)||/||b|| 8.917101589189e-03 9862 KSP Residual norm 1.080641652960e+00 % max 1.157177832797e+02 min 8.161751000999e-02 max/min 1.417805851533e+03 9863 KSP preconditioned resid norm 1.080641652947e+00 true resid norm 1.594038718371e-01 ||r(i)||/||b|| 8.917113325517e-03 9863 KSP Residual norm 1.080641652947e+00 % max 1.157297723960e+02 min 8.041700428565e-02 max/min 1.439120661408e+03 9864 KSP preconditioned resid norm 1.080641652912e+00 true resid norm 1.594037544972e-01 ||r(i)||/||b|| 8.917106761475e-03 9864 KSP Residual norm 1.080641652912e+00 % max 1.158244802295e+02 min 8.041252146115e-02 max/min 1.440378663980e+03 9865 KSP preconditioned resid norm 1.080641652910e+00 true resid norm 1.594037467642e-01 ||r(i)||/||b|| 8.917106328887e-03 9865 KSP Residual norm 1.080641652910e+00 % max 1.158301923080e+02 min 7.912032794006e-02 max/min 1.463975129069e+03 9866 KSP preconditioned resid norm 1.080641652734e+00 true resid norm 1.594037735388e-01 ||r(i)||/||b|| 8.917107826673e-03 9866 KSP Residual norm 1.080641652734e+00 % max 1.164681289889e+02 min 7.460712971447e-02 max/min 1.561085776047e+03 9867 KSP preconditioned resid norm 1.080641652705e+00 true resid norm 1.594039028009e-01 ||r(i)||/||b|| 8.917115057644e-03 9867 KSP Residual norm 1.080641652705e+00 % max 1.166491106270e+02 min 7.459753335320e-02 max/min 1.563712704477e+03 9868 KSP preconditioned resid norm 1.080641651132e+00 true resid norm 1.594049112430e-01 ||r(i)||/||b|| 8.917171470279e-03 9868 KSP Residual norm 1.080641651132e+00 % max 1.166811058685e+02 min 6.210409714784e-02 max/min 1.878798843026e+03 9869 KSP preconditioned resid norm 1.080641645474e+00 true resid norm 1.594055279416e-01 ||r(i)||/||b|| 8.917205968634e-03 9869 KSP Residual norm 1.080641645474e+00 % max 1.166818375393e+02 min 4.513482662958e-02 max/min 2.585184130582e+03 9870 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065322524e-01 ||r(i)||/||b|| 8.917262150160e-03 9870 KSP Residual norm 1.080641645065e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9871 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065322474e-01 ||r(i)||/||b|| 8.917262149880e-03 9871 KSP Residual norm 1.080641645065e+00 % max 3.063464541814e+01 min 3.063464541814e+01 max/min 1.000000000000e+00 9872 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065322389e-01 
||r(i)||/||b|| 8.917262149406e-03 9872 KSP Residual norm 1.080641645065e+00 % max 3.066529346532e+01 min 3.844443657370e+00 max/min 7.976523054651e+00 9873 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065321376e-01 ||r(i)||/||b|| 8.917262143737e-03 9873 KSP Residual norm 1.080641645065e+00 % max 3.714534104643e+01 min 1.336555480314e+00 max/min 2.779184373079e+01 9874 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065321340e-01 ||r(i)||/||b|| 8.917262143537e-03 9874 KSP Residual norm 1.080641645065e+00 % max 4.500878758461e+01 min 1.226798739269e+00 max/min 3.668799628162e+01 9875 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065321283e-01 ||r(i)||/||b|| 8.917262143216e-03 9875 KSP Residual norm 1.080641645065e+00 % max 8.688179296761e+01 min 1.183120879361e+00 max/min 7.343441780400e+01 9876 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065318413e-01 ||r(i)||/||b|| 8.917262127165e-03 9876 KSP Residual norm 1.080641645065e+00 % max 9.802498627683e+01 min 1.168935165788e+00 max/min 8.385836028017e+01 9877 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065316788e-01 ||r(i)||/||b|| 8.917262118072e-03 9877 KSP Residual norm 1.080641645065e+00 % max 1.122437663284e+02 min 1.141666474192e+00 max/min 9.831572430803e+01 9878 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065315811e-01 ||r(i)||/||b|| 8.917262112605e-03 9878 KSP Residual norm 1.080641645065e+00 % max 1.134942465219e+02 min 1.090790184702e+00 max/min 1.040477335730e+02 9879 KSP preconditioned resid norm 1.080641645064e+00 true resid norm 1.594064814200e-01 ||r(i)||/||b|| 8.917259306578e-03 9879 KSP Residual norm 1.080641645064e+00 % max 1.139914544856e+02 min 4.119772603942e-01 max/min 2.766935591945e+02 9880 KSP preconditioned resid norm 1.080641645063e+00 true resid norm 1.594064229585e-01 ||r(i)||/||b|| 8.917256036220e-03 9880 KSP Residual norm 1.080641645063e+00 % max 1.140011250742e+02 min 2.894539413393e-01 max/min 3.938489299774e+02 9881 KSP preconditioned resid norm 1.080641645063e+00 true resid norm 1.594064257287e-01 ||r(i)||/||b|| 8.917256191184e-03 9881 KSP Residual norm 1.080641645063e+00 % max 1.140392103921e+02 min 2.880591764056e-01 max/min 3.958881359556e+02 9882 KSP preconditioned resid norm 1.080641645063e+00 true resid norm 1.594064490294e-01 ||r(i)||/||b|| 8.917257494633e-03 9882 KSP Residual norm 1.080641645063e+00 % max 1.140392535477e+02 min 2.501714617108e-01 max/min 4.558443747654e+02 9883 KSP preconditioned resid norm 1.080641645061e+00 true resid norm 1.594064497349e-01 ||r(i)||/||b|| 8.917257534101e-03 9883 KSP Residual norm 1.080641645061e+00 % max 1.141360416004e+02 min 2.500712361002e-01 max/min 4.564141137554e+02 9884 KSP preconditioned resid norm 1.080641645059e+00 true resid norm 1.594064561966e-01 ||r(i)||/||b|| 8.917257895569e-03 9884 KSP Residual norm 1.080641645059e+00 % max 1.141719177119e+02 min 2.470520232306e-01 max/min 4.621371491678e+02 9885 KSP preconditioned resid norm 1.080641645035e+00 true resid norm 1.594065174658e-01 ||r(i)||/||b|| 8.917261322993e-03 9885 KSP Residual norm 1.080641645035e+00 % max 1.141770016402e+02 min 2.461723435101e-01 max/min 4.638092159834e+02 9886 KSP preconditioned resid norm 1.080641645032e+00 true resid norm 1.594066402497e-01 ||r(i)||/||b|| 8.917268191569e-03 9886 KSP Residual norm 1.080641645032e+00 % max 1.150251585488e+02 min 1.817296775460e-01 max/min 6.329464735865e+02 9887 KSP preconditioned resid norm 
1.080641644925e+00 true resid norm 1.594070641129e-01 ||r(i)||/||b|| 8.917291902641e-03 9887 KSP Residual norm 1.080641644925e+00 % max 1.153670431587e+02 min 1.757838889123e-01 max/min 6.563004372733e+02 9888 KSP preconditioned resid norm 1.080641644909e+00 true resid norm 1.594072071224e-01 ||r(i)||/||b|| 8.917299902647e-03 9888 KSP Residual norm 1.080641644909e+00 % max 1.154419177071e+02 min 1.682011082589e-01 max/min 6.863326817642e+02 9889 KSP preconditioned resid norm 1.080641644540e+00 true resid norm 1.594087159021e-01 ||r(i)||/||b|| 8.917384304358e-03 9889 KSP Residual norm 1.080641644540e+00 % max 1.154420615965e+02 min 1.254373423966e-01 max/min 9.203165452239e+02 9890 KSP preconditioned resid norm 1.080641644365e+00 true resid norm 1.594087901062e-01 ||r(i)||/||b|| 8.917388455364e-03 9890 KSP Residual norm 1.080641644365e+00 % max 1.155791063432e+02 min 1.115273566807e-01 max/min 1.036329648465e+03 9891 KSP preconditioned resid norm 1.080641644282e+00 true resid norm 1.594096994140e-01 ||r(i)||/||b|| 8.917439322388e-03 9891 KSP Residual norm 1.080641644282e+00 % max 1.156952495514e+02 min 9.753038620512e-02 max/min 1.186248245835e+03 9892 KSP preconditioned resid norm 1.080641644189e+00 true resid norm 1.594101905102e-01 ||r(i)||/||b|| 8.917466794493e-03 9892 KSP Residual norm 1.080641644189e+00 % max 1.157175115527e+02 min 8.164774923625e-02 max/min 1.417277422037e+03 9893 KSP preconditioned resid norm 1.080641644177e+00 true resid norm 1.594099860409e-01 ||r(i)||/||b|| 8.917455356405e-03 9893 KSP Residual norm 1.080641644177e+00 % max 1.157285171248e+02 min 8.043299897298e-02 max/min 1.438818875368e+03 9894 KSP preconditioned resid norm 1.080641644141e+00 true resid norm 1.594101071411e-01 ||r(i)||/||b|| 8.917462130798e-03 9894 KSP Residual norm 1.080641644141e+00 % max 1.158251669093e+02 min 8.042896682578e-02 max/min 1.440092686509e+03 9895 KSP preconditioned resid norm 1.080641644139e+00 true resid norm 1.594101154949e-01 ||r(i)||/||b|| 8.917462598110e-03 9895 KSP Residual norm 1.080641644139e+00 % max 1.158318659802e+02 min 7.912549560750e-02 max/min 1.463900669321e+03 9896 KSP preconditioned resid norm 1.080641643967e+00 true resid norm 1.594100896394e-01 ||r(i)||/||b|| 8.917461151744e-03 9896 KSP Residual norm 1.080641643967e+00 % max 1.164749819823e+02 min 7.459530237415e-02 max/min 1.561425160502e+03 9897 KSP preconditioned resid norm 1.080641643939e+00 true resid norm 1.594099626317e-01 ||r(i)||/||b|| 8.917454046886e-03 9897 KSP Residual norm 1.080641643939e+00 % max 1.166576013049e+02 min 7.458633610448e-02 max/min 1.564061293230e+03 9898 KSP preconditioned resid norm 1.080641642401e+00 true resid norm 1.594089618420e-01 ||r(i)||/||b|| 8.917398062328e-03 9898 KSP Residual norm 1.080641642401e+00 % max 1.166900450421e+02 min 6.207924609942e-02 max/min 1.879694944349e+03 9899 KSP preconditioned resid norm 1.080641636866e+00 true resid norm 1.594083552587e-01 ||r(i)||/||b|| 8.917364129827e-03 9899 KSP Residual norm 1.080641636866e+00 % max 1.166908543099e+02 min 4.511837692126e-02 max/min 2.586326509785e+03 9900 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073529026e-01 ||r(i)||/||b|| 8.917308057647e-03 9900 KSP Residual norm 1.080641636459e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9901 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073529076e-01 ||r(i)||/||b|| 8.917308057926e-03 9901 KSP Residual norm 1.080641636459e+00 % max 3.063497290246e+01 min 3.063497290246e+01 max/min 
1.000000000000e+00 9902 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073529160e-01 ||r(i)||/||b|| 8.917308058394e-03 9902 KSP Residual norm 1.080641636459e+00 % max 3.066566310477e+01 min 3.845858371013e+00 max/min 7.973684973919e+00 9903 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073530163e-01 ||r(i)||/||b|| 8.917308064007e-03 9903 KSP Residual norm 1.080641636459e+00 % max 3.713375877488e+01 min 1.336326817614e+00 max/min 2.778793202787e+01 9904 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073530198e-01 ||r(i)||/||b|| 8.917308064200e-03 9904 KSP Residual norm 1.080641636459e+00 % max 4.496516516046e+01 min 1.226794642680e+00 max/min 3.665256074336e+01 9905 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073530254e-01 ||r(i)||/||b|| 8.917308064515e-03 9905 KSP Residual norm 1.080641636459e+00 % max 8.684929340512e+01 min 1.183106694605e+00 max/min 7.340782855945e+01 9906 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073533139e-01 ||r(i)||/||b|| 8.917308080655e-03 9906 KSP Residual norm 1.080641636459e+00 % max 9.802647163348e+01 min 1.168695697992e+00 max/min 8.387681395754e+01 9907 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073534761e-01 ||r(i)||/||b|| 8.917308089727e-03 9907 KSP Residual norm 1.080641636459e+00 % max 1.123306036185e+02 min 1.141279831413e+00 max/min 9.842511934993e+01 9908 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073535726e-01 ||r(i)||/||b|| 8.917308095124e-03 9908 KSP Residual norm 1.080641636459e+00 % max 1.134977112090e+02 min 1.090790470231e+00 max/min 1.040508826457e+02 9909 KSP preconditioned resid norm 1.080641636458e+00 true resid norm 1.594074031465e-01 ||r(i)||/||b|| 8.917310868307e-03 9909 KSP Residual norm 1.080641636458e+00 % max 1.139911596762e+02 min 4.122331938771e-01 max/min 2.765210598500e+02 9910 KSP preconditioned resid norm 1.080641636457e+00 true resid norm 1.594074609911e-01 ||r(i)||/||b|| 8.917314104156e-03 9910 KSP Residual norm 1.080641636457e+00 % max 1.140007179199e+02 min 2.895720290537e-01 max/min 3.936869120007e+02 9911 KSP preconditioned resid norm 1.080641636457e+00 true resid norm 1.594074582552e-01 ||r(i)||/||b|| 8.917313951111e-03 9911 KSP Residual norm 1.080641636457e+00 % max 1.140387297689e+02 min 2.881903470643e-01 max/min 3.957062786128e+02 9912 KSP preconditioned resid norm 1.080641636457e+00 true resid norm 1.594074351775e-01 ||r(i)||/||b|| 8.917312660133e-03 9912 KSP Residual norm 1.080641636457e+00 % max 1.140387790534e+02 min 2.501647526543e-01 max/min 4.558547031242e+02 9913 KSP preconditioned resid norm 1.080641636455e+00 true resid norm 1.594074345003e-01 ||r(i)||/||b|| 8.917312622252e-03 9913 KSP Residual norm 1.080641636455e+00 % max 1.141359922733e+02 min 2.500654781355e-01 max/min 4.564244258117e+02 9914 KSP preconditioned resid norm 1.080641636454e+00 true resid norm 1.594074280517e-01 ||r(i)||/||b|| 8.917312261516e-03 9914 KSP Residual norm 1.080641636454e+00 % max 1.141719413857e+02 min 2.470359663864e-01 max/min 4.621672829905e+02 9915 KSP preconditioned resid norm 1.080641636430e+00 true resid norm 1.594073674253e-01 ||r(i)||/||b|| 8.917308870053e-03 9915 KSP Residual norm 1.080641636430e+00 % max 1.141769960812e+02 min 2.461588977988e-01 max/min 4.638345276249e+02 9916 KSP preconditioned resid norm 1.080641636427e+00 true resid norm 1.594072457998e-01 ||r(i)||/||b|| 8.917302066275e-03 9916 KSP Residual norm 1.080641636427e+00 % max 
1.150248029458e+02 min 1.817425233097e-01 max/min 6.328997795954e+02 9917 KSP preconditioned resid norm 1.080641636323e+00 true resid norm 1.594068269925e-01 ||r(i)||/||b|| 8.917278638033e-03 9917 KSP Residual norm 1.080641636323e+00 % max 1.153659292411e+02 min 1.758211627252e-01 max/min 6.561549670868e+02 9918 KSP preconditioned resid norm 1.080641636306e+00 true resid norm 1.594066853231e-01 ||r(i)||/||b|| 8.917270712996e-03 9918 KSP Residual norm 1.080641636306e+00 % max 1.154409736018e+02 min 1.682249410195e-01 max/min 6.862298354942e+02 9919 KSP preconditioned resid norm 1.080641635944e+00 true resid norm 1.594051910900e-01 ||r(i)||/||b|| 8.917187125026e-03 9919 KSP Residual norm 1.080641635944e+00 % max 1.154411092996e+02 min 1.254604202789e-01 max/min 9.201396667011e+02 9920 KSP preconditioned resid norm 1.080641635771e+00 true resid norm 1.594051167722e-01 ||r(i)||/||b|| 8.917182967656e-03 9920 KSP Residual norm 1.080641635771e+00 % max 1.155780759198e+02 min 1.115224613869e-01 max/min 1.036365898694e+03 9921 KSP preconditioned resid norm 1.080641635689e+00 true resid norm 1.594042104954e-01 ||r(i)||/||b|| 8.917132270191e-03 9921 KSP Residual norm 1.080641635689e+00 % max 1.156949393598e+02 min 9.748286841768e-02 max/min 1.186823297649e+03 9922 KSP preconditioned resid norm 1.080641635599e+00 true resid norm 1.594037272785e-01 ||r(i)||/||b|| 8.917105238852e-03 9922 KSP Residual norm 1.080641635599e+00 % max 1.157177765183e+02 min 8.161780203814e-02 max/min 1.417800695787e+03 9923 KSP preconditioned resid norm 1.080641635586e+00 true resid norm 1.594039328090e-01 ||r(i)||/||b|| 8.917116736308e-03 9923 KSP Residual norm 1.080641635586e+00 % max 1.157297518468e+02 min 8.041713659406e-02 max/min 1.439118038125e+03 9924 KSP preconditioned resid norm 1.080641635552e+00 true resid norm 1.594038177599e-01 ||r(i)||/||b|| 8.917110300416e-03 9924 KSP Residual norm 1.080641635552e+00 % max 1.158244835100e+02 min 8.041265899129e-02 max/min 1.440376241290e+03 9925 KSP preconditioned resid norm 1.080641635551e+00 true resid norm 1.594038101782e-01 ||r(i)||/||b|| 8.917109876291e-03 9925 KSP Residual norm 1.080641635551e+00 % max 1.158302075022e+02 min 7.912034826741e-02 max/min 1.463974944988e+03 9926 KSP preconditioned resid norm 1.080641635382e+00 true resid norm 1.594038364112e-01 ||r(i)||/||b|| 8.917111343775e-03 9926 KSP Residual norm 1.080641635382e+00 % max 1.164682071358e+02 min 7.460699390243e-02 max/min 1.561089665242e+03 9927 KSP preconditioned resid norm 1.080641635354e+00 true resid norm 1.594039631076e-01 ||r(i)||/||b|| 8.917118431221e-03 9927 KSP Residual norm 1.080641635354e+00 % max 1.166492079885e+02 min 7.459740228247e-02 max/min 1.563716757144e+03 9928 KSP preconditioned resid norm 1.080641633843e+00 true resid norm 1.594049513925e-01 ||r(i)||/||b|| 8.917173716256e-03 9928 KSP Residual norm 1.080641633843e+00 % max 1.166812081048e+02 min 6.210379986959e-02 max/min 1.878809482669e+03 9929 KSP preconditioned resid norm 1.080641628410e+00 true resid norm 1.594055557080e-01 ||r(i)||/||b|| 8.917207521894e-03 9929 KSP Residual norm 1.080641628410e+00 % max 1.166819406955e+02 min 4.513468212670e-02 max/min 2.585194692807e+03 9930 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065400862e-01 ||r(i)||/||b|| 8.917262588389e-03 9930 KSP Residual norm 1.080641628017e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9931 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065400814e-01 ||r(i)||/||b|| 8.917262588116e-03 
9931 KSP Residual norm 1.080641628017e+00 % max 3.063465052437e+01 min 3.063465052437e+01 max/min 1.000000000000e+00 9932 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065400730e-01 ||r(i)||/||b|| 8.917262587648e-03 9932 KSP Residual norm 1.080641628017e+00 % max 3.066529886382e+01 min 3.844457299464e+00 max/min 7.976496154112e+00 9933 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065399737e-01 ||r(i)||/||b|| 8.917262582094e-03 9933 KSP Residual norm 1.080641628017e+00 % max 3.714525447417e+01 min 1.336554051053e+00 max/min 2.779180867762e+01 9934 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065399702e-01 ||r(i)||/||b|| 8.917262581897e-03 9934 KSP Residual norm 1.080641628017e+00 % max 4.500845605790e+01 min 1.226798858744e+00 max/min 3.668772247146e+01 9935 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065399646e-01 ||r(i)||/||b|| 8.917262581583e-03 9935 KSP Residual norm 1.080641628017e+00 % max 8.688155371236e+01 min 1.183120734870e+00 max/min 7.343422454843e+01 9936 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065396833e-01 ||r(i)||/||b|| 8.917262565850e-03 9936 KSP Residual norm 1.080641628017e+00 % max 9.802499250688e+01 min 1.168932790974e+00 max/min 8.385853597724e+01 9937 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065395240e-01 ||r(i)||/||b|| 8.917262556936e-03 9937 KSP Residual norm 1.080641628017e+00 % max 1.122446326752e+02 min 1.141662807959e+00 max/min 9.831679887675e+01 9938 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065394282e-01 ||r(i)||/||b|| 8.917262551579e-03 9938 KSP Residual norm 1.080641628017e+00 % max 1.134942798726e+02 min 1.090790194356e+00 max/min 1.040477632269e+02 9939 KSP preconditioned resid norm 1.080641628016e+00 true resid norm 1.594064902727e-01 ||r(i)||/||b|| 8.917259801801e-03 9939 KSP Residual norm 1.080641628016e+00 % max 1.139914516679e+02 min 4.119801790176e-01 max/min 2.766915921531e+02 9940 KSP preconditioned resid norm 1.080641628015e+00 true resid norm 1.594064329818e-01 ||r(i)||/||b|| 8.917256596927e-03 9940 KSP Residual norm 1.080641628015e+00 % max 1.140011210085e+02 min 2.894551947128e-01 max/min 3.938472105212e+02 9941 KSP preconditioned resid norm 1.080641628015e+00 true resid norm 1.594064356968e-01 ||r(i)||/||b|| 8.917256748803e-03 9941 KSP Residual norm 1.080641628015e+00 % max 1.140392057187e+02 min 2.880605712140e-01 max/min 3.958862028153e+02 9942 KSP preconditioned resid norm 1.080641628014e+00 true resid norm 1.594064585337e-01 ||r(i)||/||b|| 8.917258026309e-03 9942 KSP Residual norm 1.080641628014e+00 % max 1.140392489431e+02 min 2.501713977769e-01 max/min 4.558444728556e+02 9943 KSP preconditioned resid norm 1.080641628013e+00 true resid norm 1.594064592249e-01 ||r(i)||/||b|| 8.917258064976e-03 9943 KSP Residual norm 1.080641628013e+00 % max 1.141360412719e+02 min 2.500711817244e-01 max/min 4.564142116850e+02 9944 KSP preconditioned resid norm 1.080641628011e+00 true resid norm 1.594064655583e-01 ||r(i)||/||b|| 8.917258419265e-03 9944 KSP Residual norm 1.080641628011e+00 % max 1.141719179255e+02 min 2.470518740808e-01 max/min 4.621374290332e+02 9945 KSP preconditioned resid norm 1.080641627988e+00 true resid norm 1.594065256003e-01 ||r(i)||/||b|| 8.917261778039e-03 9945 KSP Residual norm 1.080641627988e+00 % max 1.141770016071e+02 min 2.461722087102e-01 max/min 4.638094698230e+02 9946 KSP preconditioned resid norm 1.080641627985e+00 true resid norm 
1.594066459440e-01 ||r(i)||/||b|| 8.917268510113e-03 9946 KSP Residual norm 1.080641627985e+00 % max 1.150251558754e+02 min 1.817297629761e-01 max/min 6.329461613313e+02 9947 KSP preconditioned resid norm 1.080641627883e+00 true resid norm 1.594070613108e-01 ||r(i)||/||b|| 8.917291745887e-03 9947 KSP Residual norm 1.080641627883e+00 % max 1.153670348347e+02 min 1.757842027946e-01 max/min 6.562992180216e+02 9948 KSP preconditioned resid norm 1.080641627867e+00 true resid norm 1.594072014571e-01 ||r(i)||/||b|| 8.917299585729e-03 9948 KSP Residual norm 1.080641627867e+00 % max 1.154419110590e+02 min 1.682012982555e-01 max/min 6.863318669729e+02 9949 KSP preconditioned resid norm 1.080641627512e+00 true resid norm 1.594086800399e-01 ||r(i)||/||b|| 8.917382298211e-03 9949 KSP Residual norm 1.080641627512e+00 % max 1.154420548965e+02 min 1.254375490145e-01 max/min 9.203149758861e+02 9950 KSP preconditioned resid norm 1.080641627344e+00 true resid norm 1.594087527690e-01 ||r(i)||/||b|| 8.917386366706e-03 9950 KSP Residual norm 1.080641627344e+00 % max 1.155790984624e+02 min 1.115271921586e-01 max/min 1.036331106570e+03 9951 KSP preconditioned resid norm 1.080641627264e+00 true resid norm 1.594096439404e-01 ||r(i)||/||b|| 8.917436219173e-03 9951 KSP Residual norm 1.080641627264e+00 % max 1.156952457641e+02 min 9.753007450777e-02 max/min 1.186251998145e+03 9952 KSP preconditioned resid norm 1.080641627176e+00 true resid norm 1.594101251850e-01 ||r(i)||/||b|| 8.917463140178e-03 9952 KSP Residual norm 1.080641627176e+00 % max 1.157175103847e+02 min 8.164743677427e-02 max/min 1.417282831604e+03 9953 KSP preconditioned resid norm 1.080641627164e+00 true resid norm 1.594099248157e-01 ||r(i)||/||b|| 8.917451931444e-03 9953 KSP Residual norm 1.080641627164e+00 % max 1.157285221143e+02 min 8.043281187160e-02 max/min 1.438822284357e+03 9954 KSP preconditioned resid norm 1.080641627129e+00 true resid norm 1.594100434515e-01 ||r(i)||/||b|| 8.917458567979e-03 9954 KSP Residual norm 1.080641627129e+00 % max 1.158251567433e+02 min 8.042877586324e-02 max/min 1.440095979333e+03 9955 KSP preconditioned resid norm 1.080641627127e+00 true resid norm 1.594100516277e-01 ||r(i)||/||b|| 8.917459025356e-03 9955 KSP Residual norm 1.080641627127e+00 % max 1.158318480981e+02 min 7.912541326160e-02 max/min 1.463901966807e+03 9956 KSP preconditioned resid norm 1.080641626963e+00 true resid norm 1.594100262829e-01 ||r(i)||/||b|| 8.917457607557e-03 9956 KSP Residual norm 1.080641626963e+00 % max 1.164749229958e+02 min 7.459540346436e-02 max/min 1.561422253738e+03 9957 KSP preconditioned resid norm 1.080641626935e+00 true resid norm 1.594099017783e-01 ||r(i)||/||b|| 8.917450642725e-03 9957 KSP Residual norm 1.080641626935e+00 % max 1.166575286636e+02 min 7.458642942097e-02 max/min 1.564058362483e+03 9958 KSP preconditioned resid norm 1.080641625459e+00 true resid norm 1.594089210473e-01 ||r(i)||/||b|| 8.917395780257e-03 9958 KSP Residual norm 1.080641625459e+00 % max 1.166899683166e+02 min 6.207944430169e-02 max/min 1.879687707085e+03 9959 KSP preconditioned resid norm 1.080641620143e+00 true resid norm 1.594083265698e-01 ||r(i)||/||b|| 8.917362524958e-03 9959 KSP Residual norm 1.080641620143e+00 % max 1.166907769657e+02 min 4.511856151865e-02 max/min 2.586314213883e+03 9960 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073442756e-01 ||r(i)||/||b|| 8.917307575050e-03 9960 KSP Residual norm 1.080641619752e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 9961 KSP preconditioned 
resid norm 1.080641619752e+00 true resid norm 1.594073442805e-01 ||r(i)||/||b|| 8.917307575322e-03 9961 KSP Residual norm 1.080641619752e+00 % max 3.063497144523e+01 min 3.063497144523e+01 max/min 1.000000000000e+00 9962 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073442888e-01 ||r(i)||/||b|| 8.917307575783e-03 9962 KSP Residual norm 1.080641619752e+00 % max 3.066566109468e+01 min 3.845843703829e+00 max/min 7.973714861098e+00 9963 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073443871e-01 ||r(i)||/||b|| 8.917307581283e-03 9963 KSP Residual norm 1.080641619752e+00 % max 3.713390425813e+01 min 1.336329967989e+00 max/min 2.778797538606e+01 9964 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073443904e-01 ||r(i)||/||b|| 8.917307581472e-03 9964 KSP Residual norm 1.080641619752e+00 % max 4.496570748601e+01 min 1.226794844829e+00 max/min 3.665299677086e+01 9965 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073443960e-01 ||r(i)||/||b|| 8.917307581781e-03 9965 KSP Residual norm 1.080641619752e+00 % max 8.684970616180e+01 min 1.183106833938e+00 max/min 7.340816878957e+01 9966 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073446787e-01 ||r(i)||/||b|| 8.917307597597e-03 9966 KSP Residual norm 1.080641619752e+00 % max 9.802644805035e+01 min 1.168698112506e+00 max/min 8.387662049028e+01 9967 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073448376e-01 ||r(i)||/||b|| 8.917307606488e-03 9967 KSP Residual norm 1.080641619752e+00 % max 1.123297332549e+02 min 1.141283907756e+00 max/min 9.842400518531e+01 9968 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073449323e-01 ||r(i)||/||b|| 8.917307611782e-03 9968 KSP Residual norm 1.080641619752e+00 % max 1.134976751689e+02 min 1.090790474217e+00 max/min 1.040508492250e+02 9969 KSP preconditioned resid norm 1.080641619751e+00 true resid norm 1.594073935137e-01 ||r(i)||/||b|| 8.917310329444e-03 9969 KSP Residual norm 1.080641619751e+00 % max 1.139911627557e+02 min 4.122309806702e-01 max/min 2.765225519207e+02 9970 KSP preconditioned resid norm 1.080641619750e+00 true resid norm 1.594074502003e-01 ||r(i)||/||b|| 8.917313500516e-03 9970 KSP Residual norm 1.080641619750e+00 % max 1.140007220037e+02 min 2.895709152543e-01 max/min 3.936884403725e+02 9971 KSP preconditioned resid norm 1.080641619750e+00 true resid norm 1.594074475189e-01 ||r(i)||/||b|| 8.917313350515e-03 9971 KSP Residual norm 1.080641619750e+00 % max 1.140387347164e+02 min 2.881891122657e-01 max/min 3.957079912555e+02 9972 KSP preconditioned resid norm 1.080641619750e+00 true resid norm 1.594074249007e-01 ||r(i)||/||b|| 8.917312085247e-03 9972 KSP Residual norm 1.080641619750e+00 % max 1.140387839472e+02 min 2.501648230072e-01 max/min 4.558545944886e+02 9973 KSP preconditioned resid norm 1.080641619748e+00 true resid norm 1.594074242369e-01 ||r(i)||/||b|| 8.917312048114e-03 9973 KSP Residual norm 1.080641619748e+00 % max 1.141359929290e+02 min 2.500655391090e-01 max/min 4.564243171438e+02 9974 KSP preconditioned resid norm 1.080641619747e+00 true resid norm 1.594074179174e-01 ||r(i)||/||b|| 8.917311694599e-03 9974 KSP Residual norm 1.080641619747e+00 % max 1.141719411262e+02 min 2.470361384981e-01 max/min 4.621669599450e+02 9975 KSP preconditioned resid norm 1.080641619724e+00 true resid norm 1.594073585052e-01 ||r(i)||/||b|| 8.917308371056e-03 9975 KSP Residual norm 1.080641619724e+00 % max 1.141769961595e+02 min 2.461590320913e-01 
max/min 4.638342748974e+02 9976 KSP preconditioned resid norm 1.080641619721e+00 true resid norm 1.594072392992e-01 ||r(i)||/||b|| 8.917301702630e-03 9976 KSP Residual norm 1.080641619721e+00 % max 1.150248073926e+02 min 1.817423531140e-01 max/min 6.329003967526e+02 9977 KSP preconditioned resid norm 1.080641619621e+00 true resid norm 1.594068288737e-01 ||r(i)||/||b|| 8.917278743269e-03 9977 KSP Residual norm 1.080641619621e+00 % max 1.153659432133e+02 min 1.758207322277e-01 max/min 6.561566531518e+02 9978 KSP preconditioned resid norm 1.080641619605e+00 true resid norm 1.594066900433e-01 ||r(i)||/||b|| 8.917270977044e-03 9978 KSP Residual norm 1.080641619605e+00 % max 1.154409858489e+02 min 1.682246559484e-01 max/min 6.862310711714e+02 9979 KSP preconditioned resid norm 1.080641619257e+00 true resid norm 1.594052257365e-01 ||r(i)||/||b|| 8.917189063164e-03 9979 KSP Residual norm 1.080641619257e+00 % max 1.154411216580e+02 min 1.254601679658e-01 max/min 9.201416157000e+02 9980 KSP preconditioned resid norm 1.080641619092e+00 true resid norm 1.594051529133e-01 ||r(i)||/||b|| 8.917184989407e-03 9980 KSP Residual norm 1.080641619092e+00 % max 1.155780886655e+02 min 1.115223976110e-01 max/min 1.036366605645e+03 9981 KSP preconditioned resid norm 1.080641619012e+00 true resid norm 1.594042648383e-01 ||r(i)||/||b|| 8.917135310152e-03 9981 KSP Residual norm 1.080641619012e+00 % max 1.156949417657e+02 min 9.748351070064e-02 max/min 1.186815502788e+03 9982 KSP preconditioned resid norm 1.080641618926e+00 true resid norm 1.594037912594e-01 ||r(i)||/||b|| 8.917108817969e-03 9982 KSP Residual norm 1.080641618926e+00 % max 1.157177699729e+02 min 8.161808862755e-02 max/min 1.417795637202e+03 9983 KSP preconditioned resid norm 1.080641618914e+00 true resid norm 1.594039926066e-01 ||r(i)||/||b|| 8.917120081407e-03 9983 KSP Residual norm 1.080641618914e+00 % max 1.157297318664e+02 min 8.041726690542e-02 max/min 1.439115457660e+03 9984 KSP preconditioned resid norm 1.080641618881e+00 true resid norm 1.594038798061e-01 ||r(i)||/||b|| 8.917113771305e-03 9984 KSP Residual norm 1.080641618881e+00 % max 1.158244868066e+02 min 8.041279440761e-02 max/min 1.440373856671e+03 9985 KSP preconditioned resid norm 1.080641618880e+00 true resid norm 1.594038723728e-01 ||r(i)||/||b|| 8.917113355483e-03 9985 KSP Residual norm 1.080641618880e+00 % max 1.158302224434e+02 min 7.912036884330e-02 max/min 1.463974753110e+03 9986 KSP preconditioned resid norm 1.080641618717e+00 true resid norm 1.594038980749e-01 ||r(i)||/||b|| 8.917114793267e-03 9986 KSP Residual norm 1.080641618717e+00 % max 1.164682835790e+02 min 7.460686106511e-02 max/min 1.561093469371e+03 9987 KSP preconditioned resid norm 1.080641618691e+00 true resid norm 1.594040222541e-01 ||r(i)||/||b|| 8.917121739901e-03 9987 KSP Residual norm 1.080641618691e+00 % max 1.166493032152e+02 min 7.459727412837e-02 max/min 1.563720720069e+03 9988 KSP preconditioned resid norm 1.080641617240e+00 true resid norm 1.594049907736e-01 ||r(i)||/||b|| 8.917175919248e-03 9988 KSP Residual norm 1.080641617240e+00 % max 1.166813081045e+02 min 6.210350927266e-02 max/min 1.878819884271e+03 9989 KSP preconditioned resid norm 1.080641612022e+00 true resid norm 1.594055829488e-01 ||r(i)||/||b|| 8.917209045757e-03 9989 KSP Residual norm 1.080641612022e+00 % max 1.166820415947e+02 min 4.513453999933e-02 max/min 2.585205069033e+03 9990 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477745e-01 ||r(i)||/||b|| 8.917263018470e-03 9990 KSP Residual norm 1.080641611645e+00 % 
max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
9991 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477696e-01 ||r(i)||/||b|| 8.917263018201e-03
9991 KSP Residual norm 1.080641611645e+00 % max 3.063465549435e+01 min 3.063465549435e+01 max/min 1.000000000000e+00
9992 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477615e-01 ||r(i)||/||b|| 8.917263017744e-03
9992 KSP Residual norm 1.080641611645e+00 % max 3.066530412302e+01 min 3.844470687795e+00 max/min 7.976469744031e+00
9993 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476642e-01 ||r(i)||/||b|| 8.917263012302e-03
9993 KSP Residual norm 1.080641611645e+00 % max 3.714516899070e+01 min 1.336552632161e+00 max/min 2.779177422339e+01
9994 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476608e-01 ||r(i)||/||b|| 8.917263012110e-03
9994 KSP Residual norm 1.080641611645e+00 % max 4.500812884571e+01 min 1.226798972667e+00 max/min 3.668745234425e+01
9995 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476552e-01 ||r(i)||/||b|| 8.917263011801e-03
9995 KSP Residual norm 1.080641611645e+00 % max 8.688131738345e+01 min 1.183120593233e+00 max/min 7.343403358914e+01
9996 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065473796e-01 ||r(i)||/||b|| 8.917262996379e-03
9996 KSP Residual norm 1.080641611645e+00 % max 9.802499878957e+01 min 1.168930461656e+00 max/min 8.385870845619e+01
9997 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065472234e-01 ||r(i)||/||b|| 8.917262987642e-03
9997 KSP Residual norm 1.080641611645e+00 % max 1.122454823211e+02 min 1.141659208813e+00 max/min 9.831785304632e+01
9998 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065471295e-01 ||r(i)||/||b|| 8.917262982389e-03
9998 KSP Residual norm 1.080641611645e+00 % max 1.134943126018e+02 min 1.090790203681e+00 max/min 1.040477923425e+02
9999 KSP preconditioned resid norm 1.080641611644e+00 true resid norm 1.594064989598e-01 ||r(i)||/||b|| 8.917260287762e-03
9999 KSP Residual norm 1.080641611644e+00 % max 1.139914489020e+02 min 4.119830336343e-01 max/min 2.766896682528e+02
10000 KSP preconditioned resid norm 1.080641611643e+00 true resid norm 1.594064428168e-01 ||r(i)||/||b|| 8.917257147097e-03
10000 KSP Residual norm 1.080641611643e+00 % max 1.140011170215e+02 min 2.894564222709e-01 max/min 3.938455264772e+02

The number of iterations then exceeds 10000 and the program stops with convergenceReason = -3.

Best,
Meng

------------------ Original ------------------
From: "Barry Smith"
Send time: Friday, Apr 25, 2014 6:27 AM
To: "Oo"
Cc: "Dave May"; "petsc-users"
Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3

On Apr 24, 2014, at 5:24 PM, Oo wrote:

>
> Configure PETSc again?

   No, the command line when you run the program.

   Barry

>
> ------------------ Original ------------------
> From: "Dave May"
> Send time: Friday, Apr 25, 2014 6:20 AM
> To: "Oo"
> Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"
> Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
>
> On the command line.
>
> On 25 April 2014 00:11, Oo wrote:
>
> Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value" ?
>
> Thanks,
> Meng
>
> ------------------ Original ------------------
> From: "Barry Smith"
> Date: Apr 25, 2014
> To: "Matthew Knepley"
> Cc: "Oo"; "petsc-users"
> Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
>
> There are also a great many "bogus" numbers that have no meaning, and many zeros. Most of these are not the eigenvalues of anything.
>
> Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output.
>
> Barry
>
> On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote:
>
> > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote:
> >
> > Hi,
> >
> > For analyzing the convergence of the linear solver, I have run into a problem.
> >
> > One is the list of eigenvalues for a linear system whose solution converges.
> > The other is the list of eigenvalues for a linear system whose solution does not converge (convergenceReason = -3).
> >
> > These are just lists of numbers; they do not tell us anything about the computation. What is the problem you are having?
> >
> > Matt
> >
> > Do you know what kind of method can be used to obtain a converged solution for our non-converging case?
> >
> > Thanks,
> > Meng
> >
> > --
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > -- Norbert Wiener

From bsmith at mcs.anl.gov Thu Apr 24 17:54:26 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 24 Apr 2014 17:54:26 -0500
Subject: [petsc-users] Convergence_Eigenvalues_k=3
In-Reply-To:
References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov>
Message-ID: <4EC22F71-3552-402B-8886-9180229E1BC2@mcs.anl.gov>

   Run the bad case again with -ksp_gmres_restart 200 -ksp_max_it 200 and send the output.
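   For example (the executable and launcher names here are hypothetical), all of these options go on the command line that starts the program, e.g. mpiexec -n 4 ./yourprogram -ksp_gmres_restart 200 -ksp_max_it 200 -ksp_monitor_true_residual -ksp_monitor_singular_value. They are only picked up if the code calls KSPSetFromOptions() on the solver before KSPSolve(). A minimal C sketch, assuming a KSP object ksp whose operator is already attached and existing vectors b and x:

      PetscErrorCode     ierr;
      KSPConvergedReason reason;

      /* reads -ksp_gmres_restart, -ksp_max_it, the monitor options, ... from the command line */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

      /* convergenceReason = -3 corresponds to KSP_DIVERGED_ITS: the iteration limit
         (-ksp_max_it, default 10000) was reached before the tolerances were met */
      ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
      if (reason == KSP_DIVERGED_ITS) {
        /* raise -ksp_max_it, change the preconditioner, or loosen the tolerances */
      }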
On Apr 24, 2014, at 5:46 PM, Oo wrote:

> Thanks,
> The following is the output at the beginning.
>
> 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03
> 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00
> 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06
> 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00
> 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02
> 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00
> 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03
> 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00
> 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05
> 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00
>
> When solving the linear system:
> Output:
> .........
> ........
> ........
> +00 max/min 7.343521316293e+01 > 9636 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064989678e-01 ||r(i)||/||b|| 8.917260288210e-03 > 9636 KSP Residual norm 1.080641720588e+00 % max 9.802496207537e+01 min 1.168945135768e+00 max/min 8.385762434515e+01 > 9637 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064987918e-01 ||r(i)||/||b|| 8.917260278361e-03 > 9637 KSP Residual norm 1.080641720588e+00 % max 1.122401280488e+02 min 1.141681830513e+00 max/min 9.831121512938e+01 > 9638 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064986859e-01 ||r(i)||/||b|| 8.917260272440e-03 > 9638 KSP Residual norm 1.080641720588e+00 % max 1.134941067042e+02 min 1.090790142559e+00 max/min 1.040476094127e+02 > 9639 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064442988e-01 ||r(i)||/||b|| 8.917257230005e-03 > 9639 KSP Residual norm 1.080641720588e+00 % max 1.139914662925e+02 min 4.119649156568e-01 max/min 2.767018791170e+02 > 9640 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063809183e-01 ||r(i)||/||b|| 8.917253684473e-03 > 9640 KSP Residual norm 1.080641720586e+00 % max 1.140011421526e+02 min 2.894486589274e-01 max/min 3.938561766878e+02 > 9641 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063839202e-01 ||r(i)||/||b|| 8.917253852403e-03 > 9641 KSP Residual norm 1.080641720586e+00 % max 1.140392299942e+02 min 2.880532973299e-01 max/min 3.958962839563e+02 > 9642 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594064091676e-01 ||r(i)||/||b|| 8.917255264750e-03 > 9642 KSP Residual norm 1.080641720586e+00 % max 1.140392728591e+02 min 2.501717295613e-01 max/min 4.558439639005e+02 > 9643 KSP preconditioned resid norm 1.080641720583e+00 true resid norm 1.594064099334e-01 ||r(i)||/||b|| 8.917255307591e-03 > 9643 KSP Residual norm 1.080641720583e+00 % max 1.141360429432e+02 min 2.500714638111e-01 max/min 4.564137035220e+02 > 9644 KSP preconditioned resid norm 1.080641720582e+00 true resid norm 1.594064169337e-01 ||r(i)||/||b|| 8.917255699186e-03 > 9644 KSP Residual norm 1.080641720582e+00 % max 1.141719168213e+02 min 2.470526471293e-01 max/min 4.621359784969e+02 > 9645 KSP preconditioned resid norm 1.080641720554e+00 true resid norm 1.594064833602e-01 ||r(i)||/||b|| 8.917259415111e-03 > 9645 KSP Residual norm 1.080641720554e+00 % max 1.141770017757e+02 min 2.461729098264e-01 max/min 4.638081495493e+02 > 9646 KSP preconditioned resid norm 1.080641720550e+00 true resid norm 1.594066163854e-01 ||r(i)||/||b|| 8.917266856592e-03 > 9646 KSP Residual norm 1.080641720550e+00 % max 1.150251695783e+02 min 1.817293289064e-01 max/min 6.329477485583e+02 > 9647 KSP preconditioned resid norm 1.080641720425e+00 true resid norm 1.594070759575e-01 ||r(i)||/||b|| 8.917292565231e-03 > 9647 KSP Residual norm 1.080641720425e+00 % max 1.153670774825e+02 min 1.757825842976e-01 max/min 6.563055034347e+02 > 9648 KSP preconditioned resid norm 1.080641720405e+00 true resid norm 1.594072309986e-01 ||r(i)||/||b|| 8.917301238287e-03 > 9648 KSP Residual norm 1.080641720405e+00 % max 1.154419449950e+02 min 1.682003217110e-01 max/min 6.863360534671e+02 > 9649 KSP preconditioned resid norm 1.080641719971e+00 true resid norm 1.594088666650e-01 ||r(i)||/||b|| 8.917392738093e-03 > 9649 KSP Residual norm 1.080641719971e+00 % max 1.154420890958e+02 min 1.254364806923e-01 max/min 9.203230867027e+02 > 9650 KSP preconditioned resid norm 1.080641719766e+00 true resid norm 1.594089470619e-01 ||r(i)||/||b|| 
8.917397235527e-03 > 9650 KSP Residual norm 1.080641719766e+00 % max 1.155791388935e+02 min 1.115280748954e-01 max/min 1.036323266603e+03 > 9651 KSP preconditioned resid norm 1.080641719668e+00 true resid norm 1.594099325489e-01 ||r(i)||/||b|| 8.917452364041e-03 > 9651 KSP Residual norm 1.080641719668e+00 % max 1.156952656131e+02 min 9.753165869338e-02 max/min 1.186232933624e+03 > 9652 KSP preconditioned resid norm 1.080641719560e+00 true resid norm 1.594104650490e-01 ||r(i)||/||b|| 8.917482152303e-03 > 9652 KSP Residual norm 1.080641719560e+00 % max 1.157175173166e+02 min 8.164906465197e-02 max/min 1.417254659437e+03 > 9653 KSP preconditioned resid norm 1.080641719545e+00 true resid norm 1.594102433389e-01 ||r(i)||/||b|| 8.917469749751e-03 > 9653 KSP Residual norm 1.080641719545e+00 % max 1.157284977956e+02 min 8.043379142473e-02 max/min 1.438804459490e+03 > 9654 KSP preconditioned resid norm 1.080641719502e+00 true resid norm 1.594103748106e-01 ||r(i)||/||b|| 8.917477104328e-03 > 9654 KSP Residual norm 1.080641719502e+00 % max 1.158252103352e+02 min 8.042977537341e-02 max/min 1.440078749412e+03 > 9655 KSP preconditioned resid norm 1.080641719500e+00 true resid norm 1.594103839160e-01 ||r(i)||/||b|| 8.917477613692e-03 > 9655 KSP Residual norm 1.080641719500e+00 % max 1.158319413225e+02 min 7.912584859399e-02 max/min 1.463895090931e+03 > 9656 KSP preconditioned resid norm 1.080641719298e+00 true resid norm 1.594103559180e-01 ||r(i)||/||b|| 8.917476047469e-03 > 9656 KSP Residual norm 1.080641719298e+00 % max 1.164752277567e+02 min 7.459488142962e-02 max/min 1.561437266532e+03 > 9657 KSP preconditioned resid norm 1.080641719265e+00 true resid norm 1.594102184171e-01 ||r(i)||/||b|| 8.917468355617e-03 > 9657 KSP Residual norm 1.080641719265e+00 % max 1.166579038733e+02 min 7.458594814570e-02 max/min 1.564073485335e+03 > 9658 KSP preconditioned resid norm 1.080641717458e+00 true resid norm 1.594091333285e-01 ||r(i)||/||b|| 8.917407655346e-03 > 9658 KSP Residual norm 1.080641717458e+00 % max 1.166903646829e+02 min 6.207842503132e-02 max/min 1.879724954749e+03 > 9659 KSP preconditioned resid norm 1.080641710951e+00 true resid norm 1.594084758922e-01 ||r(i)||/||b|| 8.917370878110e-03 > 9659 KSP Residual norm 1.080641710951e+00 % max 1.166911765130e+02 min 4.511759655558e-02 max/min 2.586378384966e+03 > 9660 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892570e-01 ||r(i)||/||b|| 8.917310091324e-03 > 9660 KSP Residual norm 1.080641710473e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9661 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892624e-01 ||r(i)||/||b|| 8.917310091626e-03 > 9661 KSP Residual norm 1.080641710473e+00 % max 3.063497860835e+01 min 3.063497860835e+01 max/min 1.000000000000e+00 > 9662 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892715e-01 ||r(i)||/||b|| 8.917310092135e-03 > 9662 KSP Residual norm 1.080641710473e+00 % max 3.066567116490e+01 min 3.845920135843e+00 max/min 7.973559013643e+00 > 9663 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893803e-01 ||r(i)||/||b|| 8.917310098220e-03 > 9663 KSP Residual norm 1.080641710473e+00 % max 3.713314039929e+01 min 1.336313376350e+00 max/min 2.778774878443e+01 > 9664 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893840e-01 ||r(i)||/||b|| 8.917310098430e-03 > 9664 KSP Residual norm 1.080641710473e+00 % max 4.496286107838e+01 min 1.226793755688e+00 max/min 3.665070911057e+01 
> 9665 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893901e-01 ||r(i)||/||b|| 8.917310098770e-03 > 9665 KSP Residual norm 1.080641710473e+00 % max 8.684753794468e+01 min 1.183106109633e+00 max/min 7.340638108242e+01 > 9666 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073897030e-01 ||r(i)||/||b|| 8.917310116272e-03 > 9666 KSP Residual norm 1.080641710473e+00 % max 9.802657279239e+01 min 1.168685545918e+00 max/min 8.387762913199e+01 > 9667 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073898787e-01 ||r(i)||/||b|| 8.917310126104e-03 > 9667 KSP Residual norm 1.080641710473e+00 % max 1.123342619847e+02 min 1.141262650540e+00 max/min 9.842980661068e+01 > 9668 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073899832e-01 ||r(i)||/||b|| 8.917310131950e-03 > 9668 KSP Residual norm 1.080641710473e+00 % max 1.134978630027e+02 min 1.090790451862e+00 max/min 1.040510235573e+02 > 9669 KSP preconditioned resid norm 1.080641710472e+00 true resid norm 1.594074437274e-01 ||r(i)||/||b|| 8.917313138421e-03 > 9669 KSP Residual norm 1.080641710472e+00 % max 1.139911467053e+02 min 4.122424185123e-01 max/min 2.765148407498e+02 > 9670 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075064381e-01 ||r(i)||/||b|| 8.917316646478e-03 > 9670 KSP Residual norm 1.080641710471e+00 % max 1.140007007543e+02 min 2.895766957825e-01 max/min 3.936805081855e+02 > 9671 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075034738e-01 ||r(i)||/||b|| 8.917316480653e-03 > 9671 KSP Residual norm 1.080641710471e+00 % max 1.140387089437e+02 min 2.881955202443e-01 max/min 3.956991033277e+02 > 9672 KSP preconditioned resid norm 1.080641710470e+00 true resid norm 1.594074784660e-01 ||r(i)||/||b|| 8.917315081711e-03 > 9672 KSP Residual norm 1.080641710470e+00 % max 1.140387584514e+02 min 2.501644562253e-01 max/min 4.558551609294e+02 > 9673 KSP preconditioned resid norm 1.080641710468e+00 true resid norm 1.594074777329e-01 ||r(i)||/||b|| 8.917315040700e-03 > 9673 KSP Residual norm 1.080641710468e+00 % max 1.141359894823e+02 min 2.500652210717e-01 max/min 4.564248838488e+02 > 9674 KSP preconditioned resid norm 1.080641710467e+00 true resid norm 1.594074707419e-01 ||r(i)||/||b|| 8.917314649619e-03 > 9674 KSP Residual norm 1.080641710467e+00 % max 1.141719424825e+02 min 2.470352404045e-01 max/min 4.621686456374e+02 > 9675 KSP preconditioned resid norm 1.080641710439e+00 true resid norm 1.594074050132e-01 ||r(i)||/||b|| 8.917310972734e-03 > 9675 KSP Residual norm 1.080641710439e+00 % max 1.141769957478e+02 min 2.461583334135e-01 max/min 4.638355897383e+02 > 9676 KSP preconditioned resid norm 1.080641710435e+00 true resid norm 1.594072732317e-01 ||r(i)||/||b|| 8.917303600825e-03 > 9676 KSP Residual norm 1.080641710435e+00 % max 1.150247840524e+02 min 1.817432478135e-01 max/min 6.328971526384e+02 > 9677 KSP preconditioned resid norm 1.080641710313e+00 true resid norm 1.594068192028e-01 ||r(i)||/||b|| 8.917278202275e-03 > 9677 KSP Residual norm 1.080641710313e+00 % max 1.153658698688e+02 min 1.758229849082e-01 max/min 6.561478291873e+02 > 9678 KSP preconditioned resid norm 1.080641710294e+00 true resid norm 1.594066656020e-01 ||r(i)||/||b|| 8.917269609788e-03 > 9678 KSP Residual norm 1.080641710294e+00 % max 1.154409214869e+02 min 1.682261493961e-01 max/min 6.862245964808e+02 > 9679 KSP preconditioned resid norm 1.080641709867e+00 true resid norm 1.594050456081e-01 ||r(i)||/||b|| 8.917178986714e-03 > 9679 KSP 
Residual norm 1.080641709867e+00 % max 1.154410567101e+02 min 1.254614849745e-01 max/min 9.201314390116e+02 > 9680 KSP preconditioned resid norm 1.080641709664e+00 true resid norm 1.594049650069e-01 ||r(i)||/||b|| 8.917174477851e-03 > 9680 KSP Residual norm 1.080641709664e+00 % max 1.155780217907e+02 min 1.115227545913e-01 max/min 1.036362688622e+03 > 9681 KSP preconditioned resid norm 1.080641709568e+00 true resid norm 1.594039822189e-01 ||r(i)||/||b|| 8.917119500317e-03 > 9681 KSP Residual norm 1.080641709568e+00 % max 1.156949294099e+02 min 9.748012982669e-02 max/min 1.186856538000e+03 > 9682 KSP preconditioned resid norm 1.080641709462e+00 true resid norm 1.594034585197e-01 ||r(i)||/||b|| 8.917090204385e-03 > 9682 KSP Residual norm 1.080641709462e+00 % max 1.157178049396e+02 min 8.161660044005e-02 max/min 1.417821917547e+03 > 9683 KSP preconditioned resid norm 1.080641709447e+00 true resid norm 1.594036816693e-01 ||r(i)||/||b|| 8.917102687458e-03 > 9683 KSP Residual norm 1.080641709447e+00 % max 1.157298376396e+02 min 8.041659530019e-02 max/min 1.439128791856e+03 > 9684 KSP preconditioned resid norm 1.080641709407e+00 true resid norm 1.594035571975e-01 ||r(i)||/||b|| 8.917095724454e-03 > 9684 KSP Residual norm 1.080641709407e+00 % max 1.158244705264e+02 min 8.041209608100e-02 max/min 1.440386162919e+03 > 9685 KSP preconditioned resid norm 1.080641709405e+00 true resid norm 1.594035489927e-01 ||r(i)||/||b|| 8.917095265477e-03 > 9685 KSP Residual norm 1.080641709405e+00 % max 1.158301451854e+02 min 7.912026880308e-02 max/min 1.463975627707e+03 > 9686 KSP preconditioned resid norm 1.080641709207e+00 true resid norm 1.594035774591e-01 ||r(i)||/||b|| 8.917096857899e-03 > 9686 KSP Residual norm 1.080641709207e+00 % max 1.164678839370e+02 min 7.460755571674e-02 max/min 1.561073577845e+03 > 9687 KSP preconditioned resid norm 1.080641709174e+00 true resid norm 1.594037147171e-01 ||r(i)||/||b|| 8.917104536162e-03 > 9687 KSP Residual norm 1.080641709174e+00 % max 1.166488052404e+02 min 7.459794478802e-02 max/min 1.563699986264e+03 > 9688 KSP preconditioned resid norm 1.080641707398e+00 true resid norm 1.594047860513e-01 ||r(i)||/||b|| 8.917164467006e-03 > 9688 KSP Residual norm 1.080641707398e+00 % max 1.166807852257e+02 min 6.210503072987e-02 max/min 1.878765437428e+03 > 9689 KSP preconditioned resid norm 1.080641701010e+00 true resid norm 1.594054414016e-01 ||r(i)||/||b|| 8.917201127549e-03 > 9689 KSP Residual norm 1.080641701010e+00 % max 1.166815140099e+02 min 4.513527468187e-02 max/min 2.585151299782e+03 > 9690 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078534e-01 ||r(i)||/||b|| 8.917260785273e-03 > 9690 KSP Residual norm 1.080641700548e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9691 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078481e-01 ||r(i)||/||b|| 8.917260784977e-03 > 9691 KSP Residual norm 1.080641700548e+00 % max 3.063462923862e+01 min 3.063462923862e+01 max/min 1.000000000000e+00 > 9692 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078391e-01 ||r(i)||/||b|| 8.917260784472e-03 > 9692 KSP Residual norm 1.080641700548e+00 % max 3.066527639129e+01 min 3.844401168431e+00 max/min 7.976606771193e+00 > 9693 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077314e-01 ||r(i)||/||b|| 8.917260778448e-03 > 9693 KSP Residual norm 1.080641700548e+00 % max 3.714560719346e+01 min 1.336559823443e+00 max/min 2.779195255007e+01 > 9694 KSP preconditioned 
resid norm 1.080641700548e+00 true resid norm 1.594065077277e-01 ||r(i)||/||b|| 8.917260778237e-03 > 9694 KSP Residual norm 1.080641700548e+00 % max 4.500980776068e+01 min 1.226798344943e+00 max/min 3.668883965014e+01 > 9695 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077215e-01 ||r(i)||/||b|| 8.917260777894e-03 > 9695 KSP Residual norm 1.080641700548e+00 % max 8.688252795875e+01 min 1.183121330492e+00 max/min 7.343501103356e+01 > 9696 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065074170e-01 ||r(i)||/||b|| 8.917260760857e-03 > 9696 KSP Residual norm 1.080641700548e+00 % max 9.802496799855e+01 min 1.168942571066e+00 max/min 8.385781339895e+01 > 9697 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065072444e-01 ||r(i)||/||b|| 8.917260751202e-03 > 9697 KSP Residual norm 1.080641700548e+00 % max 1.122410641497e+02 min 1.141677885656e+00 max/min 9.831237475989e+01 > 9698 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065071406e-01 ||r(i)||/||b|| 8.917260745398e-03 > 9698 KSP Residual norm 1.080641700548e+00 % max 1.134941426413e+02 min 1.090790153651e+00 max/min 1.040476413006e+02 > 9699 KSP preconditioned resid norm 1.080641700547e+00 true resid norm 1.594064538413e-01 ||r(i)||/||b|| 8.917257763815e-03 > 9699 KSP Residual norm 1.080641700547e+00 % max 1.139914632588e+02 min 4.119681054781e-01 max/min 2.766997292824e+02 > 9700 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063917269e-01 ||r(i)||/||b|| 8.917254289111e-03 > 9700 KSP Residual norm 1.080641700546e+00 % max 1.140011377578e+02 min 2.894500209686e-01 max/min 3.938543081679e+02 > 9701 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063946692e-01 ||r(i)||/||b|| 8.917254453705e-03 > 9701 KSP Residual norm 1.080641700546e+00 % max 1.140392249545e+02 min 2.880548133071e-01 max/min 3.958941829341e+02 > 9702 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594064194159e-01 ||r(i)||/||b|| 8.917255838042e-03 > 9702 KSP Residual norm 1.080641700546e+00 % max 1.140392678944e+02 min 2.501716607446e-01 max/min 4.558440694479e+02 > 9703 KSP preconditioned resid norm 1.080641700543e+00 true resid norm 1.594064201661e-01 ||r(i)||/||b|| 8.917255880013e-03 > 9703 KSP Residual norm 1.080641700543e+00 % max 1.141360426034e+02 min 2.500714053220e-01 max/min 4.564138089137e+02 > 9704 KSP preconditioned resid norm 1.080641700542e+00 true resid norm 1.594064270279e-01 ||r(i)||/||b|| 8.917256263859e-03 > 9704 KSP Residual norm 1.080641700542e+00 % max 1.141719170495e+02 min 2.470524869868e-01 max/min 4.621362789824e+02 > 9705 KSP preconditioned resid norm 1.080641700515e+00 true resid norm 1.594064921269e-01 ||r(i)||/||b|| 8.917259905523e-03 > 9705 KSP Residual norm 1.080641700515e+00 % max 1.141770017414e+02 min 2.461727640845e-01 max/min 4.638084239987e+02 > 9706 KSP preconditioned resid norm 1.080641700511e+00 true resid norm 1.594066225181e-01 ||r(i)||/||b|| 8.917267199657e-03 > 9706 KSP Residual norm 1.080641700511e+00 % max 1.150251667711e+02 min 1.817294170466e-01 max/min 6.329474261263e+02 > 9707 KSP preconditioned resid norm 1.080641700391e+00 true resid norm 1.594070728983e-01 ||r(i)||/||b|| 8.917292394097e-03 > 9707 KSP Residual norm 1.080641700391e+00 % max 1.153670687494e+02 min 1.757829178698e-01 max/min 6.563042083238e+02 > 9708 KSP preconditioned resid norm 1.080641700372e+00 true resid norm 1.594072248425e-01 ||r(i)||/||b|| 8.917300893916e-03 > 9708 KSP Residual norm 
1.080641700372e+00 % max 1.154419380718e+02 min 1.682005223292e-01 max/min 6.863351936912e+02 > 9709 KSP preconditioned resid norm 1.080641699955e+00 true resid norm 1.594088278507e-01 ||r(i)||/||b|| 8.917390566803e-03 > 9709 KSP Residual norm 1.080641699955e+00 % max 1.154420821193e+02 min 1.254367014624e-01 max/min 9.203214113045e+02 > 9710 KSP preconditioned resid norm 1.080641699758e+00 true resid norm 1.594089066545e-01 ||r(i)||/||b|| 8.917394975116e-03 > 9710 KSP Residual norm 1.080641699758e+00 % max 1.155791306030e+02 min 1.115278858784e-01 max/min 1.036324948623e+03 > 9711 KSP preconditioned resid norm 1.080641699664e+00 true resid norm 1.594098725388e-01 ||r(i)||/||b|| 8.917449007053e-03 > 9711 KSP Residual norm 1.080641699664e+00 % max 1.156952614563e+02 min 9.753133694719e-02 max/min 1.186236804269e+03 > 9712 KSP preconditioned resid norm 1.080641699560e+00 true resid norm 1.594103943811e-01 ||r(i)||/||b|| 8.917478199111e-03 > 9712 KSP Residual norm 1.080641699560e+00 % max 1.157175157006e+02 min 8.164872572382e-02 max/min 1.417260522743e+03 > 9713 KSP preconditioned resid norm 1.080641699546e+00 true resid norm 1.594101771105e-01 ||r(i)||/||b|| 8.917466044914e-03 > 9713 KSP Residual norm 1.080641699546e+00 % max 1.157285025175e+02 min 8.043358651149e-02 max/min 1.438808183705e+03 > 9714 KSP preconditioned resid norm 1.080641699505e+00 true resid norm 1.594103059104e-01 ||r(i)||/||b|| 8.917473250025e-03 > 9714 KSP Residual norm 1.080641699505e+00 % max 1.158251990486e+02 min 8.042956633721e-02 max/min 1.440082351843e+03 > 9715 KSP preconditioned resid norm 1.080641699503e+00 true resid norm 1.594103148215e-01 ||r(i)||/||b|| 8.917473748516e-03 > 9715 KSP Residual norm 1.080641699503e+00 % max 1.158319218998e+02 min 7.912575668150e-02 max/min 1.463896545925e+03 > 9716 KSP preconditioned resid norm 1.080641699309e+00 true resid norm 1.594102873744e-01 ||r(i)||/||b|| 8.917472213115e-03 > 9716 KSP Residual norm 1.080641699309e+00 % max 1.164751648212e+02 min 7.459498920670e-02 max/min 1.561434166824e+03 > 9717 KSP preconditioned resid norm 1.080641699277e+00 true resid norm 1.594101525703e-01 ||r(i)||/||b|| 8.917464672122e-03 > 9717 KSP Residual norm 1.080641699277e+00 % max 1.166578264103e+02 min 7.458604738274e-02 max/min 1.564070365758e+03 > 9718 KSP preconditioned resid norm 1.080641697541e+00 true resid norm 1.594090891807e-01 ||r(i)||/||b|| 8.917405185702e-03 > 9718 KSP Residual norm 1.080641697541e+00 % max 1.166902828397e+02 min 6.207863457218e-02 max/min 1.879717291527e+03 > 9719 KSP preconditioned resid norm 1.080641691292e+00 true resid norm 1.594084448306e-01 ||r(i)||/||b|| 8.917369140517e-03 > 9719 KSP Residual norm 1.080641691292e+00 % max 1.166910940162e+02 min 4.511779816158e-02 max/min 2.586364999424e+03 > 9720 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798840e-01 ||r(i)||/||b|| 8.917309566993e-03 > 9720 KSP Residual norm 1.080641690833e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9721 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798892e-01 ||r(i)||/||b|| 8.917309567288e-03 > 9721 KSP Residual norm 1.080641690833e+00 % max 3.063497720423e+01 min 3.063497720423e+01 max/min 1.000000000000e+00 > 9722 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798982e-01 ||r(i)||/||b|| 8.917309567787e-03 > 9722 KSP Residual norm 1.080641690833e+00 % max 3.066566915002e+01 min 3.845904218039e+00 max/min 7.973591491484e+00 > 9723 KSP preconditioned resid norm 
1.080641690833e+00 true resid norm 1.594073800048e-01 ||r(i)||/||b|| 8.917309573750e-03
> 9723 KSP Residual norm 1.080641690833e+00 % max 3.713330064708e+01 min 1.336316867286e+00 max/min 2.778779611043e+01
> 9724 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800084e-01 ||r(i)||/||b|| 8.917309573955e-03
> 9724 KSP Residual norm 1.080641690833e+00 % max 4.496345800250e+01 min 1.226793989785e+00 max/min 3.665118868930e+01
> [... monitor output for iterations 9725-9998 omitted: the preconditioned residual norm stagnates at 1.0806e+00 and the true residual norm at about 1.594e-01 (||r(i)||/||b|| ~ 8.92e-03) throughout, while the singular-value estimates reset to max/min = 1 every 30 iterations and grow back to max/min ~ 2.6e+03 within each cycle ...]
> 9999 KSP preconditioned resid norm 1.080641611644e+00 true resid norm 1.594064989598e-01 ||r(i)||/||b|| 8.917260287762e-03
Residual norm 1.080641611644e+00 % max 1.139914489020e+02 min 4.119830336343e-01 max/min 2.766896682528e+02 > 10000 KSP preconditioned resid norm 1.080641611643e+00 true resid norm 1.594064428168e-01 ||r(i)||/||b|| 8.917257147097e-03 > 10000 KSP Residual norm 1.080641611643e+00 % max 1.140011170215e+02 min 2.894564222709e-01 max/min 3.938455264772e+02 > > > Then the number of iteration >10000 > The Programme stop with convergenceReason=-3 > > Best, > > Meng > ------------------ Original ------------------ > From: "Barry Smith";; > Send time: Friday, Apr 25, 2014 6:27 AM > To: "Oo "; > Cc: "Dave May"; "petsc-users"; > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > On Apr 24, 2014, at 5:24 PM, Oo wrote: > > > > > Configure PETSC again? > > No, the command line when you run the program. > > Barry > > > > > ------------------ Original ------------------ > > From: "Dave May";; > > Send time: Friday, Apr 25, 2014 6:20 AM > > To: "Oo "; > > Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"; > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > On the command line > > > > > > On 25 April 2014 00:11, Oo wrote: > > > > Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value " ? > > > > Thanks, > > > > Meng > > > > > > ------------------ Original ------------------ > > From: "Barry Smith";; > > Date: Apr 25, 2014 > > To: "Matthew Knepley"; > > Cc: "Oo "; "petsc-users"; > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > There are also a great deal of ?bogus? numbers that have no meaning and many zeros. Most of these are not the eigenvalues of anything. > > > > Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output > > > > Barry > > > > > > On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > > > > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > > > > > Hi, > > > > > > For analysis the convergence of linear solver, > > > I meet a problem. > > > > > > One is the list of Eigenvalues whose linear system which has a convergence solution. > > > The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3). > > > > > > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having? > > > > > > Matt > > > > > > Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case? > > > > > > Thanks, > > > > > > Meng > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > > -- Norbert Wiener > > > > . > > > > . From steve.ndengue at gmail.com Thu Apr 24 19:18:08 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Thu, 24 Apr 2014 19:18:08 -0500 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) Message-ID: <5359A9C0.2030807@gmail.com> Dear all, I am having trouble compiling a code with some dependencies that shall call SLEPC. The compilation however goes perfectly for the various tutorials and tests in the package. 
A sample makefile looks like:

***
default: code
routines: Rout1.o Rout2.o Rout3.o Module1.o
code: PgmPrincipal
sources=Rout1.f Rout2.f Rout3.f Module1.f90
objets=Rout1.o Rout2.o Rout3.o Module1.o PgmPrincipal.o

%.o: %.f
	-${FLINKER} -c $<

Module1_mod.mod Module1.o: Module1.f90
	-${FLINKER} -c Module1.f90

include ${SLEPC_DIR}/conf/slepc_common

PgmPrincipal: ${objets} chkopts
	-${FLINKER} -o $@ ${objets} ${SLEPC_LIB}

.PHONY: clean
	${RM} *.o *.mod PgmPrincipal
***

The code exits with Warning and error messages of the type:

Warning: PgmPrincipal.f90:880: Illegal preprocessor directive
Warning: PgmPrincipal.f90:889: Illegal preprocessor directive
Warning: PgmPrincipal.f90:892: Illegal preprocessor directive
Warning: PgmPrincipal.f90:895: Illegal preprocessor directive

and

USE slepceps
    1
Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No
such file or directory

I do not get these errors with the tests and errors files.

Sincerely,

From balay at mcs.anl.gov Thu Apr 24 19:25:15 2014
From: balay at mcs.anl.gov (Satish Balay)
Date: Thu, 24 Apr 2014 19:25:15 -0500
Subject: [petsc-users] Compiling program with program files dependencies (SLEPC )
In-Reply-To: <5359A9C0.2030807@gmail.com>
References: <5359A9C0.2030807@gmail.com>
Message-ID:

On Thu, 24 Apr 2014, Steve Ndengue wrote:

> Dear all,
>
> I am having trouble compiling a code with some dependencies that shall call
> SLEPC.
> The compilation however goes perfectly for the various tutorials and tests in
> the package.
>
> A sample makefile looks like:
>
> ***
> default: code
> routines: Rout1.o Rout2.o Rout3.o Module1.o
> code: PgmPrincipal
> sources=Rout1.f Rout2.f Rout3.f Module1.f90

Its best to rename your sourcefiles .F/.F90 [and not .f/.f90
[this enables using default targets from petsc makefiles - and its
the correct notation for preprocessing - which is required by petsc/slepc]

> objets=Rout1.o Rout2.o Rout3.o Module1.o PgmPrincipal.o
>
> %.o: %.f
> 	-${FLINKER} -c $<

And you shouldn't need to create the above .o.f target.

Satish

> Module1_mod.mod Module1.o: Module1.f90
> 	-${FLINKER} -c Module1.f90
>
> include ${SLEPC_DIR}/conf/slepc_common
>
> PgmPrincipal: ${objets} chkopts
> 	-${FLINKER} -o $@ ${objets} ${SLEPC_LIB}
>
> .PHONY: clean
> 	${RM} *.o *.mod PgmPrincipal
> ***
>
> The code exits with Warning and error messages of the type:
>
> Warning: PgmPrincipal.f90:880: Illegal preprocessor directive
> Warning: PgmPrincipal.f90:889: Illegal preprocessor directive
> Warning: PgmPrincipal.f90:892: Illegal preprocessor directive
> Warning: PgmPrincipal.f90:895: Illegal preprocessor directive
>
> and
>
> USE slepceps
>     1
> Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No
> such file or directory
>
> I do not get these errors with the tests and errors files.
>
> Sincerely,

From steve.ndengue at gmail.com Thu Apr 24 19:36:16 2014
From: steve.ndengue at gmail.com (Steve Ndengue)
Date: Thu, 24 Apr 2014 19:36:16 -0500
Subject: [petsc-users] Compiling program with program files dependencies (SLEPC )
In-Reply-To:
References: <5359A9C0.2030807@gmail.com>
Message-ID: <5359AE00.9030004@gmail.com>

Thank you for the quick reply.

It partly solved the problem; the PETSC and SLEPC files are now included in the compilation.

However the Warning are now error messages!

***
pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive #INCLUDE
pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive #IF
pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive #ELSE
pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive #ENDIF
***

And the corresponding code lines are:

***
#INCLUDE 
      USE slepceps
!
      IMPLICIT NONE
!----------------------
!
#IF DEFINED(PETSC_USE_FORTRAN_DATATYPES)
      TYPE(Mat) A
      TYPE(EPS) solver
#ELSE
      Mat A
      EPS solver
#ENDIF
***

Sincerely.

On 04/24/2014 07:25 PM, Satish Balay wrote:
> On Thu, 24 Apr 2014, Steve Ndengue wrote:
>> Dear all,
>>
>> I am having trouble compiling a code with some dependencies that shall call
>> SLEPC.
>> The compilation however goes perfectly for the various tutorials and tests in >> the package. >> >> A sample makefile looks like: >> >> *** >> >> /default: code// >> //routines: Rout1.o Rout2.o Rout3.o Module1.o// >> //code: PgmPrincipal// >> //sources=Rout1.f Rout2.f Rout3.f Module1.f90// > Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 > [this enables using default targets from petsc makefiles - and its > the correct notation for preprocessing - which is required by petsc/slepc] > >> //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// >> // >> //%.o: %.f// >> // -${FLINKER} -c $ And you shouldn't need to create the above .o.f target. > > Satish > > >> // >> //Module1_mod.mod Module1.o: //Module1//.f90// >> // -${FLINKER} -c //Module1//.f90// >> // >> //include ${SLEPC_DIR}/conf/slepc_common// >> // >> //Pgm//Principal//: ${objets} chkopts// >> // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// >> // >> //.PHONY: clean// >> // ${RM} *.o *.mod Pgm//Principal// >> / >> *** >> >> The code exits with Warning and error messages of the type: >> >> *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** >> **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** >> **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** >> **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* >> >> and >> >> *USE slepceps** >> ** 1** >> **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No >> such file or directory >> >> >> *I do not get these errors with the tests and errors files. >> >> >> Sincerely, >> >> -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 24 19:39:54 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Apr 2014 20:39:54 -0400 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) In-Reply-To: <5359AE00.9030004@gmail.com> References: <5359A9C0.2030807@gmail.com> <5359AE00.9030004@gmail.com> Message-ID: On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue wrote: > Thank you for the quick reply. > > It partly solved the problem; the PETSC and SLEPC files are now included > in the compilation. > > However the Warning are now error messages! > *** > *pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive > #INCLUDE* > *pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive > #IF* > *pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive > #ELSE* > *pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive > #ENDIF* > *** > These should not be capitalized. Matt > And the corresponding code lines are: > *** > *#INCLUDE * > * USE slepceps* > > *!* > * IMPLICIT NONE* > *!--------------------* > *-- **!* > *#IF DEFINED(PETSC_USE_FORTRAN_DATATYPES)* > * TYPE(Mat) A* > * TYPE(EPS) solver* > *#ELSE* > * Mat A* > * EPS solver* > *#ENDIF* > *** > > Sincerely. > > On 04/24/2014 07:25 PM, Satish Balay wrote: > > On Thu, 24 Apr 2014, Steve Ndengue wrote: > > > Dear all, > > I am having trouble compiling a code with some dependencies that shall call > SLEPC. > The compilation however goes perfectly for the various tutorials and tests in > the package. 
> > A sample makefile looks like: > > *** > > /default: code// > //routines: Rout1.o Rout2.o Rout3.o Module1.o// > //code: PgmPrincipal// > //sources=Rout1.f Rout2.f Rout3.f Module1.f90// > > Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 > [this enables using default targets from petsc makefiles - and its > the correct notation for preprocessing - which is required by petsc/slepc] > > > //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// > // > //%.o: %.f// > // -${FLINKER} -c $ > And you shouldn't need to create the above .o.f target. > > Satish > > > > // > //Module1_mod.mod Module1.o: //Module1//.f90// > // -${FLINKER} -c //Module1//.f90// > // > //include ${SLEPC_DIR}/conf/slepc_common// > // > //Pgm//Principal//: ${objets} chkopts// > // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// > // > //.PHONY: clean// > // ${RM} *.o *.mod Pgm//Principal// > / > *** > > The code exits with Warning and error messages of the type: > > *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** > **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** > **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** > **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* > > and > > *USE slepceps** > ** 1** > **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No > such file or directory > > > *I do not get these errors with the tests and errors files. > > > Sincerely, > > > > > > -- > Steve > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.ndengue at gmail.com Thu Apr 24 19:57:40 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Thu, 24 Apr 2014 19:57:40 -0500 Subject: [petsc-users] Fwd: Re: Compiling program with program files dependencies (SLEPC ) In-Reply-To: <5359B1D6.2040401@gmail.com> References: <5359B1D6.2040401@gmail.com> Message-ID: <5359B304.8030100@gmail.com> The results is the same with lower case letters... On 04/24/2014 07:39 PM, Matthew Knepley wrote: > On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue > > wrote: > > Thank you for the quick reply. > > It partly solved the problem; the PETSC and SLEPC files are now > included in the compilation. > > However the Warning are now error messages! > *** > /pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing > directive #INCLUDE// > //pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing > directive #IF// > //pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing > directive #ELSE// > //pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing > directive #ENDIF/ > *** > > > These should not be capitalized. > > Matt > > And the corresponding code lines are: > *** > *#INCLUDE ** > ** USE slepceps** > ** > **!** > ** IMPLICIT NONE** > **!--------------------**-- > **!** > **#IF DEFINED(PETSC_USE_FORTRAN_DATATYPES)** > **TYPE(Mat) A** > **TYPE(EPS) solver** > **#ELSE** > **Mat A** > **EPS solver** > **#ENDIF* > *** > > Sincerely. > > On 04/24/2014 07:25 PM, Satish Balay wrote: >> On Thu, 24 Apr 2014, Steve Ndengue wrote: >> >>> Dear all, >>> >>> I am having trouble compiling a code with some dependencies that shall call >>> SLEPC. >>> The compilation however goes perfectly for the various tutorials and tests in >>> the package. 
>>> >>> A sample makefile looks like: >>> >>> *** >>> >>> /default: code// >>> //routines: Rout1.o Rout2.o Rout3.o Module1.o// >>> //code: PgmPrincipal// >>> //sources=Rout1.f Rout2.f Rout3.f Module1.f90// >> Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 >> [this enables using default targets from petsc makefiles - and its >> the correct notation for preprocessing - which is required by petsc/slepc] >> >>> //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// >>> // >>> //%.o: %.f// >>> // -${FLINKER} -c $> And you shouldn't need to create the above .o.f target. >> >> Satish >> >> >>> // >>> //Module1_mod.mod Module1.o: //Module1//.f90// >>> // -${FLINKER} -c //Module1//.f90// >>> // >>> //include ${SLEPC_DIR}/conf/slepc_common// >>> // >>> //Pgm//Principal//: ${objets} chkopts// >>> // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// >>> // >>> //.PHONY: clean// >>> // ${RM} *.o *.mod Pgm//Principal// >>> / >>> *** >>> >>> The code exits with Warning and error messages of the type: >>> >>> *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** >>> **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** >>> **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** >>> **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* >>> >>> and >>> >>> *USE slepceps** >>> ** 1** >>> **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No >>> such file or directory >>> >>> >>> *I do not get these errors with the tests and errors files. >>> >>> >>> Sincerely, >>> >>> > > > -- > Steve > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 24 20:14:33 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Apr 2014 21:14:33 -0400 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) In-Reply-To: <5359B1D6.2040401@gmail.com> References: <5359A9C0.2030807@gmail.com> <5359AE00.9030004@gmail.com> <5359B1D6.2040401@gmail.com> Message-ID: On Thu, Apr 24, 2014 at 8:52 PM, Steve Ndengue wrote: > The results is the same with lower case letters... > Can you send the error? If your preprocessor does not understand #if, I can't see how it could compile anything. Matt > > On 04/24/2014 07:39 PM, Matthew Knepley wrote: > > On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue wrote: > >> Thank you for the quick reply. >> >> It partly solved the problem; the PETSC and SLEPC files are now included >> in the compilation. >> >> However the Warning are now error messages! >> *** >> *pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive >> #INCLUDE* >> *pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive >> #IF* >> *pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive >> #ELSE* >> *pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive >> #ENDIF* >> *** >> > > These should not be capitalized. > > Matt > > >> And the corresponding code lines are: >> *** >> *#INCLUDE * >> * USE slepceps* >> >> *!* >> * IMPLICIT NONE* >> *!--------------------* >> *-- **!* >> *#IF DEFINED(PETSC_USE_FORTRAN_DATATYPES)* >> * TYPE(Mat) A* >> * TYPE(EPS) solver* >> *#ELSE* >> * Mat A* >> * EPS solver* >> *#ENDIF* >> *** >> >> Sincerely. 
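The "invalid preprocessing directive" errors quoted above are what the preprocessor reports when the directives are written in upper case: the C preprocessor that PETSc and SLEPc run over .F/.F90 files only recognizes lowercase directives (#include, #if, #else, #endif) and the lowercase defined operator, while macro names such as PETSC_USE_FORTRAN_DATATYPES remain uppercase. As a minimal sketch, the quoted block with only that change (the file named on the #include line is not visible in the archived message, so a placeholder is used here):

***
! Sketch only: replace the placeholder on the next line with the SLEPc
! include file used in the original source (elided in the archive).
#include "slepc-include-file-goes-here"
      USE slepceps
!
      IMPLICIT NONE
!
! lowercase directives; the macro name itself stays uppercase
#if defined(PETSC_USE_FORTRAN_DATATYPES)
      TYPE(Mat) A
      TYPE(EPS) solver
#else
      Mat A
      EPS solver
#endif
***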
>> >> On 04/24/2014 07:25 PM, Satish Balay wrote: >> >> On Thu, 24 Apr 2014, Steve Ndengue wrote: >> >> >> Dear all, >> >> I am having trouble compiling a code with some dependencies that shall call >> SLEPC. >> The compilation however goes perfectly for the various tutorials and tests in >> the package. >> >> A sample makefile looks like: >> >> *** >> >> /default: code// >> //routines: Rout1.o Rout2.o Rout3.o Module1.o// >> //code: PgmPrincipal// >> //sources=Rout1.f Rout2.f Rout3.f Module1.f90// >> >> Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 >> [this enables using default targets from petsc makefiles - and its >> the correct notation for preprocessing - which is required by petsc/slepc] >> >> >> //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// >> // >> //%.o: %.f// >> // -${FLINKER} -c $> >> And you shouldn't need to create the above .o.f target. >> >> Satish >> >> >> >> // >> //Module1_mod.mod Module1.o: //Module1//.f90// >> // -${FLINKER} -c //Module1//.f90// >> // >> //include ${SLEPC_DIR}/conf/slepc_common// >> // >> //Pgm//Principal//: ${objets} chkopts// >> // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// >> // >> //.PHONY: clean// >> // ${RM} *.o *.mod Pgm//Principal// >> / >> *** >> >> The code exits with Warning and error messages of the type: >> >> *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** >> **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** >> **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** >> **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* >> >> and >> >> *USE slepceps** >> ** 1** >> **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No >> such file or directory >> >> >> *I do not get these errors with the tests and errors files. >> >> >> Sincerely, >> >> >> >> >> >> -- >> Steve >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > Steve > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd1 at rice.edu Thu Apr 24 20:27:34 2014 From: jsd1 at rice.edu (Justin Dong) Date: Thu, 24 Apr 2014 20:27:34 -0500 Subject: [petsc-users] How to configure PETSc with gcc-4.9 Message-ID: Hi all, I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, it's configured with gcc that comes with Xcode, but I want to use gcc-4.9 that I installed myself. I try: ./configure --with-cc=gcc-4.9 --with-fc=gfortran --download-f-blas-lapack --download-mpich But while configuring MPICH it gets stuck indefinitely. I've included the output below after I terminate the process. I don't have this issue if I just configure with cc=gcc. Any ideas what the problem is here? 
^CTraceback (most recent call last): File "./configure", line 10, in execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py')) File "./config/configure.py", line 372, in petsc_configure([]) File "./config/configure.py", line 287, in petsc_configure framework.configure(out = sys.stdout) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", line 933, in configure child.configure() File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 558, in configure self.executeTest(self.configureLibrary) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", line 115, in executeTest ret = apply(test, args,kargs) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 748, in configureLibrary config.package.Package.configureLibrary(self) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 486, in configureLibrary for location, directory, lib, incl in self.generateGuesses(): File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 232, in generateGuesses d = self.checkDownload(1) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 351, in checkDownload return config.package.Package.checkDownload(self, requireDownload) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 340, in checkDownload return self.getInstallDir() File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 187, in getInstallDir return os.path.abspath(self.Install()) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 366, in Install return self.InstallMPICH() File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 544, in InstallMPICH output,err,ret = config.base.Configure.executeShellCommand('cd '+mpichDir+' && ./configure '+args, timeout=2000, log = self.framework.log) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", line 254, in executeShellCommand (output, error, status) = runInShell(command, log, cwd) File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", line 243, in runInShell thread.join(timeout) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 958, in join self.__block.wait(delay) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 358, in wait _sleep(delay) KeyboardInterrupt Sincerely, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 24 20:35:30 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Apr 2014 21:35:30 -0400 Subject: [petsc-users] How to configure PETSc with gcc-4.9 In-Reply-To: References: Message-ID: On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > Hi all, > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, it's > configured with gcc that comes with Xcode, but I want to use gcc-4.9 that I > installed myself. I try: > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran > --download-f-blas-lapack --download-mpich > > > But while configuring MPICH it gets stuck indefinitely. 
I've included the > output below after I terminate the process. I don't have this issue if I > just configure with cc=gcc. Any ideas what the problem is here? > >From the stack trace, its just taking a long time. If it overruns the timeout, it should just die. You can try running using -useThreads=0 if you suspect the timeout is not working. Matt > ^CTraceback (most recent call last): > > File "./configure", line 10, in > > execfile(os.path.join(os.path.dirname(__file__), 'config', > 'configure.py')) > > File "./config/configure.py", line 372, in > > petsc_configure([]) > > File "./config/configure.py", line 287, in petsc_configure > > framework.configure(out = sys.stdout) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", > line 933, in configure > > child.configure() > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 558, in configure > > self.executeTest(self.configureLibrary) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", > line 115, in executeTest > > ret = apply(test, args,kargs) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 748, in configureLibrary > > config.package.Package.configureLibrary(self) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 486, in configureLibrary > > for location, directory, lib, incl in self.generateGuesses(): > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 232, in generateGuesses > > d = self.checkDownload(1) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 351, in checkDownload > > return config.package.Package.checkDownload(self, requireDownload) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 340, in checkDownload > > return self.getInstallDir() > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 187, in getInstallDir > > return os.path.abspath(self.Install()) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 366, in Install > > return self.InstallMPICH() > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 544, in InstallMPICH > > output,err,ret = config.base.Configure.executeShellCommand('cd > '+mpichDir+' && ./configure '+args, timeout=2000, log = self.framework.log) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > line 254, in executeShellCommand > > (output, error, status) = runInShell(command, log, cwd) > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > line 243, in runInShell > > thread.join(timeout) > > File > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > line 958, in join > > self.__block.wait(delay) > > File > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > line 358, in wait > > _sleep(delay) > > KeyboardInterrupt > > > > Sincerely, > > Justin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 24 21:58:29 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 24 Apr 2014 21:58:29 -0500 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) In-Reply-To: <5359B304.8030100@gmail.com> References: <5359B1D6.2040401@gmail.com> <5359B304.8030100@gmail.com> Message-ID: <3B6FCD6A-3E8E-4FE6-92FA-87A482D1163F@mcs.anl.gov> Try .F files instead of .F90 some Fortran compilers handle preprocessing in different ways for .F and .F90 Please also send configure.log and make.log so we know what compilers etc you are using. Barry Note that we are using C pre processing directives, On Apr 24, 2014, at 7:57 PM, Steve Ndengue wrote: > The results is the same with lower case letters... > > > On 04/24/2014 07:39 PM, Matthew Knepley wrote: >> On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue wrote: >> Thank you for the quick reply. >> >> It partly solved the problem; the PETSC and SLEPC files are now included in the compilation. >> >> However the Warning are now error messages! >> *** >> pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive #INCLUDE >> pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive #IF >> pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive #ELSE >> pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive #ENDIF >> *** >> >> These should not be capitalized. >> >> Matt >> >> And the corresponding code lines are: >> *** >> #INCLUDE >> USE slepceps >> >> ! >> IMPLICIT NONE >> !---------------------- >> ! >> #IF DEFINED(PETSC_USE_FORTRAN_DATATYPES) >> TYPE(Mat) A >> TYPE(EPS) solver >> #ELSE >> Mat A >> EPS solver >> #ENDIF >> *** >> >> Sincerely. >> >> On 04/24/2014 07:25 PM, Satish Balay wrote: >>> On Thu, 24 Apr 2014, Steve Ndengue wrote: >>> >>> >>>> Dear all, >>>> >>>> I am having trouble compiling a code with some dependencies that shall call >>>> SLEPC. >>>> The compilation however goes perfectly for the various tutorials and tests in >>>> the package. >>>> >>>> A sample makefile looks like: >>>> >>>> *** >>>> >>>> /default: code// >>>> //routines: Rout1.o Rout2.o Rout3.o Module1.o// >>>> //code: PgmPrincipal// >>>> //sources=Rout1.f Rout2.f Rout3.f Module1.f90// >>>> >>> Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 >>> [this enables using default targets from petsc makefiles - and its >>> the correct notation for preprocessing - which is required by petsc/slepc] >>> >>> >>>> //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// >>>> // >>>> //%.o: %.f// >>>> // -${FLINKER} -c $>>> >>> And you shouldn't need to create the above .o.f target. 
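Putting together the two suggestions quoted above (rename the sources to .F/.F90 and drop the hand-written %.o compile rule so the default targets from the PETSc/SLEPc makefiles take over), a minimal sketch of what the sample makefile could look like. The file and target names are the ones from the example; the sketch assumes the default compile rules supplied by the included slepc_common makefile, so treat it as an illustration rather than a verified build file:

***
# Sketch only: sources renamed to Rout1.F90, Rout2.F90, Rout3.F90, Module1.F90
# so the default .F90 -> .o rules from the included makefiles apply.
objets = Rout1.o Rout2.o Rout3.o Module1.o PgmPrincipal.o

include ${SLEPC_DIR}/conf/slepc_common

# make sure the module is compiled before the program that USEs it
PgmPrincipal.o: Module1.o

# build with: make PgmPrincipal
PgmPrincipal: ${objets} chkopts
	-${FLINKER} -o $@ ${objets} ${SLEPC_LIB}

# named 'myclean' to avoid clashing with any clean rule the included
# makefiles may already define
.PHONY: myclean
myclean:
	${RM} *.o *.mod PgmPrincipal
***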
>>> >>> Satish >>> >>> >>> >>>> // >>>> //Module1_mod.mod Module1.o: //Module1//.f90// >>>> // -${FLINKER} -c //Module1//.f90// >>>> // >>>> //include ${SLEPC_DIR}/conf/slepc_common// >>>> // >>>> //Pgm//Principal//: ${objets} chkopts// >>>> // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// >>>> // >>>> //.PHONY: clean// >>>> // ${RM} *.o *.mod Pgm//Principal// >>>> / >>>> *** >>>> >>>> The code exits with Warning and error messages of the type: >>>> >>>> *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** >>>> **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** >>>> **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** >>>> **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* >>>> >>>> and >>>> >>>> *USE slepceps** >>>> ** 1** >>>> **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No >>>> such file or directory >>>> >>>> >>>> *I do not get these errors with the tests and errors files. >>>> >>>> >>>> Sincerely, >>>> >>>> >>>> >> >> >> -- >> Steve >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > -- > Steve > > > From bsmith at mcs.anl.gov Thu Apr 24 22:03:45 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 24 Apr 2014 22:03:45 -0500 Subject: [petsc-users] How to configure PETSc with gcc-4.9 In-Reply-To: References: Message-ID: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov> Leave it to run for several hours it could be that gcc-4.9 is just taking a very long time on mpich. Barry On Apr 24, 2014, at 8:35 PM, Matthew Knepley wrote: > On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > Hi all, > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, it's configured with gcc that comes with Xcode, but I want to use gcc-4.9 that I installed myself. I try: > > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran --download-f-blas-lapack --download-mpich > > > > But while configuring MPICH it gets stuck indefinitely. I've included the output below after I terminate the process. I don't have this issue if I just configure with cc=gcc. Any ideas what the problem is here? > > From the stack trace, its just taking a long time. If it overruns the timeout, it should just die. You can try > running using -useThreads=0 if you suspect the timeout is not working. 
> > Matt > > ^CTraceback (most recent call last): > > File "./configure", line 10, in > > execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py')) > > File "./config/configure.py", line 372, in > > petsc_configure([]) > > File "./config/configure.py", line 287, in petsc_configure > > framework.configure(out = sys.stdout) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", line 933, in configure > > child.configure() > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 558, in configure > > self.executeTest(self.configureLibrary) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", line 115, in executeTest > > ret = apply(test, args,kargs) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 748, in configureLibrary > > config.package.Package.configureLibrary(self) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 486, in configureLibrary > > for location, directory, lib, incl in self.generateGuesses(): > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 232, in generateGuesses > > d = self.checkDownload(1) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 351, in checkDownload > > return config.package.Package.checkDownload(self, requireDownload) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 340, in checkDownload > > return self.getInstallDir() > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 187, in getInstallDir > > return os.path.abspath(self.Install()) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 366, in Install > > return self.InstallMPICH() > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 544, in InstallMPICH > > output,err,ret = config.base.Configure.executeShellCommand('cd '+mpichDir+' && ./configure '+args, timeout=2000, log = self.framework.log) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", line 254, in executeShellCommand > > (output, error, status) = runInShell(command, log, cwd) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", line 243, in runInShell > > thread.join(timeout) > > File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 958, in join > > self.__block.wait(delay) > > File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 358, in wait > > _sleep(delay) > > > KeyboardInterrupt > > > > > > Sincerely, > > Justin > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener From balay at mcs.anl.gov Thu Apr 24 22:14:07 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 24 Apr 2014 22:14:07 -0500 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) In-Reply-To: <3B6FCD6A-3E8E-4FE6-92FA-87A482D1163F@mcs.anl.gov> References: <5359B1D6.2040401@gmail.com> <5359B304.8030100@gmail.com> <3B6FCD6A-3E8E-4FE6-92FA-87A482D1163F@mcs.anl.gov> Message-ID: petsc examples work fine. send us the code that breaks. Also copy/paste the *complete* output from 'make' when you attempt to compile this code. Satish On Thu, 24 Apr 2014, Barry Smith wrote: > > Try .F files instead of .F90 some Fortran compilers handle preprocessing in different ways for .F and .F90 > > Please also send configure.log and make.log so we know what compilers etc you are using. > > Barry > > Note that we are using C pre processing directives, > > On Apr 24, 2014, at 7:57 PM, Steve Ndengue wrote: > > > The results is the same with lower case letters... > > > > > > On 04/24/2014 07:39 PM, Matthew Knepley wrote: > >> On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue wrote: > >> Thank you for the quick reply. > >> > >> It partly solved the problem; the PETSC and SLEPC files are now included in the compilation. > >> > >> However the Warning are now error messages! > >> *** > >> pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive #INCLUDE > >> pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive #IF > >> pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive #ELSE > >> pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive #ENDIF > >> *** > >> > >> These should not be capitalized. > >> > >> Matt > >> > >> And the corresponding code lines are: > >> *** > >> #INCLUDE > >> USE slepceps > >> > >> ! > >> IMPLICIT NONE > >> !---------------------- > >> ! > >> #IF DEFINED(PETSC_USE_FORTRAN_DATATYPES) > >> TYPE(Mat) A > >> TYPE(EPS) solver > >> #ELSE > >> Mat A > >> EPS solver > >> #ENDIF > >> *** > >> > >> Sincerely. > >> > >> On 04/24/2014 07:25 PM, Satish Balay wrote: > >>> On Thu, 24 Apr 2014, Steve Ndengue wrote: > >>> > >>> > >>>> Dear all, > >>>> > >>>> I am having trouble compiling a code with some dependencies that shall call > >>>> SLEPC. > >>>> The compilation however goes perfectly for the various tutorials and tests in > >>>> the package. > >>>> > >>>> A sample makefile looks like: > >>>> > >>>> *** > >>>> > >>>> /default: code// > >>>> //routines: Rout1.o Rout2.o Rout3.o Module1.o// > >>>> //code: PgmPrincipal// > >>>> //sources=Rout1.f Rout2.f Rout3.f Module1.f90// > >>>> > >>> Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 > >>> [this enables using default targets from petsc makefiles - and its > >>> the correct notation for preprocessing - which is required by petsc/slepc] > >>> > >>> > >>>> //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// > >>>> // > >>>> //%.o: %.f// > >>>> // -${FLINKER} -c $ >>>> > >>> And you shouldn't need to create the above .o.f target. 
> >>> > >>> Satish > >>> > >>> > >>> > >>>> // > >>>> //Module1_mod.mod Module1.o: //Module1//.f90// > >>>> // -${FLINKER} -c //Module1//.f90// > >>>> // > >>>> //include ${SLEPC_DIR}/conf/slepc_common// > >>>> // > >>>> //Pgm//Principal//: ${objets} chkopts// > >>>> // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// > >>>> // > >>>> //.PHONY: clean// > >>>> // ${RM} *.o *.mod Pgm//Principal// > >>>> / > >>>> *** > >>>> > >>>> The code exits with Warning and error messages of the type: > >>>> > >>>> *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** > >>>> **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** > >>>> **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** > >>>> **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* > >>>> > >>>> and > >>>> > >>>> *USE slepceps** > >>>> ** 1** > >>>> **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No > >>>> such file or directory > >>>> > >>>> > >>>> *I do not get these errors with the tests and errors files. > >>>> > >>>> > >>>> Sincerely, > >>>> > >>>> > >>>> > >> > >> > >> -- > >> Steve > >> > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > >> -- Norbert Wiener > > > > > > -- > > Steve > > > > > > > > From jsd1 at rice.edu Fri Apr 25 00:25:36 2014 From: jsd1 at rice.edu (Justin Dong) Date: Fri, 25 Apr 2014 00:25:36 -0500 Subject: [petsc-users] How to configure PETSc with gcc-4.9 In-Reply-To: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov> References: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov> Message-ID: I'll leave it running for a few hours and see what happens. In the meantime, is it possible to tell PETSc to compile a specific .c file using gcc-4.9? Something along the lines of this: ALL: include ${PETSC_DIR}/conf/variables include ${PETSC_DIR}/conf/rules include ${PETSC_DIR}/conf/test main: main.o gcc-4.9 -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB} ${RM} -f main.o The main reason I want to compile with gcc-4.9 is because I need to use openmp for a simple parallel for loop. With the gcc compiler I have now (clang), there's no support for openmp. I can get gcc-4.9 to work outside of PETSc, but am not sure how to get it to work here if I want to do it for a single example, or if that's even possible. On Thu, Apr 24, 2014 at 10:03 PM, Barry Smith wrote: > > Leave it to run for several hours it could be that gcc-4.9 is just > taking a very long time on mpich. > > Barry > > On Apr 24, 2014, at 8:35 PM, Matthew Knepley wrote: > > > On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > > Hi all, > > > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, it's > configured with gcc that comes with Xcode, but I want to use gcc-4.9 that I > installed myself. I try: > > > > > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran > --download-f-blas-lapack --download-mpich > > > > > > > > But while configuring MPICH it gets stuck indefinitely. I've included > the output below after I terminate the process. I don't have this issue if > I just configure with cc=gcc. Any ideas what the problem is here? > > > > From the stack trace, its just taking a long time. If it overruns the > timeout, it should just die. You can try > > running using -useThreads=0 if you suspect the timeout is not working. 
> > > > Matt > > > > ^CTraceback (most recent call last): > > > > File "./configure", line 10, in > > > > execfile(os.path.join(os.path.dirname(__file__), 'config', > 'configure.py')) > > > > File "./config/configure.py", line 372, in > > > > petsc_configure([]) > > > > File "./config/configure.py", line 287, in petsc_configure > > > > framework.configure(out = sys.stdout) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", > line 933, in configure > > > > child.configure() > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 558, in configure > > > > self.executeTest(self.configureLibrary) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", > line 115, in executeTest > > > > ret = apply(test, args,kargs) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 748, in configureLibrary > > > > config.package.Package.configureLibrary(self) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 486, in configureLibrary > > > > for location, directory, lib, incl in self.generateGuesses(): > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 232, in generateGuesses > > > > d = self.checkDownload(1) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 351, in checkDownload > > > > return config.package.Package.checkDownload(self, requireDownload) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 340, in checkDownload > > > > return self.getInstallDir() > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > line 187, in getInstallDir > > > > return os.path.abspath(self.Install()) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 366, in Install > > > > return self.InstallMPICH() > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > line 544, in InstallMPICH > > > > output,err,ret = config.base.Configure.executeShellCommand('cd > '+mpichDir+' && ./configure '+args, timeout=2000, log = self.framework.log) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > line 254, in executeShellCommand > > > > (output, error, status) = runInShell(command, log, cwd) > > > > File > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > line 243, in runInShell > > > > thread.join(timeout) > > > > File > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > line 958, in join > > > > self.__block.wait(delay) > > > > File > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > line 358, in wait > > > > _sleep(delay) > > > > > > KeyboardInterrupt > > > > > > > > > > > > Sincerely, > > > > Justin > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Fri Apr 25 00:46:04 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 25 Apr 2014 00:46:04 -0500 Subject: [petsc-users] How to configure PETSc with gcc-4.9 In-Reply-To: References: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov> Message-ID: if you are primarily intesrested in -fopenmp - and not mpi - just configure petsc with --with-mpi=0 If you have currently working build with clang - and would like to just compile your code with gcc-4.9 - you can try: export MPICH_CC=gcc-4.9 make [your-target] It might work. Or you might get link errors. [If you get errors - its best to rebuild PETSc with gcc-4.9] BTW: you can use a different PETSC_ARCH for each of your builds - so that they don't overwrite each other.. ./configure PETSC_ARCH=arch-gcc --with-cc=gcc-4.9 --with-cxx=0 --with-mpi=0 [on osx petsc can use VecLib so --download-f-blas-lapack shouldn't be needed] Satish On Fri, 25 Apr 2014, Justin Dong wrote: > I'll leave it running for a few hours and see what happens. > > In the meantime, is it possible to tell PETSc to compile a specific .c file > using gcc-4.9? Something along the lines of this: > > ALL: > > include ${PETSC_DIR}/conf/variables > include ${PETSC_DIR}/conf/rules > include ${PETSC_DIR}/conf/test > > main: main.o > gcc-4.9 -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB} > ${RM} -f main.o > > The main reason I want to compile with gcc-4.9 is because I need to use > openmp for a simple parallel for loop. With the gcc compiler I have now > (clang), there's no support for openmp. I can get gcc-4.9 to work outside > of PETSc, but am not sure how to get it to work here if I want to do it for > a single example, or if that's even possible. > > > On Thu, Apr 24, 2014 at 10:03 PM, Barry Smith wrote: > > > > > Leave it to run for several hours it could be that gcc-4.9 is just > > taking a very long time on mpich. > > > > Barry > > > > On Apr 24, 2014, at 8:35 PM, Matthew Knepley wrote: > > > > > On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > > > Hi all, > > > > > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, it's > > configured with gcc that comes with Xcode, but I want to use gcc-4.9 that I > > installed myself. I try: > > > > > > > > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran > > --download-f-blas-lapack --download-mpich > > > > > > > > > > > > But while configuring MPICH it gets stuck indefinitely. I've included > > the output below after I terminate the process. I don't have this issue if > > I just configure with cc=gcc. Any ideas what the problem is here? > > > > > > From the stack trace, its just taking a long time. If it overruns the > > timeout, it should just die. You can try > > > running using -useThreads=0 if you suspect the timeout is not working. 
> > > > > > Matt > > > > > > ^CTraceback (most recent call last): > > > > > > File "./configure", line 10, in > > > > > > execfile(os.path.join(os.path.dirname(__file__), 'config', > > 'configure.py')) > > > > > > File "./config/configure.py", line 372, in > > > > > > petsc_configure([]) > > > > > > File "./config/configure.py", line 287, in petsc_configure > > > > > > framework.configure(out = sys.stdout) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", > > line 933, in configure > > > > > > child.configure() > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > line 558, in configure > > > > > > self.executeTest(self.configureLibrary) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", > > line 115, in executeTest > > > > > > ret = apply(test, args,kargs) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > line 748, in configureLibrary > > > > > > config.package.Package.configureLibrary(self) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > line 486, in configureLibrary > > > > > > for location, directory, lib, incl in self.generateGuesses(): > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > line 232, in generateGuesses > > > > > > d = self.checkDownload(1) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > line 351, in checkDownload > > > > > > return config.package.Package.checkDownload(self, requireDownload) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > line 340, in checkDownload > > > > > > return self.getInstallDir() > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > line 187, in getInstallDir > > > > > > return os.path.abspath(self.Install()) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > line 366, in Install > > > > > > return self.InstallMPICH() > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > line 544, in InstallMPICH > > > > > > output,err,ret = config.base.Configure.executeShellCommand('cd > > '+mpichDir+' && ./configure '+args, timeout=2000, log = self.framework.log) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > > line 254, in executeShellCommand > > > > > > (output, error, status) = runInShell(command, log, cwd) > > > > > > File > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > > line 243, in runInShell > > > > > > thread.join(timeout) > > > > > > File > > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > > line 958, in join > > > > > > self.__block.wait(delay) > > > > > > File > > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > > line 358, in wait > > > > > > _sleep(delay) > > > > > > > > > KeyboardInterrupt > > > > > > > > > > > > > > > > > > Sincerely, > > > > > > Justin > > > > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their > > experiments lead. > > > -- Norbert Wiener > > > > > From jsd1 at rice.edu Fri Apr 25 01:36:56 2014 From: jsd1 at rice.edu (Justin Dong) Date: Fri, 25 Apr 2014 01:36:56 -0500 Subject: [petsc-users] How to configure PETSc with gcc-4.9 In-Reply-To: References: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov> Message-ID: Hi Satish, Thanks, that installation worked and it's now compiling with gcc-4.9. However, I'm still having the issue that OpenMP isn't working when I'm using PETSc in the same code. I'm including this segment at the beginning of my code: int nthreads, tid; #pragma omp parallel private(nthreads, tid) { /* Obtain thread number */ tid = omp_get_thread_num(); printf("Hello World from thread = %d\n", tid); /* Only master thread does this */ if (tid == 0) { nthreads = omp_get_num_threads(); printf("Number of threads = %d\n", nthreads); } } I have no problems getting this to run outside of PETSc, with typical output as follows: Hello World from thread = 0 Hello World from thread = 1 Hello World from thread = 3 Hello World from thread = 2 Number of threads = 4 But when I include that snippet of code in my larger PETSc code, all I get is Hello World from thread = 0 Number of threads = 1 My makefile is just ALL: include ${PETSC_DIR}/conf/variables include ${PETSC_DIR}/conf/rules include ${PETSC_DIR}/conf/test main: main.o -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB} ${RM} -f main.o I don't get any errors when compiling, as far as I can tell. This is the output when compiling: gcc-4.9 -o main.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 -I/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/include -I/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/arch-gcc/include -I/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/include/mpiuni -D__INSDIR__= main.c gcc-4.9 -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 -fopenmp -o main main.o -L/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/arch-gcc/lib -L/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/arch-gcc/lib -lpetsc -llapack -lblas -lpthread -L/usr/local/lib/gcc/x86_64-apple-darwin13.1.0/4.9.0 -L/usr/local/lib -ldl -lgfortran -lgfortran -lquadmath -lm -lm -lSystem -lgcc_ext.10.5 -ldl /bin/rm -f -f main.o On Fri, Apr 25, 2014 at 12:46 AM, Satish Balay wrote: > if you are primarily intesrested in -fopenmp - and not mpi - just > configure petsc with --with-mpi=0 > > If you have currently working build with clang - and would > like to just compile your code with gcc-4.9 - you can try: > > export MPICH_CC=gcc-4.9 > make [your-target] > > It might work. Or you might get link errors. [If you get > errors - its best to rebuild PETSc with gcc-4.9] > > BTW: you can use a different PETSC_ARCH for each of your builds - so > that they don't overwrite each other.. > > ./configure PETSC_ARCH=arch-gcc --with-cc=gcc-4.9 --with-cxx=0 --with-mpi=0 > > [on osx petsc can use VecLib so --download-f-blas-lapack shouldn't be > needed] > > Satish > > On Fri, 25 Apr 2014, Justin Dong wrote: > > > I'll leave it running for a few hours and see what happens. > > > > In the meantime, is it possible to tell PETSc to compile a specific .c > file > > using gcc-4.9? 
Something along the lines of this: > > > > ALL: > > > > include ${PETSC_DIR}/conf/variables > > include ${PETSC_DIR}/conf/rules > > include ${PETSC_DIR}/conf/test > > > > main: main.o > > gcc-4.9 -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB} > > ${RM} -f main.o > > > > The main reason I want to compile with gcc-4.9 is because I need to use > > openmp for a simple parallel for loop. With the gcc compiler I have now > > (clang), there's no support for openmp. I can get gcc-4.9 to work outside > > of PETSc, but am not sure how to get it to work here if I want to do it > for > > a single example, or if that's even possible. > > > > > > On Thu, Apr 24, 2014 at 10:03 PM, Barry Smith > wrote: > > > > > > > > Leave it to run for several hours it could be that gcc-4.9 is just > > > taking a very long time on mpich. > > > > > > Barry > > > > > > On Apr 24, 2014, at 8:35 PM, Matthew Knepley > wrote: > > > > > > > On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > > > > Hi all, > > > > > > > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, > it's > > > configured with gcc that comes with Xcode, but I want to use gcc-4.9 > that I > > > installed myself. I try: > > > > > > > > > > > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran > > > --download-f-blas-lapack --download-mpich > > > > > > > > > > > > > > > > But while configuring MPICH it gets stuck indefinitely. I've included > > > the output below after I terminate the process. I don't have this > issue if > > > I just configure with cc=gcc. Any ideas what the problem is here? > > > > > > > > From the stack trace, its just taking a long time. If it overruns the > > > timeout, it should just die. You can try > > > > running using -useThreads=0 if you suspect the timeout is not > working. 
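One detail worth noting in the compile transcript earlier in this message: -fopenmp appears on the link line only, while the compile line generated for main.c has no -fopenmp (and -Wno-unknown-pragmas hides the resulting warning), so gcc most likely compiles the #pragma omp parallel block as serial code, which would explain the single thread. A hedged sketch of a makefile that also passes the flag at compile time, via an explicit rule for main.o; ${PCC}, ${PCC_FLAGS} and ${CCPPFLAGS} are assumed here to be the compiler and flag variables provided by the included PETSc makefiles (check conf/variables for the exact names in your install):

ALL:

include ${PETSC_DIR}/conf/variables
include ${PETSC_DIR}/conf/rules

# Sketch only: -fopenmp must be present when main.c is compiled, not just
# when main is linked, otherwise the OpenMP pragmas are ignored.
main.o: main.c
	${PCC} -fopenmp -c ${PCC_FLAGS} ${CCPPFLAGS} main.c

main: main.o
	-${CLINKER} -fopenmp -o main main.o ${PETSC_LIB}
	${RM} -f main.o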
> > > > > > > > Matt > > > > > > > > ^CTraceback (most recent call last): > > > > > > > > File "./configure", line 10, in > > > > > > > > execfile(os.path.join(os.path.dirname(__file__), 'config', > > > 'configure.py')) > > > > > > > > File "./config/configure.py", line 372, in > > > > > > > > petsc_configure([]) > > > > > > > > File "./config/configure.py", line 287, in petsc_configure > > > > > > > > framework.configure(out = sys.stdout) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", > > > line 933, in configure > > > > > > > > child.configure() > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > line 558, in configure > > > > > > > > self.executeTest(self.configureLibrary) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", > > > line 115, in executeTest > > > > > > > > ret = apply(test, args,kargs) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > line 748, in configureLibrary > > > > > > > > config.package.Package.configureLibrary(self) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > line 486, in configureLibrary > > > > > > > > for location, directory, lib, incl in self.generateGuesses(): > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > line 232, in generateGuesses > > > > > > > > d = self.checkDownload(1) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > line 351, in checkDownload > > > > > > > > return config.package.Package.checkDownload(self, > requireDownload) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > line 340, in checkDownload > > > > > > > > return self.getInstallDir() > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > line 187, in getInstallDir > > > > > > > > return os.path.abspath(self.Install()) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > line 366, in Install > > > > > > > > return self.InstallMPICH() > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > line 544, in InstallMPICH > > > > > > > > output,err,ret = config.base.Configure.executeShellCommand('cd > > > '+mpichDir+' && ./configure '+args, timeout=2000, log = > self.framework.log) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > > > line 254, in executeShellCommand > > > > > > > > (output, error, status) = runInShell(command, log, cwd) > > > > > > > > File > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > > > line 243, in runInShell > > > > > > > > thread.join(timeout) > > > > > > > > File > > > > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > > > line 958, in join > > > > > > > > self.__block.wait(delay) > > > > > > > > File > > > > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > > > line 
358, in wait
> > > > _sleep(delay)
> > > > KeyboardInterrupt
> > > >
> > > > Sincerely,
> > > > Justin
> > > >
> > > > --
> > > > What most experimenters take for granted before they begin their
> > > > experiments is infinitely more interesting than any results to which their
> > > > experiments lead.
> > > > -- Norbert Wiener

From balay at mcs.anl.gov Fri Apr 25 01:44:21 2014
From: balay at mcs.anl.gov (Satish Balay)
Date: Fri, 25 Apr 2014 01:44:21 -0500
Subject: [petsc-users] How to configure PETSc with gcc-4.9
In-Reply-To:
References: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov>
Message-ID:

On Fri, 25 Apr 2014, Justin Dong wrote:

> Hi Satish,
>
> Thanks, that installation worked and it's now compiling with gcc-4.9.
> However, I'm still having the issue that OpenMP isn't working when I'm
> using PETSc in the same code. I'm including this segment at the beginning
> of my code:
>
> int nthreads, tid;
>
> #pragma omp parallel private(nthreads, tid)
> {
>     /* Obtain thread number */
>     tid = omp_get_thread_num();
>     printf("Hello World from thread = %d\n", tid);
>
>     /* Only master thread does this */
>     if (tid == 0)
>     {
>         nthreads = omp_get_num_threads();
>         printf("Number of threads = %d\n", nthreads);
>     }
> }
>
> I have no problems getting this to run outside of PETSc, with typical
> output as follows:
>
> Hello World from thread = 0
> Hello World from thread = 1
> Hello World from thread = 3
> Hello World from thread = 2
> Number of threads = 4
>
> But when I include that snippet of code in my larger PETSc code, all I get
> is
>
> Hello World from thread = 0
> Number of threads = 1
>
> My makefile is just
>
> ALL:

Add the line: CFLAGS = -fopenmp

> include ${PETSC_DIR}/conf/variables
> include ${PETSC_DIR}/conf/rules
> include ${PETSC_DIR}/conf/test
>
> main: main.o
> -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB}

you can remove -fopenmp from here.

Satish

> ${RM} -f main.o
>
> I don't get any errors when compiling, as far as I can tell. This is the
> output when compiling:
>
> gcc-4.9 -o main.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing
> -Wno-unknown-pragmas -g3 -fno-inline -O0
> -I/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/include
> -I/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/arch-gcc/include
> -I/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/include/mpiuni
> -D__INSDIR__= main.c
>
> gcc-4.9 -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
> -Wl,-commons,use_dylibs -Wl,-search_paths_first -fPIC -Wall
> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline
> -O0 -fopenmp -o main main.o
> -L/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/arch-gcc/lib
> -L/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/arch-gcc/lib -lpetsc
> -llapack -lblas -lpthread
> -L/usr/local/lib/gcc/x86_64-apple-darwin13.1.0/4.9.0 -L/usr/local/lib -ldl
> -lgfortran -lgfortran -lquadmath -lm -lm -lSystem -lgcc_ext.10.5 -ldl
>
> /bin/rm -f -f main.o
>
> On Fri, Apr 25, 2014 at 12:46 AM, Satish Balay wrote:
>
> > if you are primarily interested in -fopenmp - and not mpi - just
> > configure petsc with --with-mpi=0
> >
> > If you have currently working build with clang - and would
> > like to just compile your code with gcc-4.9 - you can try:
> >
> > export MPICH_CC=gcc-4.9
> > make [your-target]
> >
> > It might work.
Or you might get link errors. [If you get > > errors - its best to rebuild PETSc with gcc-4.9] > > > > BTW: you can use a different PETSC_ARCH for each of your builds - so > > that they don't overwrite each other.. > > > > ./configure PETSC_ARCH=arch-gcc --with-cc=gcc-4.9 --with-cxx=0 --with-mpi=0 > > > > [on osx petsc can use VecLib so --download-f-blas-lapack shouldn't be > > needed] > > > > Satish > > > > On Fri, 25 Apr 2014, Justin Dong wrote: > > > > > I'll leave it running for a few hours and see what happens. > > > > > > In the meantime, is it possible to tell PETSc to compile a specific .c > > file > > > using gcc-4.9? Something along the lines of this: > > > > > > ALL: > > > > > > include ${PETSC_DIR}/conf/variables > > > include ${PETSC_DIR}/conf/rules > > > include ${PETSC_DIR}/conf/test > > > > > > main: main.o > > > gcc-4.9 -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB} > > > ${RM} -f main.o > > > > > > The main reason I want to compile with gcc-4.9 is because I need to use > > > openmp for a simple parallel for loop. With the gcc compiler I have now > > > (clang), there's no support for openmp. I can get gcc-4.9 to work outside > > > of PETSc, but am not sure how to get it to work here if I want to do it > > for > > > a single example, or if that's even possible. > > > > > > > > > On Thu, Apr 24, 2014 at 10:03 PM, Barry Smith > > wrote: > > > > > > > > > > > Leave it to run for several hours it could be that gcc-4.9 is just > > > > taking a very long time on mpich. > > > > > > > > Barry > > > > > > > > On Apr 24, 2014, at 8:35 PM, Matthew Knepley > > wrote: > > > > > > > > > On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > > > > > Hi all, > > > > > > > > > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, > > it's > > > > configured with gcc that comes with Xcode, but I want to use gcc-4.9 > > that I > > > > installed myself. I try: > > > > > > > > > > > > > > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran > > > > --download-f-blas-lapack --download-mpich > > > > > > > > > > > > > > > > > > > > But while configuring MPICH it gets stuck indefinitely. I've included > > > > the output below after I terminate the process. I don't have this > > issue if > > > > I just configure with cc=gcc. Any ideas what the problem is here? > > > > > > > > > > From the stack trace, its just taking a long time. If it overruns the > > > > timeout, it should just die. You can try > > > > > running using -useThreads=0 if you suspect the timeout is not > > working. 
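To spell out Satish's two inline edits above in one place: with CFLAGS = -fopenmp added near the top and -fopenmp dropped from the link rule, Justin's makefile would look roughly as follows. This is a sketch assembled from the snippets quoted in this thread (PETSc 3.4-era conf/ layout, same main/main.o names), not a tested file.

ALL:

# -fopenmp is supplied once here; per Satish, CFLAGS is picked up by the
# PETSc conf/ rules, so the flag does not need to be repeated on the link line.
CFLAGS = -fopenmp

include ${PETSC_DIR}/conf/variables
include ${PETSC_DIR}/conf/rules
include ${PETSC_DIR}/conf/test

# Recipe lines must start with a tab character.
main: main.o
	-${CLINKER} -o main main.o ${PETSC_LIB}
	${RM} -f main.o

After this change the gcc-4.9 compile line for main.c should include -fopenmp as well, which is what makes the omp pragma take effect instead of being ignored as an unknown pragma.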
> > > > > > > > > > Matt > > > > > > > > > > ^CTraceback (most recent call last): > > > > > > > > > > File "./configure", line 10, in > > > > > > > > > > execfile(os.path.join(os.path.dirname(__file__), 'config', > > > > 'configure.py')) > > > > > > > > > > File "./config/configure.py", line 372, in > > > > > > > > > > petsc_configure([]) > > > > > > > > > > File "./config/configure.py", line 287, in petsc_configure > > > > > > > > > > framework.configure(out = sys.stdout) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", > > > > line 933, in configure > > > > > > > > > > child.configure() > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > > line 558, in configure > > > > > > > > > > self.executeTest(self.configureLibrary) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", > > > > line 115, in executeTest > > > > > > > > > > ret = apply(test, args,kargs) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > > line 748, in configureLibrary > > > > > > > > > > config.package.Package.configureLibrary(self) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > > line 486, in configureLibrary > > > > > > > > > > for location, directory, lib, incl in self.generateGuesses(): > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > > line 232, in generateGuesses > > > > > > > > > > d = self.checkDownload(1) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > > line 351, in checkDownload > > > > > > > > > > return config.package.Package.checkDownload(self, > > requireDownload) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > > line 340, in checkDownload > > > > > > > > > > return self.getInstallDir() > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", > > > > line 187, in getInstallDir > > > > > > > > > > return os.path.abspath(self.Install()) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > > line 366, in Install > > > > > > > > > > return self.InstallMPICH() > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", > > > > line 544, in InstallMPICH > > > > > > > > > > output,err,ret = config.base.Configure.executeShellCommand('cd > > > > '+mpichDir+' && ./configure '+args, timeout=2000, log = > > self.framework.log) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > > > > line 254, in executeShellCommand > > > > > > > > > > (output, error, status) = runInShell(command, log, cwd) > > > > > > > > > > File > > > > > > "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", > > > > line 243, in runInShell > > > > > > > > > > thread.join(timeout) > > > > > > > > > > File > > > > > > 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > > > > line 958, in join > > > > > > > > > > self.__block.wait(delay) > > > > > > > > > > File > > > > > > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", > > > > line 358, in wait > > > > > > > > > > _sleep(delay) > > > > > > > > > > > > > > > KeyboardInterrupt > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Sincerely, > > > > > > > > > > Justin > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > What most experimenters take for granted before they begin their > > > > experiments is infinitely more interesting than any results to which > > their > > > > experiments lead. > > > > > -- Norbert Wiener > > > > > > > > > > > > > > > > From wumeng07maths at qq.com Fri Apr 25 03:13:27 2014 From: wumeng07maths at qq.com (=?gb18030?B?T28gICAgICA=?=) Date: Fri, 25 Apr 2014 16:13:27 +0800 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: <4EC22F71-3552-402B-8886-9180229E1BC2@mcs.anl.gov> References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> <4EC22F71-3552-402B-8886-9180229E1BC2@mcs.anl.gov> Message-ID: Hi, What I can do for solving this linear system? At this moment, I use the common line "/Users/wumeng/MyWork/BuildMSplineTools/bin/msplinePDE_PFEM_2 -ksp_gmres_restart 200 -ksp_max_it 200 -ksp_monitor_true_residual -ksp_monitor_singular_value" The following are the output: ============================================ 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00 ============================================= 0 KSP preconditioned resid norm 1.414935390756e+03 true resid norm 1.787617427503e+01 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 1.414935390756e+03 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 1.384179962960e+03 true resid norm 1.802958083039e+01 ||r(i)||/||b|| 1.008581621157e+00 1 KSP Residual 
norm 1.384179962960e+03 % max 1.895999321723e+01 min 1.895999321723e+01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 1.382373771674e+03 true resid norm 1.813982830244e+01 ||r(i)||/||b|| 1.014748906749e+00 2 KSP Residual norm 1.382373771674e+03 % max 3.551645921348e+01 min 6.051184451182e+00 max/min 5.869340044086e+00 3 KSP preconditioned resid norm 1.332723893134e+03 true resid norm 1.774590608681e+01 ||r(i)||/||b|| 9.927127479173e-01 3 KSP Residual norm 1.332723893134e+03 % max 5.076435191911e+01 min 4.941060752900e+00 max/min 1.027397849527e+01 4 KSP preconditioned resid norm 1.093788576095e+03 true resid norm 1.717433802192e+01 ||r(i)||/||b|| 9.607390125925e-01 4 KSP Residual norm 1.093788576095e+03 % max 6.610818819562e+01 min 1.960683367943e+00 max/min 3.371691180560e+01 5 KSP preconditioned resid norm 1.077330470806e+03 true resid norm 1.750861653688e+01 ||r(i)||/||b|| 9.794386800837e-01 5 KSP Residual norm 1.077330470806e+03 % max 7.296006568006e+01 min 1.721275741199e+00 max/min 4.238720382432e+01 6 KSP preconditioned resid norm 1.021951470305e+03 true resid norm 1.587225289158e+01 ||r(i)||/||b|| 8.878998742895e-01 6 KSP Residual norm 1.021951470305e+03 % max 7.313469782749e+01 min 1.113708844536e+00 max/min 6.566769958438e+01 7 KSP preconditioned resid norm 8.968401331418e+02 true resid norm 1.532116106999e+01 ||r(i)||/||b|| 8.570715878170e-01 7 KSP Residual norm 8.968401331418e+02 % max 7.385099946342e+01 min 1.112940159952e+00 max/min 6.635666689088e+01 8 KSP preconditioned resid norm 8.540750681178e+02 true resid norm 1.665367409583e+01 ||r(i)||/||b|| 9.316128741873e-01 8 KSP Residual norm 8.540750681178e+02 % max 9.333712052307e+01 min 9.984044777583e-01 max/min 9.348627996205e+01 9 KSP preconditioned resid norm 8.540461721597e+02 true resid norm 1.669107313551e+01 ||r(i)||/||b|| 9.337049907166e-01 9 KSP Residual norm 8.540461721597e+02 % max 9.673114572055e+01 min 9.309725017804e-01 max/min 1.039033328434e+02 10 KSP preconditioned resid norm 8.540453927813e+02 true resid norm 1.668902252063e+01 ||r(i)||/||b|| 9.335902785387e-01 10 KSP Residual norm 8.540453927813e+02 % max 9.685490817256e+01 min 8.452348922200e-01 max/min 1.145893396783e+02 11 KSP preconditioned resid norm 7.927564518868e+02 true resid norm 1.671721115974e+01 ||r(i)||/||b|| 9.351671617505e-01 11 KSP Residual norm 7.927564518868e+02 % max 1.076430910935e+02 min 8.433341413962e-01 max/min 1.276399066629e+02 12 KSP preconditioned resid norm 4.831253651357e+02 true resid norm 2.683431434549e+01 ||r(i)||/||b|| 1.501121768709e+00 12 KSP Residual norm 4.831253651357e+02 % max 1.079017603981e+02 min 8.354112427301e-01 max/min 1.291600530124e+02 13 KSP preconditioned resid norm 4.093462051807e+02 true resid norm 1.923302214627e+01 ||r(i)||/||b|| 1.075902586894e+00 13 KSP Residual norm 4.093462051807e+02 % max 1.088474472678e+02 min 6.034301796734e-01 max/min 1.803811790233e+02 14 KSP preconditioned resid norm 3.809274390266e+02 true resid norm 2.118308925475e+01 ||r(i)||/||b|| 1.184990083943e+00 14 KSP Residual norm 3.809274390266e+02 % max 1.104675729761e+02 min 6.010582938812e-01 max/min 1.837884513045e+02 15 KSP preconditioned resid norm 2.408316705377e+02 true resid norm 1.238065951026e+01 ||r(i)||/||b|| 6.925788101960e-01 15 KSP Residual norm 2.408316705377e+02 % max 1.215487322437e+02 min 5.131416858605e-01 max/min 2.368716781211e+02 16 KSP preconditioned resid norm 1.979802937570e+02 true resid norm 9.481184679187e+00 ||r(i)||/||b|| 5.303810834083e-01 16 KSP Residual norm 1.979802937570e+02 % max 
1.246827043503e+02 min 4.780929228356e-01 max/min 2.607917799970e+02 17 KSP preconditioned resid norm 1.853329360151e+02 true resid norm 9.049630342765e+00 ||r(i)||/||b|| 5.062397694011e-01 17 KSP Residual norm 1.853329360151e+02 % max 1.252018074901e+02 min 4.612413672912e-01 max/min 2.714453133841e+02 18 KSP preconditioned resid norm 1.853299943955e+02 true resid norm 9.047741838474e+00 ||r(i)||/||b|| 5.061341257515e-01 18 KSP Residual norm 1.853299943955e+02 % max 1.256988083109e+02 min 4.556318323953e-01 max/min 2.758780211867e+02 19 KSP preconditioned resid norm 1.730151601343e+02 true resid norm 9.854026387827e+00 ||r(i)||/||b|| 5.512379906472e-01 19 KSP Residual norm 1.730151601343e+02 % max 1.279580192729e+02 min 3.287110716087e-01 max/min 3.892720091436e+02 20 KSP preconditioned resid norm 1.429145143492e+02 true resid norm 8.228490997826e+00 ||r(i)||/||b|| 4.603049215804e-01 20 KSP Residual norm 1.429145143492e+02 % max 1.321397322884e+02 min 2.823235578054e-01 max/min 4.680435926621e+02 21 KSP preconditioned resid norm 1.345382626439e+02 true resid norm 8.176256473861e+00 ||r(i)||/||b|| 4.573829024078e-01 21 KSP Residual norm 1.345382626439e+02 % max 1.332774949926e+02 min 2.425224324298e-01 max/min 5.495470817166e+02 22 KSP preconditioned resid norm 1.301499631466e+02 true resid norm 8.487706077838e+00 ||r(i)||/||b|| 4.748055119207e-01 22 KSP Residual norm 1.301499631466e+02 % max 1.334143594976e+02 min 2.077893364534e-01 max/min 6.420654773469e+02 23 KSP preconditioned resid norm 1.260084835452e+02 true resid norm 8.288260183397e+00 ||r(i)||/||b|| 4.636484325941e-01 23 KSP Residual norm 1.260084835452e+02 % max 1.342473982017e+02 min 2.010966692943e-01 max/min 6.675764381023e+02 24 KSP preconditioned resid norm 1.255711443195e+02 true resid norm 8.117619099395e+00 ||r(i)||/||b|| 4.541027053386e-01 24 KSP Residual norm 1.255711443195e+02 % max 1.342478258493e+02 min 1.586270065907e-01 max/min 8.463112854147e+02 25 KSP preconditioned resid norm 1.064125166220e+02 true resid norm 8.683750469293e+00 ||r(i)||/||b|| 4.857723098741e-01 25 KSP Residual norm 1.064125166220e+02 % max 1.343100269972e+02 min 1.586061159091e-01 max/min 8.468149303534e+02 26 KSP preconditioned resid norm 9.497012777512e+01 true resid norm 7.776308733811e+00 ||r(i)||/||b|| 4.350096734441e-01 26 KSP Residual norm 9.497012777512e+01 % max 1.346211743671e+02 min 1.408944545921e-01 max/min 9.554753219835e+02 27 KSP preconditioned resid norm 9.449347291209e+01 true resid norm 8.027397390699e+00 ||r(i)||/||b|| 4.490556685785e-01 27 KSP Residual norm 9.449347291209e+01 % max 1.353601106604e+02 min 1.302056396509e-01 max/min 1.039587156311e+03 28 KSP preconditioned resid norm 7.708808620337e+01 true resid norm 8.253756419882e+00 ||r(i)||/||b|| 4.617182789167e-01 28 KSP Residual norm 7.708808620337e+01 % max 1.354170803310e+02 min 1.300840147004e-01 max/min 1.040997086712e+03 29 KSP preconditioned resid norm 6.883976717639e+01 true resid norm 7.200274893950e+00 ||r(i)||/||b|| 4.027861209659e-01 29 KSP Residual norm 6.883976717639e+01 % max 1.359172085675e+02 min 1.060952954746e-01 max/min 1.281086102446e+03 30 KSP preconditioned resid norm 6.671786230822e+01 true resid norm 6.613746362850e+00 ||r(i)||/||b|| 3.699754914612e-01 30 KSP Residual norm 6.671786230822e+01 % max 1.361448012233e+02 min 8.135160167393e-02 max/min 1.673535596373e+03 31 KSP preconditioned resid norm 5.753308015718e+01 true resid norm 7.287752888130e+00 ||r(i)||/||b|| 4.076796732906e-01 31 KSP Residual norm 5.753308015718e+01 % max 
1.363247397022e+02 min 6.837262871767e-02 max/min 1.993849618759e+03 32 KSP preconditioned resid norm 5.188554631220e+01 true resid norm 7.000101427370e+00 ||r(i)||/||b|| 3.915883409767e-01 32 KSP Residual norm 5.188554631220e+01 % max 1.365417051351e+02 min 6.421828602757e-02 max/min 2.126212229901e+03 33 KSP preconditioned resid norm 4.949709579590e+01 true resid norm 6.374314702607e+00 ||r(i)||/||b|| 3.565815931606e-01 33 KSP Residual norm 4.949709579590e+01 % max 1.366317667622e+02 min 6.421330753944e-02 max/min 2.127779614503e+03 34 KSP preconditioned resid norm 4.403827792460e+01 true resid norm 6.147327125810e+00 ||r(i)||/||b|| 3.438838216294e-01 34 KSP Residual norm 4.403827792460e+01 % max 1.370253126927e+02 min 6.368216512224e-02 max/min 2.151706249775e+03 35 KSP preconditioned resid norm 4.140066382940e+01 true resid norm 5.886089041852e+00 ||r(i)||/||b|| 3.292700636777e-01 35 KSP Residual norm 4.140066382940e+01 % max 1.390943209074e+02 min 6.114512372501e-02 max/min 2.274822789352e+03 36 KSP preconditioned resid norm 3.745028333544e+01 true resid norm 5.122854489971e+00 ||r(i)||/||b|| 2.865744320432e-01 36 KSP Residual norm 3.745028333544e+01 % max 1.396462040364e+02 min 5.993486381149e-02 max/min 2.329966152515e+03 37 KSP preconditioned resid norm 3.492028700266e+01 true resid norm 4.982433448736e+00 ||r(i)||/||b|| 2.787192254942e-01 37 KSP Residual norm 3.492028700266e+01 % max 1.418761690073e+02 min 5.972847536278e-02 max/min 2.375352261139e+03 38 KSP preconditioned resid norm 3.068024157121e+01 true resid norm 4.394664243655e+00 ||r(i)||/||b|| 2.458391922143e-01 38 KSP Residual norm 3.068024157121e+01 % max 1.419282962644e+02 min 5.955136512602e-02 max/min 2.383292070032e+03 39 KSP preconditioned resid norm 2.614836504484e+01 true resid norm 3.671741082517e+00 ||r(i)||/||b|| 2.053985951371e-01 39 KSP Residual norm 2.614836504484e+01 % max 1.424590886621e+02 min 5.534838135054e-02 max/min 2.573861876102e+03 40 KSP preconditioned resid norm 2.598742703782e+01 true resid norm 3.707086113835e+00 ||r(i)||/||b|| 2.073758096559e-01 40 KSP Residual norm 2.598742703782e+01 % max 1.428352406023e+02 min 5.401263258718e-02 max/min 2.644478407376e+03 41 KSP preconditioned resid norm 2.271029765350e+01 true resid norm 2.912150121100e+00 ||r(i)||/||b|| 1.629067873414e-01 41 KSP Residual norm 2.271029765350e+01 % max 1.446928584608e+02 min 5.373910075949e-02 max/min 2.692506134562e+03 42 KSP preconditioned resid norm 1.795836259709e+01 true resid norm 2.316834528159e+00 ||r(i)||/||b|| 1.296046062493e-01 42 KSP Residual norm 1.795836259709e+01 % max 1.449375360478e+02 min 5.251640030193e-02 max/min 2.759852831011e+03 43 KSP preconditioned resid norm 1.795769958691e+01 true resid norm 2.311558013062e+00 ||r(i)||/||b|| 1.293094359844e-01 43 KSP Residual norm 1.795769958691e+01 % max 1.449864885595e+02 min 4.685673226983e-02 max/min 3.094250954688e+03 44 KSP preconditioned resid norm 1.688955122733e+01 true resid norm 2.370951018341e+00 ||r(i)||/||b|| 1.326319033292e-01 44 KSP Residual norm 1.688955122733e+01 % max 1.450141764818e+02 min 4.554223489616e-02 max/min 3.184169086398e+03 45 KSP preconditioned resid norm 1.526531678793e+01 true resid norm 1.947726448658e+00 ||r(i)||/||b|| 1.089565596470e-01 45 KSP Residual norm 1.526531678793e+01 % max 1.460128059490e+02 min 4.478596884382e-02 max/min 3.260235509433e+03 46 KSP preconditioned resid norm 1.519131456699e+01 true resid norm 1.914601584750e+00 ||r(i)||/||b|| 1.071035421390e-01 46 KSP Residual norm 1.519131456699e+01 % max 
1.467440844081e+02 min 4.068627190104e-02 max/min 3.606722305867e+03 47 KSP preconditioned resid norm 1.396606833105e+01 true resid norm 1.916085501806e+00 ||r(i)||/||b|| 1.071865530245e-01 47 KSP Residual norm 1.396606833105e+01 % max 1.472676539618e+02 min 4.068014642358e-02 max/min 3.620135788804e+03 48 KSP preconditioned resid norm 1.128785630240e+01 true resid norm 1.818072605558e+00 ||r(i)||/||b|| 1.017036742642e-01 48 KSP Residual norm 1.128785630240e+01 % max 1.476343971960e+02 min 4.003858518052e-02 max/min 3.687303048557e+03 49 KSP preconditioned resid norm 9.686331225178e+00 true resid norm 1.520702063515e+00 ||r(i)||/||b|| 8.506865284031e-02 49 KSP Residual norm 9.686331225178e+00 % max 1.478478861627e+02 min 3.802148926353e-02 max/min 3.888534852961e+03 50 KSP preconditioned resid norm 9.646313260413e+00 true resid norm 1.596152870343e+00 ||r(i)||/||b|| 8.928939972200e-02 50 KSP Residual norm 9.646313260413e+00 % max 1.479986700685e+02 min 3.796210121957e-02 max/min 3.898590049388e+03 51 KSP preconditioned resid norm 9.270552344731e+00 true resid norm 1.442564661256e+00 ||r(i)||/||b|| 8.069761678656e-02 51 KSP Residual norm 9.270552344731e+00 % max 1.482179613835e+02 min 3.763123640547e-02 max/min 3.938694965706e+03 52 KSP preconditioned resid norm 8.025547426875e+00 true resid norm 1.151158202903e+00 ||r(i)||/||b|| 6.439622847661e-02 52 KSP Residual norm 8.025547426875e+00 % max 1.482612345232e+02 min 3.498901666812e-02 max/min 4.237364997409e+03 53 KSP preconditioned resid norm 7.830903379041e+00 true resid norm 1.083539895672e+00 ||r(i)||/||b|| 6.061363460667e-02 53 KSP Residual norm 7.830903379041e+00 % max 1.496916293673e+02 min 3.180425422032e-02 max/min 4.706654283743e+03 54 KSP preconditioned resid norm 7.818451528162e+00 true resid norm 1.101275786636e+00 ||r(i)||/||b|| 6.160578710481e-02 54 KSP Residual norm 7.818451528162e+00 % max 1.497547864750e+02 min 2.963830547627e-02 max/min 5.052744550289e+03 55 KSP preconditioned resid norm 6.716190950888e+00 true resid norm 1.068051260258e+00 ||r(i)||/||b|| 5.974719444025e-02 55 KSP Residual norm 6.716190950888e+00 % max 1.509985715255e+02 min 2.913583319816e-02 max/min 5.182572624524e+03 56 KSP preconditioned resid norm 6.435651273713e+00 true resid norm 9.738533937643e-01 ||r(i)||/||b|| 5.447772989798e-02 56 KSP Residual norm 6.435651273713e+00 % max 1.513751779517e+02 min 2.849133088254e-02 max/min 5.313025866562e+03 57 KSP preconditioned resid norm 6.427111085122e+00 true resid norm 9.515491605865e-01 ||r(i)||/||b|| 5.323002259581e-02 57 KSP Residual norm 6.427111085122e+00 % max 1.514217018182e+02 min 2.805206411012e-02 max/min 5.397880926830e+03 58 KSP preconditioned resid norm 6.330738649886e+00 true resid norm 9.488038114070e-01 ||r(i)||/||b|| 5.307644671670e-02 58 KSP Residual norm 6.330738649886e+00 % max 1.514492565938e+02 min 2.542367477260e-02 max/min 5.957016754989e+03 59 KSP preconditioned resid norm 5.861133560095e+00 true resid norm 9.281566562924e-01 ||r(i)||/||b|| 5.192143699275e-02 59 KSP Residual norm 5.861133560095e+00 % max 1.522146406650e+02 min 2.316380798762e-02 max/min 6.571227008373e+03 60 KSP preconditioned resid norm 5.523892064332e+00 true resid norm 8.250464923972e-01 ||r(i)||/||b|| 4.615341513814e-02 60 KSP Residual norm 5.523892064332e+00 % max 1.522274643717e+02 min 2.316038063761e-02 max/min 6.572753131893e+03 61 KSP preconditioned resid norm 5.345610652504e+00 true resid norm 7.967455833060e-01 ||r(i)||/||b|| 4.457025150056e-02 61 KSP Residual norm 5.345610652504e+00 % max 
1.527689509969e+02 min 2.278107359995e-02 max/min 6.705959239659e+03 62 KSP preconditioned resid norm 4.883527474160e+00 true resid norm 6.939057385062e-01 ||r(i)||/||b|| 3.881735139915e-02 62 KSP Residual norm 4.883527474160e+00 % max 1.527847370492e+02 min 2.215885299007e-02 max/min 6.894974984387e+03 63 KSP preconditioned resid norm 3.982941325093e+00 true resid norm 6.883730294386e-01 ||r(i)||/||b|| 3.850784954587e-02 63 KSP Residual norm 3.982941325093e+00 % max 1.528029462870e+02 min 2.036330717837e-02 max/min 7.503837414449e+03 64 KSP preconditioned resid norm 3.768777539791e+00 true resid norm 6.576940275451e-01 ||r(i)||/||b|| 3.679165449085e-02 64 KSP Residual norm 3.768777539791e+00 % max 1.534842011203e+02 min 2.015613298342e-02 max/min 7.614764262895e+03 65 KSP preconditioned resid norm 3.754783611569e+00 true resid norm 6.240730525064e-01 ||r(i)||/||b|| 3.491088433716e-02 65 KSP Residual norm 3.754783611569e+00 % max 1.536539871370e+02 min 2.002205183069e-02 max/min 7.674237807208e+03 66 KSP preconditioned resid norm 3.242712853326e+00 true resid norm 5.919043028914e-01 ||r(i)||/||b|| 3.311135222698e-02 66 KSP Residual norm 3.242712853326e+00 % max 1.536763506920e+02 min 1.853540050760e-02 max/min 8.290964666717e+03 67 KSP preconditioned resid norm 2.639460944665e+00 true resid norm 5.172526166420e-01 ||r(i)||/||b|| 2.893530845493e-02 67 KSP Residual norm 2.639460944665e+00 % max 1.536900244470e+02 min 1.819164547837e-02 max/min 8.448384981431e+03 68 KSP preconditioned resid norm 2.332848633723e+00 true resid norm 4.860086486067e-01 ||r(i)||/||b|| 2.718750897868e-02 68 KSP Residual norm 2.332848633723e+00 % max 1.537025056230e+02 min 1.794260313013e-02 max/min 8.566343718814e+03 69 KSP preconditioned resid norm 2.210456807969e+00 true resid norm 5.392952883712e-01 ||r(i)||/||b|| 3.016838391000e-02 69 KSP Residual norm 2.210456807969e+00 % max 1.537288460363e+02 min 1.753831098488e-02 max/min 8.765316464556e+03 70 KSP preconditioned resid norm 2.193036145802e+00 true resid norm 5.031209038994e-01 ||r(i)||/||b|| 2.814477505974e-02 70 KSP Residual norm 2.193036145802e+00 % max 1.541461506421e+02 min 1.700508158735e-02 max/min 9.064711030659e+03 71 KSP preconditioned resid norm 1.938788470922e+00 true resid norm 3.360317906409e-01 ||r(i)||/||b|| 1.879774640094e-02 71 KSP Residual norm 1.938788470922e+00 % max 1.545641242905e+02 min 1.655870257931e-02 max/min 9.334313696993e+03 72 KSP preconditioned resid norm 1.450214827437e+00 true resid norm 2.879267785157e-01 ||r(i)||/||b|| 1.610673369402e-02 72 KSP Residual norm 1.450214827437e+00 % max 1.549625503409e+02 min 1.648921859604e-02 max/min 9.397810420079e+03 73 KSP preconditioned resid norm 1.110389920050e+00 true resid norm 2.664130281817e-01 ||r(i)||/||b|| 1.490324630331e-02 73 KSP Residual norm 1.110389920050e+00 % max 1.553700778851e+02 min 1.637230817799e-02 max/min 9.489809023627e+03 74 KSP preconditioned resid norm 8.588797630056e-01 true resid norm 2.680334659775e-01 ||r(i)||/||b|| 1.499389421102e-02 74 KSP Residual norm 8.588797630056e-01 % max 1.560105458550e+02 min 1.627972447765e-02 max/min 9.583119546597e+03 75 KSP preconditioned resid norm 6.037657330005e-01 true resid norm 2.938298885198e-01 ||r(i)||/||b|| 1.643695591681e-02 75 KSP Residual norm 6.037657330005e-01 % max 1.568417947105e+02 min 1.617453208191e-02 max/min 9.696836601902e+03 76 KSP preconditioned resid norm 4.778003132962e-01 true resid norm 3.104440343355e-01 ||r(i)||/||b|| 1.736635756394e-02 76 KSP Residual norm 4.778003132962e-01 % max 
1.582399629678e+02 min 1.614387326366e-02 max/min 9.801858598824e+03 77 KSP preconditioned resid norm 4.523141317161e-01 true resid norm 3.278599563956e-01 ||r(i)||/||b|| 1.834061087967e-02 77 KSP Residual norm 4.523141317161e-01 % max 1.610754319747e+02 min 1.614204788287e-02 max/min 9.978624344533e+03 78 KSP preconditioned resid norm 4.522674526201e-01 true resid norm 3.287567680693e-01 ||r(i)||/||b|| 1.839077886640e-02 78 KSP Residual norm 4.522674526201e-01 % max 1.611450941232e+02 min 1.596556934579e-02 max/min 1.009328829014e+04 79 KSP preconditioned resid norm 4.452139421181e-01 true resid norm 3.198810083892e-01 ||r(i)||/||b|| 1.789426548812e-02 79 KSP Residual norm 4.452139421181e-01 % max 1.619065061486e+02 min 1.456184518464e-02 max/min 1.111854329555e+04 80 KSP preconditioned resid norm 4.217533807980e-01 true resid norm 2.940346905843e-01 ||r(i)||/||b|| 1.644841262233e-02 80 KSP Residual norm 4.217533807980e-01 % max 1.620409915505e+02 min 1.154094228288e-02 max/min 1.404053391644e+04 81 KSP preconditioned resid norm 3.815105809981e-01 true resid norm 2.630686686818e-01 ||r(i)||/||b|| 1.471616155865e-02 81 KSP Residual norm 3.815105809981e-01 % max 1.643140061303e+02 min 9.352799654900e-03 max/min 1.756843000953e+04 82 KSP preconditioned resid norm 3.257333222641e-01 true resid norm 2.350339902943e-01 ||r(i)||/||b|| 1.314789096807e-02 82 KSP Residual norm 3.257333222641e-01 % max 1.661730623934e+02 min 7.309343891643e-03 max/min 2.273433359502e+04 83 KSP preconditioned resid norm 2.653029571859e-01 true resid norm 1.993496448136e-01 ||r(i)||/||b|| 1.115169508568e-02 83 KSP Residual norm 2.653029571859e-01 % max 1.672909675648e+02 min 5.727931297058e-03 max/min 2.920617564857e+04 84 KSP preconditioned resid norm 2.125985080811e-01 true resid norm 1.651488251908e-01 ||r(i)||/||b|| 9.238488205020e-03 84 KSP Residual norm 2.125985080811e-01 % max 1.677205536916e+02 min 4.904920732181e-03 max/min 3.419434540321e+04 85 KSP preconditioned resid norm 1.823170233996e-01 true resid norm 1.538640960567e-01 ||r(i)||/||b|| 8.607216157635e-03 85 KSP Residual norm 1.823170233996e-01 % max 1.684045979154e+02 min 4.411887233900e-03 max/min 3.817064874674e+04 86 KSP preconditioned resid norm 1.568093845918e-01 true resid norm 1.468517035130e-01 ||r(i)||/||b|| 8.214940246925e-03 86 KSP Residual norm 1.568093845918e-01 % max 1.707938893954e+02 min 3.835536173319e-03 max/min 4.452933870980e+04 87 KSP preconditioned resid norm 1.416153038224e-01 true resid norm 1.326462010194e-01 ||r(i)||/||b|| 7.420279024951e-03 87 KSP Residual norm 1.416153038224e-01 % max 1.708004671154e+02 min 3.204409749322e-03 max/min 5.330169375233e+04 88 KSP preconditioned resid norm 1.298764107063e-01 true resid norm 1.273603812747e-01 ||r(i)||/||b|| 7.124588254468e-03 88 KSP Residual norm 1.298764107063e-01 % max 1.708079667291e+02 min 2.881989285724e-03 max/min 5.926738436372e+04 89 KSP preconditioned resid norm 1.226436579597e-01 true resid norm 1.269051556519e-01 ||r(i)||/||b|| 7.099122759682e-03 89 KSP Residual norm 1.226436579597e-01 % max 1.708200970069e+02 min 2.600538747361e-03 max/min 6.568642639154e+04 90 KSP preconditioned resid norm 1.131214284449e-01 true resid norm 1.236585929815e-01 ||r(i)||/||b|| 6.917508806917e-03 90 KSP Residual norm 1.131214284449e-01 % max 1.708228433345e+02 min 2.097699611380e-03 max/min 8.143341516003e+04 91 KSP preconditioned resid norm 1.097877127886e-01 true resid norm 1.208660430826e-01 ||r(i)||/||b|| 6.761292501577e-03 91 KSP Residual norm 1.097877127886e-01 % max 
1.734053778533e+02 min 1.669096618002e-03 max/min 1.038917555659e+05 92 KSP preconditioned resid norm 1.087819681706e-01 true resid norm 1.220967287438e-01 ||r(i)||/||b|| 6.830137526368e-03 92 KSP Residual norm 1.087819681706e-01 % max 1.735898650322e+02 min 1.410331102754e-03 max/min 1.230844761867e+05 93 KSP preconditioned resid norm 1.082792743495e-01 true resid norm 1.231464976286e-01 ||r(i)||/||b|| 6.888861997758e-03 93 KSP Residual norm 1.082792743495e-01 % max 1.744253420547e+02 min 1.160377474118e-03 max/min 1.503177594750e+05 94 KSP preconditioned resid norm 1.082786837568e-01 true resid norm 1.231972007582e-01 ||r(i)||/||b|| 6.891698350150e-03 94 KSP Residual norm 1.082786837568e-01 % max 1.744405973598e+02 min 9.020124135986e-04 max/min 1.933904619603e+05 95 KSP preconditioned resid norm 1.076120024876e-01 true resid norm 1.193390416348e-01 ||r(i)||/||b|| 6.675871458778e-03 95 KSP Residual norm 1.076120024876e-01 % max 1.744577996139e+02 min 6.759449987513e-04 max/min 2.580946673712e+05 96 KSP preconditioned resid norm 1.051338654734e-01 true resid norm 1.113305708771e-01 ||r(i)||/||b|| 6.227874553259e-03 96 KSP Residual norm 1.051338654734e-01 % max 1.754402425643e+02 min 5.544787352320e-04 max/min 3.164057184102e+05 97 KSP preconditioned resid norm 9.747169190114e-02 true resid norm 9.349805869502e-02 ||r(i)||/||b|| 5.230317027375e-03 97 KSP Residual norm 9.747169190114e-02 % max 1.756210909470e+02 min 4.482270392115e-04 max/min 3.918127992813e+05 98 KSP preconditioned resid norm 8.842962288098e-02 true resid norm 7.385511932515e-02 ||r(i)||/||b|| 4.131483514810e-03 98 KSP Residual norm 8.842962288098e-02 % max 1.757463096622e+02 min 3.987817611456e-04 max/min 4.407079931572e+05 99 KSP preconditioned resid norm 7.588962343124e-02 true resid norm 4.759948399141e-02 ||r(i)||/||b|| 2.662733270502e-03 99 KSP Residual norm 7.588962343124e-02 % max 1.758800802719e+02 min 3.549508940132e-04 max/min 4.955053874730e+05 100 KSP preconditioned resid norm 5.762080294705e-02 true resid norm 2.979101328333e-02 ||r(i)||/||b|| 1.666520633833e-03 100 KSP Residual norm 5.762080294705e-02 % max 1.767054165485e+02 min 3.399974141580e-04 max/min 5.197257661094e+05 101 KSP preconditioned resid norm 4.010101211334e-02 true resid norm 1.618156300742e-02 ||r(i)||/||b|| 9.052028000211e-04 101 KSP Residual norm 4.010101211334e-02 % max 1.780221758963e+02 min 3.330380731227e-04 max/min 5.345400128792e+05 102 KSP preconditioned resid norm 2.613215615582e-02 true resid norm 8.507285387866e-03 ||r(i)||/||b|| 4.759007859835e-04 102 KSP Residual norm 2.613215615582e-02 % max 1.780247903707e+02 min 3.267877108360e-04 max/min 5.447719864228e+05 103 KSP preconditioned resid norm 1.756896108812e-02 true resid norm 4.899136161981e-03 ||r(i)||/||b|| 2.740595435358e-04 103 KSP Residual norm 1.756896108812e-02 % max 1.780337300599e+02 min 3.237486252629e-04 max/min 5.499134704135e+05 104 KSP preconditioned resid norm 1.142112016905e-02 true resid norm 3.312246267926e-03 ||r(i)||/||b|| 1.852883182367e-04 104 KSP Residual norm 1.142112016905e-02 % max 1.805569604942e+02 min 3.232308817378e-04 max/min 5.586005876774e+05 105 KSP preconditioned resid norm 7.345608733466e-03 true resid norm 1.880672268508e-03 ||r(i)||/||b|| 1.052055232609e-04 105 KSP Residual norm 7.345608733466e-03 % max 1.817568861663e+02 min 3.229958821968e-04 max/min 5.627219917794e+05 106 KSP preconditioned resid norm 4.810582452316e-03 true resid norm 1.418216565286e-03 ||r(i)||/||b|| 7.933557502107e-05 106 KSP Residual norm 4.810582452316e-03 % max 
1.845797149153e+02 min 3.220343456392e-04 max/min 5.731677922395e+05 107 KSP preconditioned resid norm 3.114068270187e-03 true resid norm 1.177898700463e-03 ||r(i)||/||b|| 6.589210209864e-05 107 KSP Residual norm 3.114068270187e-03 % max 1.850561815508e+02 min 3.216059660171e-04 max/min 5.754127755856e+05 108 KSP preconditioned resid norm 2.293796939903e-03 true resid norm 1.090384984298e-03 ||r(i)||/||b|| 6.099655147246e-05 108 KSP Residual norm 2.293796939903e-03 % max 1.887275970943e+02 min 3.204019028106e-04 max/min 5.890339459248e+05 109 KSP preconditioned resid norm 1.897140553486e-03 true resid norm 1.085409353133e-03 ||r(i)||/||b|| 6.071821276936e-05 109 KSP Residual norm 1.897140553486e-03 % max 1.900700995337e+02 min 3.201092729524e-04 max/min 5.937663029274e+05 110 KSP preconditioned resid norm 1.638607873536e-03 true resid norm 1.080912735078e-03 ||r(i)||/||b|| 6.046667024205e-05 110 KSP Residual norm 1.638607873536e-03 % max 1.941847763082e+02 min 3.201003961856e-04 max/min 6.066371008040e+05 111 KSP preconditioned resid norm 1.445734513987e-03 true resid norm 1.072337220355e-03 ||r(i)||/||b|| 5.998695268109e-05 111 KSP Residual norm 1.445734513987e-03 % max 1.968706347099e+02 min 3.200841174449e-04 max/min 6.150590547303e+05 112 KSP preconditioned resid norm 1.309787940098e-03 true resid norm 1.071880522659e-03 ||r(i)||/||b|| 5.996140483796e-05 112 KSP Residual norm 1.309787940098e-03 % max 2.005995546657e+02 min 3.200727858161e-04 max/min 6.267310547950e+05 113 KSP preconditioned resid norm 1.203600185791e-03 true resid norm 1.068704252153e-03 ||r(i)||/||b|| 5.978372305568e-05 113 KSP Residual norm 1.203600185791e-03 % max 2.044005133646e+02 min 3.200669375682e-04 max/min 6.386180182109e+05 114 KSP preconditioned resid norm 1.120060134211e-03 true resid norm 1.067737533794e-03 ||r(i)||/||b|| 5.972964446230e-05 114 KSP Residual norm 1.120060134211e-03 % max 2.080860182134e+02 min 3.200663739818e-04 max/min 6.501339569812e+05 115 KSP preconditioned resid norm 1.051559355475e-03 true resid norm 1.066440021011e-03 ||r(i)||/||b|| 5.965706110284e-05 115 KSP Residual norm 1.051559355475e-03 % max 2.120208487134e+02 min 3.200660278464e-04 max/min 6.624284687130e+05 116 KSP preconditioned resid norm 9.943175666627e-04 true resid norm 1.065634036988e-03 ||r(i)||/||b|| 5.961197404955e-05 116 KSP Residual norm 9.943175666627e-04 % max 2.158491989548e+02 min 3.200660249513e-04 max/min 6.743896012946e+05 117 KSP preconditioned resid norm 9.454894909455e-04 true resid norm 1.064909725825e-03 ||r(i)||/||b|| 5.957145580713e-05 117 KSP Residual norm 9.454894909455e-04 % max 2.197604049280e+02 min 3.200659829252e-04 max/min 6.866096887885e+05 118 KSP preconditioned resid norm 9.032188050175e-04 true resid norm 1.064337854187e-03 ||r(i)||/||b|| 5.953946508976e-05 118 KSP Residual norm 9.032188050175e-04 % max 2.236450290076e+02 min 3.200659636119e-04 max/min 6.987466786029e+05 119 KSP preconditioned resid norm 8.661523005930e-04 true resid norm 1.063851383334e-03 ||r(i)||/||b|| 5.951225172494e-05 119 KSP Residual norm 8.661523005930e-04 % max 2.275386631435e+02 min 3.200659439058e-04 max/min 7.109118213789e+05 120 KSP preconditioned resid norm 8.333044554060e-04 true resid norm 1.063440179004e-03 ||r(i)||/||b|| 5.948924879800e-05 120 KSP Residual norm 8.333044554060e-04 % max 2.314181044639e+02 min 3.200659279880e-04 max/min 7.230326136825e+05 121 KSP preconditioned resid norm 8.039309218182e-04 true resid norm 1.063085844171e-03 ||r(i)||/||b|| 5.946942717246e-05 121 KSP Residual norm 
8.039309218182e-04 % max 2.352840091878e+02 min 3.200659140872e-04 max/min 7.351111094065e+05 122 KSP preconditioned resid norm 7.774595735272e-04 true resid norm 1.062777951335e-03 ||r(i)||/||b|| 5.945220352987e-05 122 KSP Residual norm 7.774595735272e-04 % max 2.391301402333e+02 min 3.200659020264e-04 max/min 7.471278218621e+05 123 KSP preconditioned resid norm 7.534419441080e-04 true resid norm 1.062507774643e-03 ||r(i)||/||b|| 5.943708974281e-05 123 KSP Residual norm 7.534419441080e-04 % max 2.429535997478e+02 min 3.200658914252e-04 max/min 7.590736978126e+05 124 KSP preconditioned resid norm 7.315209774625e-04 true resid norm 1.062268823392e-03 ||r(i)||/||b|| 5.942372271877e-05 124 KSP Residual norm 7.315209774625e-04 % max 2.467514538222e+02 min 3.200658820413e-04 max/min 7.709395710924e+05 125 KSP preconditioned resid norm 7.114083600074e-04 true resid norm 1.062055971251e-03 ||r(i)||/||b|| 5.941181568889e-05 125 KSP Residual norm 7.114083600074e-04 % max 2.505215705167e+02 min 3.200658736748e-04 max/min 7.827187811071e+05 126 KSP preconditioned resid norm 6.928683961266e-04 true resid norm 1.061865165985e-03 ||r(i)||/||b|| 5.940114196966e-05 126 KSP Residual norm 6.928683961266e-04 % max 2.542622648377e+02 min 3.200658661692e-04 max/min 7.944060636045e+05 127 KSP preconditioned resid norm 6.757062709768e-04 true resid norm 1.061693150669e-03 ||r(i)||/||b|| 5.939151936730e-05 127 KSP Residual norm 6.757062709768e-04 % max 2.579722822206e+02 min 3.200658593982e-04 max/min 8.059974990947e+05 128 KSP preconditioned resid norm 6.597593638309e-04 true resid norm 1.061537280201e-03 ||r(i)||/||b|| 5.938279991397e-05 128 KSP Residual norm 6.597593638309e-04 % max 2.616507118184e+02 min 3.200658532588e-04 max/min 8.174902419435e+05 129 KSP preconditioned resid norm 6.448907155810e-04 true resid norm 1.061395383620e-03 ||r(i)||/||b|| 5.937486216514e-05 129 KSP Residual norm 6.448907155810e-04 % max 2.652969302909e+02 min 3.200658476664e-04 max/min 8.288823447585e+05 130 KSP preconditioned resid norm 6.309840465804e-04 true resid norm 1.061265662452e-03 ||r(i)||/||b|| 5.936760551361e-05 130 KSP Residual norm 6.309840465804e-04 % max 2.689105505574e+02 min 3.200658425513e-04 max/min 8.401725982813e+05 131 KSP preconditioned resid norm 6.179399078869e-04 true resid norm 1.061146614178e-03 ||r(i)||/||b|| 5.936094590780e-05 131 KSP Residual norm 6.179399078869e-04 % max 2.724913796166e+02 min 3.200658378548e-04 max/min 8.513603996069e+05 132 KSP preconditioned resid norm 6.056726732840e-04 true resid norm 1.061036973709e-03 ||r(i)||/||b|| 5.935481257818e-05 132 KSP Residual norm 6.056726732840e-04 % max 2.760393830832e+02 min 3.200658335276e-04 max/min 8.624456413884e+05 133 KSP preconditioned resid norm 5.941081632891e-04 true resid norm 1.060935668299e-03 ||r(i)||/||b|| 5.934914551493e-05 133 KSP Residual norm 5.941081632891e-04 % max 2.795546556158e+02 min 3.200658295276e-04 max/min 8.734286194450e+05 134 KSP preconditioned resid norm 5.831817499918e-04 true resid norm 1.060841782314e-03 ||r(i)||/||b|| 5.934389349718e-05 134 KSP Residual norm 5.831817499918e-04 % max 2.830373962684e+02 min 3.200658258193e-04 max/min 8.843099557533e+05 135 KSP preconditioned resid norm 5.728368317981e-04 true resid norm 1.060754529477e-03 ||r(i)||/||b|| 5.933901254021e-05 135 KSP Residual norm 5.728368317981e-04 % max 2.864878879810e+02 min 3.200658223718e-04 max/min 8.950905343719e+05 136 KSP preconditioned resid norm 5.630235956754e-04 true resid norm 1.060673230868e-03 ||r(i)||/||b|| 5.933446466506e-05 136 
KSP Residual norm 5.630235956754e-04 % max 2.899064805338e+02 min 3.200658191584e-04 max/min 9.057714481857e+05 137 KSP preconditioned resid norm 5.536980049705e-04 true resid norm 1.060597297101e-03 ||r(i)||/||b|| 5.933021690118e-05 137 KSP Residual norm 5.536980049705e-04 % max 2.932935763968e+02 min 3.200658161562e-04 max/min 9.163539546933e+05 138 KSP preconditioned resid norm 5.448209657758e-04 true resid norm 1.060526214124e-03 ||r(i)||/||b|| 5.932624049239e-05 138 KSP Residual norm 5.448209657758e-04 % max 2.966496189942e+02 min 3.200658133448e-04 max/min 9.268394393456e+05 139 KSP preconditioned resid norm 5.363576357737e-04 true resid norm 1.060459531517e-03 ||r(i)||/||b|| 5.932251024194e-05 139 KSP Residual norm 5.363576357737e-04 % max 2.999750829834e+02 min 3.200658107070e-04 max/min 9.372293851716e+05 140 KSP preconditioned resid norm 5.282768476492e-04 true resid norm 1.060396852955e-03 ||r(i)||/||b|| 5.931900397932e-05 140 KSP Residual norm 5.282768476492e-04 % max 3.032704662136e+02 min 3.200658082268e-04 max/min 9.475253476584e+05 141 KSP preconditioned resid norm 5.205506252810e-04 true resid norm 1.060337828321e-03 ||r(i)||/||b|| 5.931570211878e-05 141 KSP Residual norm 5.205506252810e-04 % max 3.065362830844e+02 min 3.200658058907e-04 max/min 9.577289339963e+05 142 KSP preconditioned resid norm 5.131537755696e-04 true resid norm 1.060282147141e-03 ||r(i)||/||b|| 5.931258729233e-05 142 KSP Residual norm 5.131537755696e-04 % max 3.097730590695e+02 min 3.200658036863e-04 max/min 9.678417859756e+05 143 KSP preconditioned resid norm 5.060635423123e-04 true resid norm 1.060229533174e-03 ||r(i)||/||b|| 5.930964404698e-05 143 KSP Residual norm 5.060635423123e-04 % max 3.129813262144e+02 min 3.200658016030e-04 max/min 9.778655659143e+05 144 KSP preconditioned resid norm 4.992593112791e-04 true resid norm 1.060179739774e-03 ||r(i)||/||b|| 5.930685858524e-05 144 KSP Residual norm 4.992593112791e-04 % max 3.161616194433e+02 min 3.200657996311e-04 max/min 9.878019451242e+05 145 KSP preconditioned resid norm 4.927223577718e-04 true resid norm 1.060132546053e-03 ||r(i)||/||b|| 5.930421855048e-05 145 KSP Residual norm 4.927223577718e-04 % max 3.193144735429e+02 min 3.200657977617e-04 max/min 9.976525944849e+05 146 KSP preconditioned resid norm 4.864356296191e-04 true resid norm 1.060087753599e-03 ||r(i)||/||b|| 5.930171284355e-05 146 KSP Residual norm 4.864356296191e-04 % max 3.224404207086e+02 min 3.200657959872e-04 max/min 1.007419176779e+06 147 KSP preconditioned resid norm 4.803835598739e-04 true resid norm 1.060045183705e-03 ||r(i)||/||b|| 5.929933146747e-05 147 KSP Residual norm 4.803835598739e-04 % max 3.255399885620e+02 min 3.200657943002e-04 max/min 1.017103340498e+06 148 KSP preconditioned resid norm 4.745519045267e-04 true resid norm 1.060004674955e-03 ||r(i)||/||b|| 5.929706539253e-05 148 KSP Residual norm 4.745519045267e-04 % max 3.286136985587e+02 min 3.200657926948e-04 max/min 1.026706714866e+06 149 KSP preconditioned resid norm 4.689276013781e-04 true resid norm 1.059966081181e-03 ||r(i)||/||b|| 5.929490644211e-05 149 KSP Residual norm 4.689276013781e-04 % max 3.316620647237e+02 min 3.200657911651e-04 max/min 1.036230905891e+06 150 KSP preconditioned resid norm 4.634986468865e-04 true resid norm 1.059929269736e-03 ||r(i)||/||b|| 5.929284719586e-05 150 KSP Residual norm 4.634986468865e-04 % max 3.346855926588e+02 min 3.200657897058e-04 max/min 1.045677493263e+06 151 KSP preconditioned resid norm 4.582539883466e-04 true resid norm 1.059894119959e-03 ||r(i)||/||b|| 
5.929088090393e-05 151 KSP Residual norm 4.582539883466e-04 % max 3.376847787762e+02 min 3.200657883122e-04 max/min 1.055048027960e+06 152 KSP preconditioned resid norm 4.531834291922e-04 true resid norm 1.059860521790e-03 ||r(i)||/||b|| 5.928900140955e-05 152 KSP Residual norm 4.531834291922e-04 % max 3.406601097208e+02 min 3.200657869798e-04 max/min 1.064344030443e+06 153 KSP preconditioned resid norm 4.482775455769e-04 true resid norm 1.059828374748e-03 ||r(i)||/||b|| 5.928720309181e-05 153 KSP Residual norm 4.482775455769e-04 % max 3.436120619489e+02 min 3.200657857049e-04 max/min 1.073566989337e+06 154 KSP preconditioned resid norm 4.435276126791e-04 true resid norm 1.059797586777e-03 ||r(i)||/||b|| 5.928548080094e-05 154 KSP Residual norm 4.435276126791e-04 % max 3.465411014367e+02 min 3.200657844837e-04 max/min 1.082718360526e+06 155 KSP preconditioned resid norm 4.389255394186e-04 true resid norm 1.059768073502e-03 ||r(i)||/||b|| 5.928382981713e-05 155 KSP Residual norm 4.389255394186e-04 % max 3.494476834965e+02 min 3.200657833130e-04 max/min 1.091799566575e+06 156 KSP preconditioned resid norm 4.344638104739e-04 true resid norm 1.059739757336e-03 ||r(i)||/||b|| 5.928224580001e-05 156 KSP Residual norm 4.344638104739e-04 % max 3.523322526810e+02 min 3.200657821896e-04 max/min 1.100811996430e+06 157 KSP preconditioned resid norm 4.301354346542e-04 true resid norm 1.059712566921e-03 ||r(i)||/||b|| 5.928072475785e-05 157 KSP Residual norm 4.301354346542e-04 % max 3.551952427605e+02 min 3.200657811108e-04 max/min 1.109757005350e+06 158 KSP preconditioned resid norm 4.259338988194e-04 true resid norm 1.059686436414e-03 ||r(i)||/||b|| 5.927926300733e-05 158 KSP Residual norm 4.259338988194e-04 % max 3.580370767604e+02 min 3.200657800738e-04 max/min 1.118635915023e+06 159 KSP preconditioned resid norm 4.218531266563e-04 true resid norm 1.059661305045e-03 ||r(i)||/||b|| 5.927785714894e-05 159 KSP Residual norm 4.218531266563e-04 % max 3.608581670458e+02 min 3.200657790764e-04 max/min 1.127450013829e+06 160 KSP preconditioned resid norm 4.178874417186e-04 true resid norm 1.059637116561e-03 ||r(i)||/||b|| 5.927650403596e-05 160 KSP Residual norm 4.178874417186e-04 % max 3.636589154466e+02 min 3.200657781164e-04 max/min 1.136200557232e+06 161 KSP preconditioned resid norm 4.140315342180e-04 true resid norm 1.059613818894e-03 ||r(i)||/||b|| 5.927520075559e-05 161 KSP Residual norm 4.140315342180e-04 % max 3.664397134134e+02 min 3.200657771916e-04 max/min 1.144888768267e+06 162 KSP preconditioned resid norm 4.102804311252e-04 true resid norm 1.059591363713e-03 ||r(i)||/||b|| 5.927394460419e-05 162 KSP Residual norm 4.102804311252e-04 % max 3.692009421979e+02 min 3.200657763002e-04 max/min 1.153515838106e+06 163 KSP preconditioned resid norm 4.066294691985e-04 true resid norm 1.059569706138e-03 ||r(i)||/||b|| 5.927273307117e-05 163 KSP Residual norm 4.066294691985e-04 % max 3.719429730532e+02 min 3.200657754405e-04 max/min 1.162082926678e+06 164 KSP preconditioned resid norm 4.030742706055e-04 true resid norm 1.059548804412e-03 ||r(i)||/||b|| 5.927156382068e-05 164 KSP Residual norm 4.030742706055e-04 % max 3.746661674483e+02 min 3.200657746106e-04 max/min 1.170591163345e+06 165 KSP preconditioned resid norm 3.996107208498e-04 true resid norm 1.059528619651e-03 ||r(i)||/||b|| 5.927043467745e-05 165 KSP Residual norm 3.996107208498e-04 % max 3.773708772938e+02 min 3.200657738091e-04 max/min 1.179041647605e+06 166 KSP preconditioned resid norm 3.962349487489e-04 true resid norm 1.059509115572e-03 
||r(i)||/||b|| 5.926934361186e-05 166 KSP Residual norm 3.962349487489e-04 % max 3.800574451754e+02 min 3.200657730346e-04 max/min 1.187435449820e+06 167 KSP preconditioned resid norm 3.929433082418e-04 true resid norm 1.059490258328e-03 ||r(i)||/||b|| 5.926828873043e-05 167 KSP Residual norm 3.929433082418e-04 % max 3.827262045917e+02 min 3.200657722858e-04 max/min 1.195773611962e+06 168 KSP preconditioned resid norm 3.897323618319e-04 true resid norm 1.059472016256e-03 ||r(i)||/||b|| 5.926726826196e-05 168 KSP Residual norm 3.897323618319e-04 % max 3.853774801959e+02 min 3.200657715611e-04 max/min 1.204057148368e+06 169 KSP preconditioned resid norm 3.865988654946e-04 true resid norm 1.059454359751e-03 ||r(i)||/||b|| 5.926628055033e-05 169 KSP Residual norm 3.865988654946e-04 % max 3.880115880375e+02 min 3.200657708600e-04 max/min 1.212287046487e+06 170 KSP preconditioned resid norm 3.835397548978e-04 true resid norm 1.059437261066e-03 ||r(i)||/||b|| 5.926532404338e-05 170 KSP Residual norm 3.835397548978e-04 % max 3.906288358040e+02 min 3.200657701808e-04 max/min 1.220464267652e+06 171 KSP preconditioned resid norm 3.805521328045e-04 true resid norm 1.059420694153e-03 ||r(i)||/||b|| 5.926439728400e-05 171 KSP Residual norm 3.805521328045e-04 % max 3.932295230610e+02 min 3.200657695227e-04 max/min 1.228589747812e+06 172 KSP preconditioned resid norm 3.776332575372e-04 true resid norm 1.059404634604e-03 ||r(i)||/||b|| 5.926349890668e-05 172 KSP Residual norm 3.776332575372e-04 % max 3.958139414889e+02 min 3.200657688847e-04 max/min 1.236664398283e+06 173 KSP preconditioned resid norm 3.747805324013e-04 true resid norm 1.059389059461e-03 ||r(i)||/||b|| 5.926262762725e-05 173 KSP Residual norm 3.747805324013e-04 % max 3.983823751168e+02 min 3.200657682658e-04 max/min 1.244689106477e+06 174 KSP preconditioned resid norm 3.719914959745e-04 true resid norm 1.059373947132e-03 ||r(i)||/||b|| 5.926178223783e-05 174 KSP Residual norm 3.719914959745e-04 % max 4.009351005517e+02 min 3.200657676654e-04 max/min 1.252664736614e+06 175 KSP preconditioned resid norm 3.692638131792e-04 true resid norm 1.059359277291e-03 ||r(i)||/||b|| 5.926096160129e-05 175 KSP Residual norm 3.692638131792e-04 % max 4.034723872035e+02 min 3.200657670824e-04 max/min 1.260592130428e+06 176 KSP preconditioned resid norm 3.665952670646e-04 true resid norm 1.059345030772e-03 ||r(i)||/||b|| 5.926016464562e-05 176 KSP Residual norm 3.665952670646e-04 % max 4.059944975044e+02 min 3.200657665165e-04 max/min 1.268472107852e+06 177 KSP preconditioned resid norm 3.639837512333e-04 true resid norm 1.059331189535e-03 ||r(i)||/||b|| 5.925939036157e-05 177 KSP Residual norm 3.639837512333e-04 % max 4.085016871234e+02 min 3.200657659664e-04 max/min 1.276305467690e+06 178 KSP preconditioned resid norm 3.614272628526e-04 true resid norm 1.059317736504e-03 ||r(i)||/||b|| 5.925863779385e-05 178 KSP Residual norm 3.614272628526e-04 % max 4.109942051749e+02 min 3.200657654318e-04 max/min 1.284092988266e+06 179 KSP preconditioned resid norm 3.589238961979e-04 true resid norm 1.059304655605e-03 ||r(i)||/||b|| 5.925790604337e-05 179 KSP Residual norm 3.589238961979e-04 % max 4.134722944217e+02 min 3.200657649118e-04 max/min 1.291835428058e+06 180 KSP preconditioned resid norm 3.564718366825e-04 true resid norm 1.059291931579e-03 ||r(i)||/||b|| 5.925719425653e-05 180 KSP Residual norm 3.564718366825e-04 % max 4.159361914728e+02 min 3.200657644063e-04 max/min 1.299533526319e+06 181 KSP preconditioned resid norm 3.540693553287e-04 true resid norm 
1.059279550018e-03 ||r(i)||/||b|| 5.925650162730e-05 181 KSP Residual norm 3.540693553287e-04 % max 4.183861269743e+02 min 3.200657639142e-04 max/min 1.307188003670e+06 182 KSP preconditioned resid norm 3.517148036444e-04 true resid norm 1.059267497309e-03 ||r(i)||/||b|| 5.925582739414e-05 182 KSP Residual norm 3.517148036444e-04 % max 4.208223257952e+02 min 3.200657634351e-04 max/min 1.314799562686e+06 183 KSP preconditioned resid norm 3.494066088688e-04 true resid norm 1.059255760488e-03 ||r(i)||/||b|| 5.925517083190e-05 183 KSP Residual norm 3.494066088688e-04 % max 4.232450072075e+02 min 3.200657629684e-04 max/min 1.322368888450e+06 184 KSP preconditioned resid norm 3.471432695577e-04 true resid norm 1.059244327320e-03 ||r(i)||/||b|| 5.925453125615e-05 184 KSP Residual norm 3.471432695577e-04 % max 4.256543850606e+02 min 3.200657625139e-04 max/min 1.329896649105e+06 185 KSP preconditioned resid norm 3.449233514781e-04 true resid norm 1.059233186159e-03 ||r(i)||/||b|| 5.925390801536e-05 185 KSP Residual norm 3.449233514781e-04 % max 4.280506679492e+02 min 3.200657620711e-04 max/min 1.337383496377e+06 186 KSP preconditioned resid norm 3.427454837895e-04 true resid norm 1.059222325963e-03 ||r(i)||/||b|| 5.925330049183e-05 186 KSP Residual norm 3.427454837895e-04 % max 4.304340593770e+02 min 3.200657616393e-04 max/min 1.344830066085e+06 187 KSP preconditioned resid norm 3.406083554860e-04 true resid norm 1.059211736238e-03 ||r(i)||/||b|| 5.925270809861e-05 187 KSP Residual norm 3.406083554860e-04 % max 4.328047579144e+02 min 3.200657612184e-04 max/min 1.352236978635e+06 188 KSP preconditioned resid norm 3.385107120797e-04 true resid norm 1.059201407008e-03 ||r(i)||/||b|| 5.925213027752e-05 188 KSP Residual norm 3.385107120797e-04 % max 4.351629573503e+02 min 3.200657608077e-04 max/min 1.359604839494e+06 189 KSP preconditioned resid norm 3.364513525062e-04 true resid norm 1.059191328770e-03 ||r(i)||/||b|| 5.925156649705e-05 189 KSP Residual norm 3.364513525062e-04 % max 4.375088468402e+02 min 3.200657604069e-04 max/min 1.366934239651e+06 190 KSP preconditioned resid norm 3.344291262348e-04 true resid norm 1.059181492481e-03 ||r(i)||/||b|| 5.925101625132e-05 190 KSP Residual norm 3.344291262348e-04 % max 4.398426110478e+02 min 3.200657600159e-04 max/min 1.374225756063e+06 191 KSP preconditioned resid norm 3.324429305670e-04 true resid norm 1.059171889545e-03 ||r(i)||/||b|| 5.925047905943e-05 191 KSP Residual norm 3.324429305670e-04 % max 4.421644302826e+02 min 3.200657596338e-04 max/min 1.381479952084e+06 192 KSP preconditioned resid norm 3.304917081101e-04 true resid norm 1.059162511741e-03 ||r(i)||/||b|| 5.924995446149e-05 192 KSP Residual norm 3.304917081101e-04 % max 4.444744806328e+02 min 3.200657592613e-04 max/min 1.388697377872e+06 193 KSP preconditioned resid norm 3.285744444114e-04 true resid norm 1.059153351271e-03 ||r(i)||/||b|| 5.924944202127e-05 193 KSP Residual norm 3.285744444114e-04 % max 4.467729340931e+02 min 3.200657588967e-04 max/min 1.395878570808e+06 194 KSP preconditioned resid norm 3.266901657422e-04 true resid norm 1.059144400627e-03 ||r(i)||/||b|| 5.924894131886e-05 194 KSP Residual norm 3.266901657422e-04 % max 4.490599586887e+02 min 3.200657585410e-04 max/min 1.403024055856e+06 195 KSP preconditioned resid norm 3.248379370193e-04 true resid norm 1.059135652724e-03 ||r(i)||/||b|| 5.924845195782e-05 195 KSP Residual norm 3.248379370193e-04 % max 4.513357185943e+02 min 3.200657581935e-04 max/min 1.410134345960e+06 196 KSP preconditioned resid norm 3.230168598555e-04 
true resid norm 1.059127100716e-03 ||r(i)||/||b|| 5.924797355524e-05
196 KSP Residual norm 3.230168598555e-04 % max 4.536003742498e+02 min 3.200657578528e-04 max/min 1.417209942397e+06
197 KSP preconditioned resid norm 3.212260707284e-04 true resid norm 1.059118738110e-03 ||r(i)||/||b|| 5.924750574787e-05
197 KSP Residual norm 3.212260707284e-04 % max 4.558540824713e+02 min 3.200657575203e-04 max/min 1.424251335110e+06
198 KSP preconditioned resid norm 3.194647392600e-04 true resid norm 1.059110558683e-03 ||r(i)||/||b|| 5.924704818760e-05
198 KSP Residual norm 3.194647392600e-04 % max 4.580969965587e+02 min 3.200657571950e-04 max/min 1.431259003067e+06
199 KSP preconditioned resid norm 3.177320665985e-04 true resid norm 1.059102556481e-03 ||r(i)||/||b|| 5.924660054140e-05
199 KSP Residual norm 3.177320665985e-04 % max 4.603292663993e+02 min 3.200657568766e-04 max/min 1.438233414569e+06
200 KSP preconditioned resid norm 3.160272838961e-04 true resid norm 1.059094725800e-03 ||r(i)||/||b|| 5.924616249011e-05
200 KSP Residual norm 3.160272838961e-04 % max 4.625510385677e+02 min 3.200657565653e-04 max/min 1.445175027567e+06

Thanks,
Meng

------------------ Original ------------------
From: "Barry Smith";;
Send time: Friday, Apr 25, 2014 6:54 AM
To: "Oo ";
Cc: "Dave May"; "petsc-users";
Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3

Run the bad case again with -ksp_gmres_restart 200 -ksp_max_it 200 and send the output

On Apr 24, 2014, at 5:46 PM, Oo wrote:

>
> Thanks,
> The following is the output at the beginning.
>
>
> 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03
> 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00
> 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06
> 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00
> 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02
> 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00
> 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03
> 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00
> 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05
> 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00
>
>
> When solving the linear system:
> Output:
> .........
> ........
> ........
> +00 max/min 7.343521316293e+01 > 9636 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064989678e-01 ||r(i)||/||b|| 8.917260288210e-03 > 9636 KSP Residual norm 1.080641720588e+00 % max 9.802496207537e+01 min 1.168945135768e+00 max/min 8.385762434515e+01 > 9637 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064987918e-01 ||r(i)||/||b|| 8.917260278361e-03 > 9637 KSP Residual norm 1.080641720588e+00 % max 1.122401280488e+02 min 1.141681830513e+00 max/min 9.831121512938e+01 > 9638 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064986859e-01 ||r(i)||/||b|| 8.917260272440e-03 > 9638 KSP Residual norm 1.080641720588e+00 % max 1.134941067042e+02 min 1.090790142559e+00 max/min 1.040476094127e+02 > 9639 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064442988e-01 ||r(i)||/||b|| 8.917257230005e-03 > 9639 KSP Residual norm 1.080641720588e+00 % max 1.139914662925e+02 min 4.119649156568e-01 max/min 2.767018791170e+02 > 9640 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063809183e-01 ||r(i)||/||b|| 8.917253684473e-03 > 9640 KSP Residual norm 1.080641720586e+00 % max 1.140011421526e+02 min 2.894486589274e-01 max/min 3.938561766878e+02 > 9641 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063839202e-01 ||r(i)||/||b|| 8.917253852403e-03 > 9641 KSP Residual norm 1.080641720586e+00 % max 1.140392299942e+02 min 2.880532973299e-01 max/min 3.958962839563e+02 > 9642 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594064091676e-01 ||r(i)||/||b|| 8.917255264750e-03 > 9642 KSP Residual norm 1.080641720586e+00 % max 1.140392728591e+02 min 2.501717295613e-01 max/min 4.558439639005e+02 > 9643 KSP preconditioned resid norm 1.080641720583e+00 true resid norm 1.594064099334e-01 ||r(i)||/||b|| 8.917255307591e-03 > 9643 KSP Residual norm 1.080641720583e+00 % max 1.141360429432e+02 min 2.500714638111e-01 max/min 4.564137035220e+02 > 9644 KSP preconditioned resid norm 1.080641720582e+00 true resid norm 1.594064169337e-01 ||r(i)||/||b|| 8.917255699186e-03 > 9644 KSP Residual norm 1.080641720582e+00 % max 1.141719168213e+02 min 2.470526471293e-01 max/min 4.621359784969e+02 > 9645 KSP preconditioned resid norm 1.080641720554e+00 true resid norm 1.594064833602e-01 ||r(i)||/||b|| 8.917259415111e-03 > 9645 KSP Residual norm 1.080641720554e+00 % max 1.141770017757e+02 min 2.461729098264e-01 max/min 4.638081495493e+02 > 9646 KSP preconditioned resid norm 1.080641720550e+00 true resid norm 1.594066163854e-01 ||r(i)||/||b|| 8.917266856592e-03 > 9646 KSP Residual norm 1.080641720550e+00 % max 1.150251695783e+02 min 1.817293289064e-01 max/min 6.329477485583e+02 > 9647 KSP preconditioned resid norm 1.080641720425e+00 true resid norm 1.594070759575e-01 ||r(i)||/||b|| 8.917292565231e-03 > 9647 KSP Residual norm 1.080641720425e+00 % max 1.153670774825e+02 min 1.757825842976e-01 max/min 6.563055034347e+02 > 9648 KSP preconditioned resid norm 1.080641720405e+00 true resid norm 1.594072309986e-01 ||r(i)||/||b|| 8.917301238287e-03 > 9648 KSP Residual norm 1.080641720405e+00 % max 1.154419449950e+02 min 1.682003217110e-01 max/min 6.863360534671e+02 > 9649 KSP preconditioned resid norm 1.080641719971e+00 true resid norm 1.594088666650e-01 ||r(i)||/||b|| 8.917392738093e-03 > 9649 KSP Residual norm 1.080641719971e+00 % max 1.154420890958e+02 min 1.254364806923e-01 max/min 9.203230867027e+02 > 9650 KSP preconditioned resid norm 1.080641719766e+00 true resid norm 1.594089470619e-01 ||r(i)||/||b|| 
8.917397235527e-03 > 9650 KSP Residual norm 1.080641719766e+00 % max 1.155791388935e+02 min 1.115280748954e-01 max/min 1.036323266603e+03 > 9651 KSP preconditioned resid norm 1.080641719668e+00 true resid norm 1.594099325489e-01 ||r(i)||/||b|| 8.917452364041e-03 > 9651 KSP Residual norm 1.080641719668e+00 % max 1.156952656131e+02 min 9.753165869338e-02 max/min 1.186232933624e+03 > 9652 KSP preconditioned resid norm 1.080641719560e+00 true resid norm 1.594104650490e-01 ||r(i)||/||b|| 8.917482152303e-03 > 9652 KSP Residual norm 1.080641719560e+00 % max 1.157175173166e+02 min 8.164906465197e-02 max/min 1.417254659437e+03 > 9653 KSP preconditioned resid norm 1.080641719545e+00 true resid norm 1.594102433389e-01 ||r(i)||/||b|| 8.917469749751e-03 > 9653 KSP Residual norm 1.080641719545e+00 % max 1.157284977956e+02 min 8.043379142473e-02 max/min 1.438804459490e+03 > 9654 KSP preconditioned resid norm 1.080641719502e+00 true resid norm 1.594103748106e-01 ||r(i)||/||b|| 8.917477104328e-03 > 9654 KSP Residual norm 1.080641719502e+00 % max 1.158252103352e+02 min 8.042977537341e-02 max/min 1.440078749412e+03 > 9655 KSP preconditioned resid norm 1.080641719500e+00 true resid norm 1.594103839160e-01 ||r(i)||/||b|| 8.917477613692e-03 > 9655 KSP Residual norm 1.080641719500e+00 % max 1.158319413225e+02 min 7.912584859399e-02 max/min 1.463895090931e+03 > 9656 KSP preconditioned resid norm 1.080641719298e+00 true resid norm 1.594103559180e-01 ||r(i)||/||b|| 8.917476047469e-03 > 9656 KSP Residual norm 1.080641719298e+00 % max 1.164752277567e+02 min 7.459488142962e-02 max/min 1.561437266532e+03 > 9657 KSP preconditioned resid norm 1.080641719265e+00 true resid norm 1.594102184171e-01 ||r(i)||/||b|| 8.917468355617e-03 > 9657 KSP Residual norm 1.080641719265e+00 % max 1.166579038733e+02 min 7.458594814570e-02 max/min 1.564073485335e+03 > 9658 KSP preconditioned resid norm 1.080641717458e+00 true resid norm 1.594091333285e-01 ||r(i)||/||b|| 8.917407655346e-03 > 9658 KSP Residual norm 1.080641717458e+00 % max 1.166903646829e+02 min 6.207842503132e-02 max/min 1.879724954749e+03 > 9659 KSP preconditioned resid norm 1.080641710951e+00 true resid norm 1.594084758922e-01 ||r(i)||/||b|| 8.917370878110e-03 > 9659 KSP Residual norm 1.080641710951e+00 % max 1.166911765130e+02 min 4.511759655558e-02 max/min 2.586378384966e+03 > 9660 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892570e-01 ||r(i)||/||b|| 8.917310091324e-03 > 9660 KSP Residual norm 1.080641710473e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9661 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892624e-01 ||r(i)||/||b|| 8.917310091626e-03 > 9661 KSP Residual norm 1.080641710473e+00 % max 3.063497860835e+01 min 3.063497860835e+01 max/min 1.000000000000e+00 > 9662 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892715e-01 ||r(i)||/||b|| 8.917310092135e-03 > 9662 KSP Residual norm 1.080641710473e+00 % max 3.066567116490e+01 min 3.845920135843e+00 max/min 7.973559013643e+00 > 9663 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893803e-01 ||r(i)||/||b|| 8.917310098220e-03 > 9663 KSP Residual norm 1.080641710473e+00 % max 3.713314039929e+01 min 1.336313376350e+00 max/min 2.778774878443e+01 > 9664 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893840e-01 ||r(i)||/||b|| 8.917310098430e-03 > 9664 KSP Residual norm 1.080641710473e+00 % max 4.496286107838e+01 min 1.226793755688e+00 max/min 3.665070911057e+01 
> 9665 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893901e-01 ||r(i)||/||b|| 8.917310098770e-03 > 9665 KSP Residual norm 1.080641710473e+00 % max 8.684753794468e+01 min 1.183106109633e+00 max/min 7.340638108242e+01 > 9666 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073897030e-01 ||r(i)||/||b|| 8.917310116272e-03 > 9666 KSP Residual norm 1.080641710473e+00 % max 9.802657279239e+01 min 1.168685545918e+00 max/min 8.387762913199e+01 > 9667 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073898787e-01 ||r(i)||/||b|| 8.917310126104e-03 > 9667 KSP Residual norm 1.080641710473e+00 % max 1.123342619847e+02 min 1.141262650540e+00 max/min 9.842980661068e+01 > 9668 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073899832e-01 ||r(i)||/||b|| 8.917310131950e-03 > 9668 KSP Residual norm 1.080641710473e+00 % max 1.134978630027e+02 min 1.090790451862e+00 max/min 1.040510235573e+02 > 9669 KSP preconditioned resid norm 1.080641710472e+00 true resid norm 1.594074437274e-01 ||r(i)||/||b|| 8.917313138421e-03 > 9669 KSP Residual norm 1.080641710472e+00 % max 1.139911467053e+02 min 4.122424185123e-01 max/min 2.765148407498e+02 > 9670 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075064381e-01 ||r(i)||/||b|| 8.917316646478e-03 > 9670 KSP Residual norm 1.080641710471e+00 % max 1.140007007543e+02 min 2.895766957825e-01 max/min 3.936805081855e+02 > 9671 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075034738e-01 ||r(i)||/||b|| 8.917316480653e-03 > 9671 KSP Residual norm 1.080641710471e+00 % max 1.140387089437e+02 min 2.881955202443e-01 max/min 3.956991033277e+02 > 9672 KSP preconditioned resid norm 1.080641710470e+00 true resid norm 1.594074784660e-01 ||r(i)||/||b|| 8.917315081711e-03 > 9672 KSP Residual norm 1.080641710470e+00 % max 1.140387584514e+02 min 2.501644562253e-01 max/min 4.558551609294e+02 > 9673 KSP preconditioned resid norm 1.080641710468e+00 true resid norm 1.594074777329e-01 ||r(i)||/||b|| 8.917315040700e-03 > 9673 KSP Residual norm 1.080641710468e+00 % max 1.141359894823e+02 min 2.500652210717e-01 max/min 4.564248838488e+02 > 9674 KSP preconditioned resid norm 1.080641710467e+00 true resid norm 1.594074707419e-01 ||r(i)||/||b|| 8.917314649619e-03 > 9674 KSP Residual norm 1.080641710467e+00 % max 1.141719424825e+02 min 2.470352404045e-01 max/min 4.621686456374e+02 > 9675 KSP preconditioned resid norm 1.080641710439e+00 true resid norm 1.594074050132e-01 ||r(i)||/||b|| 8.917310972734e-03 > 9675 KSP Residual norm 1.080641710439e+00 % max 1.141769957478e+02 min 2.461583334135e-01 max/min 4.638355897383e+02 > 9676 KSP preconditioned resid norm 1.080641710435e+00 true resid norm 1.594072732317e-01 ||r(i)||/||b|| 8.917303600825e-03 > 9676 KSP Residual norm 1.080641710435e+00 % max 1.150247840524e+02 min 1.817432478135e-01 max/min 6.328971526384e+02 > 9677 KSP preconditioned resid norm 1.080641710313e+00 true resid norm 1.594068192028e-01 ||r(i)||/||b|| 8.917278202275e-03 > 9677 KSP Residual norm 1.080641710313e+00 % max 1.153658698688e+02 min 1.758229849082e-01 max/min 6.561478291873e+02 > 9678 KSP preconditioned resid norm 1.080641710294e+00 true resid norm 1.594066656020e-01 ||r(i)||/||b|| 8.917269609788e-03 > 9678 KSP Residual norm 1.080641710294e+00 % max 1.154409214869e+02 min 1.682261493961e-01 max/min 6.862245964808e+02 > 9679 KSP preconditioned resid norm 1.080641709867e+00 true resid norm 1.594050456081e-01 ||r(i)||/||b|| 8.917178986714e-03 > 9679 KSP 
Residual norm 1.080641709867e+00 % max 1.154410567101e+02 min 1.254614849745e-01 max/min 9.201314390116e+02 > 9680 KSP preconditioned resid norm 1.080641709664e+00 true resid norm 1.594049650069e-01 ||r(i)||/||b|| 8.917174477851e-03 > 9680 KSP Residual norm 1.080641709664e+00 % max 1.155780217907e+02 min 1.115227545913e-01 max/min 1.036362688622e+03 > 9681 KSP preconditioned resid norm 1.080641709568e+00 true resid norm 1.594039822189e-01 ||r(i)||/||b|| 8.917119500317e-03 > 9681 KSP Residual norm 1.080641709568e+00 % max 1.156949294099e+02 min 9.748012982669e-02 max/min 1.186856538000e+03 > 9682 KSP preconditioned resid norm 1.080641709462e+00 true resid norm 1.594034585197e-01 ||r(i)||/||b|| 8.917090204385e-03 > 9682 KSP Residual norm 1.080641709462e+00 % max 1.157178049396e+02 min 8.161660044005e-02 max/min 1.417821917547e+03 > 9683 KSP preconditioned resid norm 1.080641709447e+00 true resid norm 1.594036816693e-01 ||r(i)||/||b|| 8.917102687458e-03 > 9683 KSP Residual norm 1.080641709447e+00 % max 1.157298376396e+02 min 8.041659530019e-02 max/min 1.439128791856e+03 > 9684 KSP preconditioned resid norm 1.080641709407e+00 true resid norm 1.594035571975e-01 ||r(i)||/||b|| 8.917095724454e-03 > 9684 KSP Residual norm 1.080641709407e+00 % max 1.158244705264e+02 min 8.041209608100e-02 max/min 1.440386162919e+03 > 9685 KSP preconditioned resid norm 1.080641709405e+00 true resid norm 1.594035489927e-01 ||r(i)||/||b|| 8.917095265477e-03 > 9685 KSP Residual norm 1.080641709405e+00 % max 1.158301451854e+02 min 7.912026880308e-02 max/min 1.463975627707e+03 > 9686 KSP preconditioned resid norm 1.080641709207e+00 true resid norm 1.594035774591e-01 ||r(i)||/||b|| 8.917096857899e-03 > 9686 KSP Residual norm 1.080641709207e+00 % max 1.164678839370e+02 min 7.460755571674e-02 max/min 1.561073577845e+03 > 9687 KSP preconditioned resid norm 1.080641709174e+00 true resid norm 1.594037147171e-01 ||r(i)||/||b|| 8.917104536162e-03 > 9687 KSP Residual norm 1.080641709174e+00 % max 1.166488052404e+02 min 7.459794478802e-02 max/min 1.563699986264e+03 > 9688 KSP preconditioned resid norm 1.080641707398e+00 true resid norm 1.594047860513e-01 ||r(i)||/||b|| 8.917164467006e-03 > 9688 KSP Residual norm 1.080641707398e+00 % max 1.166807852257e+02 min 6.210503072987e-02 max/min 1.878765437428e+03 > 9689 KSP preconditioned resid norm 1.080641701010e+00 true resid norm 1.594054414016e-01 ||r(i)||/||b|| 8.917201127549e-03 > 9689 KSP Residual norm 1.080641701010e+00 % max 1.166815140099e+02 min 4.513527468187e-02 max/min 2.585151299782e+03 > 9690 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078534e-01 ||r(i)||/||b|| 8.917260785273e-03 > 9690 KSP Residual norm 1.080641700548e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9691 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078481e-01 ||r(i)||/||b|| 8.917260784977e-03 > 9691 KSP Residual norm 1.080641700548e+00 % max 3.063462923862e+01 min 3.063462923862e+01 max/min 1.000000000000e+00 > 9692 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078391e-01 ||r(i)||/||b|| 8.917260784472e-03 > 9692 KSP Residual norm 1.080641700548e+00 % max 3.066527639129e+01 min 3.844401168431e+00 max/min 7.976606771193e+00 > 9693 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077314e-01 ||r(i)||/||b|| 8.917260778448e-03 > 9693 KSP Residual norm 1.080641700548e+00 % max 3.714560719346e+01 min 1.336559823443e+00 max/min 2.779195255007e+01 > 9694 KSP preconditioned 
resid norm 1.080641700548e+00 true resid norm 1.594065077277e-01 ||r(i)||/||b|| 8.917260778237e-03 > 9694 KSP Residual norm 1.080641700548e+00 % max 4.500980776068e+01 min 1.226798344943e+00 max/min 3.668883965014e+01 > 9695 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077215e-01 ||r(i)||/||b|| 8.917260777894e-03 > 9695 KSP Residual norm 1.080641700548e+00 % max 8.688252795875e+01 min 1.183121330492e+00 max/min 7.343501103356e+01 > 9696 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065074170e-01 ||r(i)||/||b|| 8.917260760857e-03 > 9696 KSP Residual norm 1.080641700548e+00 % max 9.802496799855e+01 min 1.168942571066e+00 max/min 8.385781339895e+01 > 9697 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065072444e-01 ||r(i)||/||b|| 8.917260751202e-03 > 9697 KSP Residual norm 1.080641700548e+00 % max 1.122410641497e+02 min 1.141677885656e+00 max/min 9.831237475989e+01 > 9698 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065071406e-01 ||r(i)||/||b|| 8.917260745398e-03 > 9698 KSP Residual norm 1.080641700548e+00 % max 1.134941426413e+02 min 1.090790153651e+00 max/min 1.040476413006e+02 > 9699 KSP preconditioned resid norm 1.080641700547e+00 true resid norm 1.594064538413e-01 ||r(i)||/||b|| 8.917257763815e-03 > 9699 KSP Residual norm 1.080641700547e+00 % max 1.139914632588e+02 min 4.119681054781e-01 max/min 2.766997292824e+02 > 9700 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063917269e-01 ||r(i)||/||b|| 8.917254289111e-03 > 9700 KSP Residual norm 1.080641700546e+00 % max 1.140011377578e+02 min 2.894500209686e-01 max/min 3.938543081679e+02 > 9701 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063946692e-01 ||r(i)||/||b|| 8.917254453705e-03 > 9701 KSP Residual norm 1.080641700546e+00 % max 1.140392249545e+02 min 2.880548133071e-01 max/min 3.958941829341e+02 > 9702 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594064194159e-01 ||r(i)||/||b|| 8.917255838042e-03 > 9702 KSP Residual norm 1.080641700546e+00 % max 1.140392678944e+02 min 2.501716607446e-01 max/min 4.558440694479e+02 > 9703 KSP preconditioned resid norm 1.080641700543e+00 true resid norm 1.594064201661e-01 ||r(i)||/||b|| 8.917255880013e-03 > 9703 KSP Residual norm 1.080641700543e+00 % max 1.141360426034e+02 min 2.500714053220e-01 max/min 4.564138089137e+02 > 9704 KSP preconditioned resid norm 1.080641700542e+00 true resid norm 1.594064270279e-01 ||r(i)||/||b|| 8.917256263859e-03 > 9704 KSP Residual norm 1.080641700542e+00 % max 1.141719170495e+02 min 2.470524869868e-01 max/min 4.621362789824e+02 > 9705 KSP preconditioned resid norm 1.080641700515e+00 true resid norm 1.594064921269e-01 ||r(i)||/||b|| 8.917259905523e-03 > 9705 KSP Residual norm 1.080641700515e+00 % max 1.141770017414e+02 min 2.461727640845e-01 max/min 4.638084239987e+02 > 9706 KSP preconditioned resid norm 1.080641700511e+00 true resid norm 1.594066225181e-01 ||r(i)||/||b|| 8.917267199657e-03 > 9706 KSP Residual norm 1.080641700511e+00 % max 1.150251667711e+02 min 1.817294170466e-01 max/min 6.329474261263e+02 > 9707 KSP preconditioned resid norm 1.080641700391e+00 true resid norm 1.594070728983e-01 ||r(i)||/||b|| 8.917292394097e-03 > 9707 KSP Residual norm 1.080641700391e+00 % max 1.153670687494e+02 min 1.757829178698e-01 max/min 6.563042083238e+02 > 9708 KSP preconditioned resid norm 1.080641700372e+00 true resid norm 1.594072248425e-01 ||r(i)||/||b|| 8.917300893916e-03 > 9708 KSP Residual norm 
1.080641700372e+00 % max 1.154419380718e+02 min 1.682005223292e-01 max/min 6.863351936912e+02 > 9709 KSP preconditioned resid norm 1.080641699955e+00 true resid norm 1.594088278507e-01 ||r(i)||/||b|| 8.917390566803e-03 > 9709 KSP Residual norm 1.080641699955e+00 % max 1.154420821193e+02 min 1.254367014624e-01 max/min 9.203214113045e+02 > 9710 KSP preconditioned resid norm 1.080641699758e+00 true resid norm 1.594089066545e-01 ||r(i)||/||b|| 8.917394975116e-03 > 9710 KSP Residual norm 1.080641699758e+00 % max 1.155791306030e+02 min 1.115278858784e-01 max/min 1.036324948623e+03 > 9711 KSP preconditioned resid norm 1.080641699664e+00 true resid norm 1.594098725388e-01 ||r(i)||/||b|| 8.917449007053e-03 > 9711 KSP Residual norm 1.080641699664e+00 % max 1.156952614563e+02 min 9.753133694719e-02 max/min 1.186236804269e+03 > 9712 KSP preconditioned resid norm 1.080641699560e+00 true resid norm 1.594103943811e-01 ||r(i)||/||b|| 8.917478199111e-03 > 9712 KSP Residual norm 1.080641699560e+00 % max 1.157175157006e+02 min 8.164872572382e-02 max/min 1.417260522743e+03 > 9713 KSP preconditioned resid norm 1.080641699546e+00 true resid norm 1.594101771105e-01 ||r(i)||/||b|| 8.917466044914e-03 > 9713 KSP Residual norm 1.080641699546e+00 % max 1.157285025175e+02 min 8.043358651149e-02 max/min 1.438808183705e+03 > 9714 KSP preconditioned resid norm 1.080641699505e+00 true resid norm 1.594103059104e-01 ||r(i)||/||b|| 8.917473250025e-03 > 9714 KSP Residual norm 1.080641699505e+00 % max 1.158251990486e+02 min 8.042956633721e-02 max/min 1.440082351843e+03 > 9715 KSP preconditioned resid norm 1.080641699503e+00 true resid norm 1.594103148215e-01 ||r(i)||/||b|| 8.917473748516e-03 > 9715 KSP Residual norm 1.080641699503e+00 % max 1.158319218998e+02 min 7.912575668150e-02 max/min 1.463896545925e+03 > 9716 KSP preconditioned resid norm 1.080641699309e+00 true resid norm 1.594102873744e-01 ||r(i)||/||b|| 8.917472213115e-03 > 9716 KSP Residual norm 1.080641699309e+00 % max 1.164751648212e+02 min 7.459498920670e-02 max/min 1.561434166824e+03 > 9717 KSP preconditioned resid norm 1.080641699277e+00 true resid norm 1.594101525703e-01 ||r(i)||/||b|| 8.917464672122e-03 > 9717 KSP Residual norm 1.080641699277e+00 % max 1.166578264103e+02 min 7.458604738274e-02 max/min 1.564070365758e+03 > 9718 KSP preconditioned resid norm 1.080641697541e+00 true resid norm 1.594090891807e-01 ||r(i)||/||b|| 8.917405185702e-03 > 9718 KSP Residual norm 1.080641697541e+00 % max 1.166902828397e+02 min 6.207863457218e-02 max/min 1.879717291527e+03 > 9719 KSP preconditioned resid norm 1.080641691292e+00 true resid norm 1.594084448306e-01 ||r(i)||/||b|| 8.917369140517e-03 > 9719 KSP Residual norm 1.080641691292e+00 % max 1.166910940162e+02 min 4.511779816158e-02 max/min 2.586364999424e+03 > 9720 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798840e-01 ||r(i)||/||b|| 8.917309566993e-03 > 9720 KSP Residual norm 1.080641690833e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9721 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798892e-01 ||r(i)||/||b|| 8.917309567288e-03 > 9721 KSP Residual norm 1.080641690833e+00 % max 3.063497720423e+01 min 3.063497720423e+01 max/min 1.000000000000e+00 > 9722 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798982e-01 ||r(i)||/||b|| 8.917309567787e-03 > 9722 KSP Residual norm 1.080641690833e+00 % max 3.066566915002e+01 min 3.845904218039e+00 max/min 7.973591491484e+00 > 9723 KSP preconditioned resid norm 
1.080641690833e+00 true resid norm 1.594073800048e-01 ||r(i)||/||b|| 8.917309573750e-03 > 9723 KSP Residual norm 1.080641690833e+00 % max 3.713330064708e+01 min 1.336316867286e+00 max/min 2.778779611043e+01 > 9724 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800084e-01 ||r(i)||/||b|| 8.917309573955e-03 > 9724 KSP Residual norm 1.080641690833e+00 % max 4.496345800250e+01 min 1.226793989785e+00 max/min 3.665118868930e+01 > 9725 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800144e-01 ||r(i)||/||b|| 8.917309574290e-03 > 9725 KSP Residual norm 1.080641690833e+00 % max 8.684799302076e+01 min 1.183106260116e+00 max/min 7.340675639080e+01 > 9726 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073803210e-01 ||r(i)||/||b|| 8.917309591441e-03 > 9726 KSP Residual norm 1.080641690833e+00 % max 9.802654643741e+01 min 1.168688159857e+00 max/min 8.387741897664e+01 > 9727 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073804933e-01 ||r(i)||/||b|| 8.917309601077e-03 > 9727 KSP Residual norm 1.080641690833e+00 % max 1.123333202151e+02 min 1.141267080659e+00 max/min 9.842859933384e+01 > 9728 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073805957e-01 ||r(i)||/||b|| 8.917309606809e-03 > 9728 KSP Residual norm 1.080641690833e+00 % max 1.134978238790e+02 min 1.090790456842e+00 max/min 1.040509872149e+02 > 9729 KSP preconditioned resid norm 1.080641690832e+00 true resid norm 1.594074332665e-01 ||r(i)||/||b|| 8.917312553232e-03 > 9729 KSP Residual norm 1.080641690832e+00 % max 1.139911500486e+02 min 4.122400558994e-01 max/min 2.765164336102e+02 > 9730 KSP preconditioned resid norm 1.080641690831e+00 true resid norm 1.594074947247e-01 ||r(i)||/||b|| 8.917315991226e-03 > 9730 KSP Residual norm 1.080641690831e+00 % max 1.140007051733e+02 min 2.895754967285e-01 max/min 3.936821535706e+02 > 9731 KSP preconditioned resid norm 1.080641690831e+00 true resid norm 1.594074918191e-01 ||r(i)||/||b|| 8.917315828690e-03 > 9731 KSP Residual norm 1.080641690831e+00 % max 1.140387143094e+02 min 2.881941911453e-01 max/min 3.957009468380e+02 > 9732 KSP preconditioned resid norm 1.080641690830e+00 true resid norm 1.594074673079e-01 ||r(i)||/||b|| 8.917314457521e-03 > 9732 KSP Residual norm 1.080641690830e+00 % max 1.140387637599e+02 min 2.501645326450e-01 max/min 4.558550428958e+02 > 9733 KSP preconditioned resid norm 1.080641690828e+00 true resid norm 1.594074665892e-01 ||r(i)||/||b|| 8.917314417314e-03 > 9733 KSP Residual norm 1.080641690828e+00 % max 1.141359902062e+02 min 2.500652873669e-01 max/min 4.564247657404e+02 > 9734 KSP preconditioned resid norm 1.080641690827e+00 true resid norm 1.594074597377e-01 ||r(i)||/||b|| 8.917314034043e-03 > 9734 KSP Residual norm 1.080641690827e+00 % max 1.141719421992e+02 min 2.470354276866e-01 max/min 4.621682941123e+02 > 9735 KSP preconditioned resid norm 1.080641690800e+00 true resid norm 1.594073953224e-01 ||r(i)||/||b|| 8.917310430625e-03 > 9735 KSP Residual norm 1.080641690800e+00 % max 1.141769958343e+02 min 2.461584786890e-01 max/min 4.638353163473e+02 > 9736 KSP preconditioned resid norm 1.080641690797e+00 true resid norm 1.594072661531e-01 ||r(i)||/||b|| 8.917303204847e-03 > 9736 KSP Residual norm 1.080641690797e+00 % max 1.150247889475e+02 min 1.817430598855e-01 max/min 6.328978340080e+02 > 9737 KSP preconditioned resid norm 1.080641690679e+00 true resid norm 1.594068211899e-01 ||r(i)||/||b|| 8.917278313432e-03 > 9737 KSP Residual norm 1.080641690679e+00 % max 
1.153658852526e+02 min 1.758225138542e-01 max/min 6.561496745986e+02 > 9738 KSP preconditioned resid norm 1.080641690661e+00 true resid norm 1.594066706603e-01 ||r(i)||/||b|| 8.917269892752e-03 > 9738 KSP Residual norm 1.080641690661e+00 % max 1.154409350014e+02 min 1.682258367468e-01 max/min 6.862259521714e+02 > 9739 KSP preconditioned resid norm 1.080641690251e+00 true resid norm 1.594050830368e-01 ||r(i)||/||b|| 8.917181080485e-03 > 9739 KSP Residual norm 1.080641690251e+00 % max 1.154410703479e+02 min 1.254612102469e-01 max/min 9.201335625627e+02 > 9740 KSP preconditioned resid norm 1.080641690057e+00 true resid norm 1.594050040532e-01 ||r(i)||/||b|| 8.917176662114e-03 > 9740 KSP Residual norm 1.080641690057e+00 % max 1.155780358111e+02 min 1.115226752201e-01 max/min 1.036363551923e+03 > 9741 KSP preconditioned resid norm 1.080641689964e+00 true resid norm 1.594040409622e-01 ||r(i)||/||b|| 8.917122786435e-03 > 9741 KSP Residual norm 1.080641689964e+00 % max 1.156949319460e+02 min 9.748084084270e-02 max/min 1.186847907198e+03 > 9742 KSP preconditioned resid norm 1.080641689862e+00 true resid norm 1.594035276798e-01 ||r(i)||/||b|| 8.917094073223e-03 > 9742 KSP Residual norm 1.080641689862e+00 % max 1.157177974822e+02 min 8.161690928512e-02 max/min 1.417816461022e+03 > 9743 KSP preconditioned resid norm 1.080641689848e+00 true resid norm 1.594037462881e-01 ||r(i)||/||b|| 8.917106302257e-03 > 9743 KSP Residual norm 1.080641689848e+00 % max 1.157298152734e+02 min 8.041673363017e-02 max/min 1.439126038190e+03 > 9744 KSP preconditioned resid norm 1.080641689809e+00 true resid norm 1.594036242374e-01 ||r(i)||/||b|| 8.917099474695e-03 > 9744 KSP Residual norm 1.080641689809e+00 % max 1.158244737331e+02 min 8.041224000025e-02 max/min 1.440383624841e+03 > 9745 KSP preconditioned resid norm 1.080641689808e+00 true resid norm 1.594036161930e-01 ||r(i)||/||b|| 8.917099024685e-03 > 9745 KSP Residual norm 1.080641689808e+00 % max 1.158301611517e+02 min 7.912028815054e-02 max/min 1.463975471516e+03 > 9746 KSP preconditioned resid norm 1.080641689617e+00 true resid norm 1.594036440841e-01 ||r(i)||/||b|| 8.917100584928e-03 > 9746 KSP Residual norm 1.080641689617e+00 % max 1.164679674294e+02 min 7.460741052548e-02 max/min 1.561077734894e+03 > 9747 KSP preconditioned resid norm 1.080641689585e+00 true resid norm 1.594037786262e-01 ||r(i)||/||b|| 8.917108111263e-03 > 9747 KSP Residual norm 1.080641689585e+00 % max 1.166489093034e+02 min 7.459780450955e-02 max/min 1.563704321734e+03 > 9748 KSP preconditioned resid norm 1.080641687880e+00 true resid norm 1.594048285858e-01 ||r(i)||/||b|| 8.917166846404e-03 > 9748 KSP Residual norm 1.080641687880e+00 % max 1.166808944816e+02 min 6.210471238399e-02 max/min 1.878776827114e+03 > 9749 KSP preconditioned resid norm 1.080641681745e+00 true resid norm 1.594054707973e-01 ||r(i)||/||b|| 8.917202771956e-03 > 9749 KSP Residual norm 1.080641681745e+00 % max 1.166816242497e+02 min 4.513512286802e-02 max/min 2.585162437485e+03 > 9750 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161384e-01 ||r(i)||/||b|| 8.917261248737e-03 > 9750 KSP Residual norm 1.080641681302e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9751 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161332e-01 ||r(i)||/||b|| 8.917261248446e-03 > 9751 KSP Residual norm 1.080641681302e+00 % max 3.063463477969e+01 min 3.063463477969e+01 max/min 1.000000000000e+00 > 9752 KSP preconditioned resid norm 1.080641681302e+00 true resid 
norm 1.594065161243e-01 ||r(i)||/||b|| 8.917261247951e-03 > 9752 KSP Residual norm 1.080641681302e+00 % max 3.066528223339e+01 min 3.844415595957e+00 max/min 7.976578355796e+00 > 9753 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160188e-01 ||r(i)||/||b|| 8.917261242049e-03 > 9753 KSP Residual norm 1.080641681302e+00 % max 3.714551741771e+01 min 1.336558367241e+00 max/min 2.779191566050e+01 > 9754 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160151e-01 ||r(i)||/||b|| 8.917261241840e-03 > 9754 KSP Residual norm 1.080641681302e+00 % max 4.500946346995e+01 min 1.226798482639e+00 max/min 3.668855489054e+01 > 9755 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160091e-01 ||r(i)||/||b|| 8.917261241506e-03 > 9755 KSP Residual norm 1.080641681302e+00 % max 8.688228012809e+01 min 1.183121177118e+00 max/min 7.343481108144e+01 > 9756 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065157105e-01 ||r(i)||/||b|| 8.917261224803e-03 > 9756 KSP Residual norm 1.080641681302e+00 % max 9.802497401249e+01 min 1.168940055038e+00 max/min 8.385799903943e+01 > 9757 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065155414e-01 ||r(i)||/||b|| 8.917261215340e-03 > 9757 KSP Residual norm 1.080641681302e+00 % max 1.122419823631e+02 min 1.141674012007e+00 max/min 9.831351259875e+01 > 9758 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065154397e-01 ||r(i)||/||b|| 8.917261209650e-03 > 9758 KSP Residual norm 1.080641681302e+00 % max 1.134941779165e+02 min 1.090790164362e+00 max/min 1.040476726180e+02 > 9759 KSP preconditioned resid norm 1.080641681301e+00 true resid norm 1.594064632071e-01 ||r(i)||/||b|| 8.917258287741e-03 > 9759 KSP Residual norm 1.080641681301e+00 % max 1.139914602803e+02 min 4.119712251563e-01 max/min 2.766976267263e+02 > 9760 KSP preconditioned resid norm 1.080641681300e+00 true resid norm 1.594064023343e-01 ||r(i)||/||b|| 8.917254882493e-03 > 9760 KSP Residual norm 1.080641681300e+00 % max 1.140011334474e+02 min 2.894513550128e-01 max/min 3.938524780524e+02 > 9761 KSP preconditioned resid norm 1.080641681299e+00 true resid norm 1.594064052181e-01 ||r(i)||/||b|| 8.917255043816e-03 > 9761 KSP Residual norm 1.080641681299e+00 % max 1.140392200085e+02 min 2.880562980628e-01 max/min 3.958921251693e+02 > 9762 KSP preconditioned resid norm 1.080641681299e+00 true resid norm 1.594064294735e-01 ||r(i)||/||b|| 8.917256400669e-03 > 9762 KSP Residual norm 1.080641681299e+00 % max 1.140392630219e+02 min 2.501715931763e-01 max/min 4.558441730892e+02 > 9763 KSP preconditioned resid norm 1.080641681297e+00 true resid norm 1.594064302085e-01 ||r(i)||/||b|| 8.917256441789e-03 > 9763 KSP Residual norm 1.080641681297e+00 % max 1.141360422663e+02 min 2.500713478839e-01 max/min 4.564139123977e+02 > 9764 KSP preconditioned resid norm 1.080641681296e+00 true resid norm 1.594064369343e-01 ||r(i)||/||b|| 8.917256818032e-03 > 9764 KSP Residual norm 1.080641681296e+00 % max 1.141719172739e+02 min 2.470523296471e-01 max/min 4.621365742109e+02 > 9765 KSP preconditioned resid norm 1.080641681269e+00 true resid norm 1.594065007316e-01 ||r(i)||/||b|| 8.917260386877e-03 > 9765 KSP Residual norm 1.080641681269e+00 % max 1.141770017074e+02 min 2.461726211496e-01 max/min 4.638086931610e+02 > 9766 KSP preconditioned resid norm 1.080641681266e+00 true resid norm 1.594066285385e-01 ||r(i)||/||b|| 8.917267536440e-03 > 9766 KSP Residual norm 1.080641681266e+00 % max 1.150251639969e+02 min 
1.817295045513e-01 max/min 6.329471060897e+02 > 9767 KSP preconditioned resid norm 1.080641681150e+00 true resid norm 1.594070699054e-01 ||r(i)||/||b|| 8.917292226672e-03 > 9767 KSP Residual norm 1.080641681150e+00 % max 1.153670601170e+02 min 1.757832464778e-01 max/min 6.563029323251e+02 > 9768 KSP preconditioned resid norm 1.080641681133e+00 true resid norm 1.594072188128e-01 ||r(i)||/||b|| 8.917300556609e-03 > 9768 KSP Residual norm 1.080641681133e+00 % max 1.154419312149e+02 min 1.682007202940e-01 max/min 6.863343451393e+02 > 9769 KSP preconditioned resid norm 1.080641680732e+00 true resid norm 1.594087897943e-01 ||r(i)||/||b|| 8.917388437917e-03 > 9769 KSP Residual norm 1.080641680732e+00 % max 1.154420752094e+02 min 1.254369186538e-01 max/min 9.203197627011e+02 > 9770 KSP preconditioned resid norm 1.080641680543e+00 true resid norm 1.594088670353e-01 ||r(i)||/||b|| 8.917392758804e-03 > 9770 KSP Residual norm 1.080641680543e+00 % max 1.155791224140e+02 min 1.115277033208e-01 max/min 1.036326571538e+03 > 9771 KSP preconditioned resid norm 1.080641680452e+00 true resid norm 1.594098136932e-01 ||r(i)||/||b|| 8.917445715209e-03 > 9771 KSP Residual norm 1.080641680452e+00 % max 1.156952573953e+02 min 9.753101753312e-02 max/min 1.186240647556e+03 > 9772 KSP preconditioned resid norm 1.080641680352e+00 true resid norm 1.594103250846e-01 ||r(i)||/||b|| 8.917474322641e-03 > 9772 KSP Residual norm 1.080641680352e+00 % max 1.157175142052e+02 min 8.164839358709e-02 max/min 1.417266269688e+03 > 9773 KSP preconditioned resid norm 1.080641680339e+00 true resid norm 1.594101121664e-01 ||r(i)||/||b|| 8.917462411912e-03 > 9773 KSP Residual norm 1.080641680339e+00 % max 1.157285073186e+02 min 8.043338619163e-02 max/min 1.438811826757e+03 > 9774 KSP preconditioned resid norm 1.080641680299e+00 true resid norm 1.594102383476e-01 ||r(i)||/||b|| 8.917469470538e-03 > 9774 KSP Residual norm 1.080641680299e+00 % max 1.158251880541e+02 min 8.042936196082e-02 max/min 1.440085874492e+03 > 9775 KSP preconditioned resid norm 1.080641680298e+00 true resid norm 1.594102470687e-01 ||r(i)||/||b|| 8.917469958400e-03 > 9775 KSP Residual norm 1.080641680298e+00 % max 1.158319028736e+02 min 7.912566724984e-02 max/min 1.463897960037e+03 > 9776 KSP preconditioned resid norm 1.080641680112e+00 true resid norm 1.594102201623e-01 ||r(i)||/||b|| 8.917468453243e-03 > 9776 KSP Residual norm 1.080641680112e+00 % max 1.164751028861e+02 min 7.459509526345e-02 max/min 1.561431116546e+03 > 9777 KSP preconditioned resid norm 1.080641680081e+00 true resid norm 1.594100880055e-01 ||r(i)||/||b|| 8.917461060344e-03 > 9777 KSP Residual norm 1.080641680081e+00 % max 1.166577501679e+02 min 7.458614509876e-02 max/min 1.564067294448e+03 > 9778 KSP preconditioned resid norm 1.080641678414e+00 true resid norm 1.594090458938e-01 ||r(i)||/||b|| 8.917402764217e-03 > 9778 KSP Residual norm 1.080641678414e+00 % max 1.166902022924e+02 min 6.207884124039e-02 max/min 1.879709736213e+03 > 9779 KSP preconditioned resid norm 1.080641672412e+00 true resid norm 1.594084143785e-01 ||r(i)||/||b|| 8.917367437011e-03 > 9779 KSP Residual norm 1.080641672412e+00 % max 1.166910128239e+02 min 4.511799534103e-02 max/min 2.586351896663e+03 > 9780 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707031e-01 ||r(i)||/||b|| 8.917309053412e-03 > 9780 KSP Residual norm 1.080641671971e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9781 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707083e-01 
||r(i)||/||b|| 8.917309053702e-03 > 9781 KSP Residual norm 1.080641671971e+00 % max 3.063497578379e+01 min 3.063497578379e+01 max/min 1.000000000000e+00 > 9782 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707170e-01 ||r(i)||/||b|| 8.917309054190e-03 > 9782 KSP Residual norm 1.080641671971e+00 % max 3.066566713389e+01 min 3.845888622043e+00 max/min 7.973623302070e+00 > 9783 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708215e-01 ||r(i)||/||b|| 8.917309060034e-03 > 9783 KSP Residual norm 1.080641671971e+00 % max 3.713345706068e+01 min 1.336320269525e+00 max/min 2.778784241137e+01 > 9784 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708251e-01 ||r(i)||/||b|| 8.917309060235e-03 > 9784 KSP Residual norm 1.080641671971e+00 % max 4.496404075363e+01 min 1.226794215442e+00 max/min 3.665165696713e+01 > 9785 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708309e-01 ||r(i)||/||b|| 8.917309060564e-03 > 9785 KSP Residual norm 1.080641671971e+00 % max 8.684843710054e+01 min 1.183106407741e+00 max/min 7.340712258198e+01 > 9786 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073711314e-01 ||r(i)||/||b|| 8.917309077372e-03 > 9786 KSP Residual norm 1.080641671971e+00 % max 9.802652080694e+01 min 1.168690722569e+00 max/min 8.387721311884e+01 > 9787 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073713003e-01 ||r(i)||/||b|| 8.917309086817e-03 > 9787 KSP Residual norm 1.080641671971e+00 % max 1.123323967807e+02 min 1.141271419633e+00 max/min 9.842741599265e+01 > 9788 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073714007e-01 ||r(i)||/||b|| 8.917309092435e-03 > 9788 KSP Residual norm 1.080641671971e+00 % max 1.134977855491e+02 min 1.090790461557e+00 max/min 1.040509516255e+02 > 9789 KSP preconditioned resid norm 1.080641671970e+00 true resid norm 1.594074230188e-01 ||r(i)||/||b|| 8.917311979972e-03 > 9789 KSP Residual norm 1.080641671970e+00 % max 1.139911533240e+02 min 4.122377311421e-01 max/min 2.765180009315e+02 > 9790 KSP preconditioned resid norm 1.080641671969e+00 true resid norm 1.594074832488e-01 ||r(i)||/||b|| 8.917315349259e-03 > 9790 KSP Residual norm 1.080641671969e+00 % max 1.140007095063e+02 min 2.895743194700e-01 max/min 3.936837690403e+02 > 9791 KSP preconditioned resid norm 1.080641671969e+00 true resid norm 1.594074804009e-01 ||r(i)||/||b|| 8.917315189946e-03 > 9791 KSP Residual norm 1.080641671969e+00 % max 1.140387195674e+02 min 2.881928861513e-01 max/min 3.957027568944e+02 > 9792 KSP preconditioned resid norm 1.080641671968e+00 true resid norm 1.594074563767e-01 ||r(i)||/||b|| 8.917313846024e-03 > 9792 KSP Residual norm 1.080641671968e+00 % max 1.140387689617e+02 min 2.501646075030e-01 max/min 4.558549272813e+02 > 9793 KSP preconditioned resid norm 1.080641671966e+00 true resid norm 1.594074556720e-01 ||r(i)||/||b|| 8.917313806607e-03 > 9793 KSP Residual norm 1.080641671966e+00 % max 1.141359909124e+02 min 2.500653522913e-01 max/min 4.564246500628e+02 > 9794 KSP preconditioned resid norm 1.080641671965e+00 true resid norm 1.594074489575e-01 ||r(i)||/||b|| 8.917313430994e-03 > 9794 KSP Residual norm 1.080641671965e+00 % max 1.141719419220e+02 min 2.470356110582e-01 max/min 4.621679499282e+02 > 9795 KSP preconditioned resid norm 1.080641671939e+00 true resid norm 1.594073858301e-01 ||r(i)||/||b|| 8.917309899620e-03 > 9795 KSP Residual norm 1.080641671939e+00 % max 1.141769959186e+02 min 2.461586211459e-01 max/min 
4.638350482589e+02 > 9796 KSP preconditioned resid norm 1.080641671936e+00 true resid norm 1.594072592237e-01 ||r(i)||/||b|| 8.917302817214e-03 > 9796 KSP Residual norm 1.080641671936e+00 % max 1.150247937261e+02 min 1.817428765765e-01 max/min 6.328984986528e+02 > 9797 KSP preconditioned resid norm 1.080641671823e+00 true resid norm 1.594068231505e-01 ||r(i)||/||b|| 8.917278423111e-03 > 9797 KSP Residual norm 1.080641671823e+00 % max 1.153659002697e+02 min 1.758220533048e-01 max/min 6.561514787326e+02 > 9798 KSP preconditioned resid norm 1.080641671805e+00 true resid norm 1.594066756326e-01 ||r(i)||/||b|| 8.917270170903e-03 > 9798 KSP Residual norm 1.080641671805e+00 % max 1.154409481864e+02 min 1.682255312531e-01 max/min 6.862272767188e+02 > 9799 KSP preconditioned resid norm 1.080641671412e+00 true resid norm 1.594051197519e-01 ||r(i)||/||b|| 8.917183134347e-03 > 9799 KSP Residual norm 1.080641671412e+00 % max 1.154410836529e+02 min 1.254609413083e-01 max/min 9.201356410140e+02 > 9800 KSP preconditioned resid norm 1.080641671225e+00 true resid norm 1.594050423544e-01 ||r(i)||/||b|| 8.917178804700e-03 > 9800 KSP Residual norm 1.080641671225e+00 % max 1.155780495006e+02 min 1.115226000164e-01 max/min 1.036364373532e+03 > 9801 KSP preconditioned resid norm 1.080641671136e+00 true resid norm 1.594040985764e-01 ||r(i)||/||b|| 8.917126009400e-03 > 9801 KSP Residual norm 1.080641671136e+00 % max 1.156949344498e+02 min 9.748153395641e-02 max/min 1.186839494150e+03 > 9802 KSP preconditioned resid norm 1.080641671038e+00 true resid norm 1.594035955111e-01 ||r(i)||/||b|| 8.917097867731e-03 > 9802 KSP Residual norm 1.080641671038e+00 % max 1.157177902650e+02 min 8.161721244731e-02 max/min 1.417811106202e+03 > 9803 KSP preconditioned resid norm 1.080641671024e+00 true resid norm 1.594038096704e-01 ||r(i)||/||b|| 8.917109847884e-03 > 9803 KSP Residual norm 1.080641671024e+00 % max 1.157297935320e+02 min 8.041686995864e-02 max/min 1.439123328121e+03 > 9804 KSP preconditioned resid norm 1.080641670988e+00 true resid norm 1.594036899968e-01 ||r(i)||/||b|| 8.917103153299e-03 > 9804 KSP Residual norm 1.080641670988e+00 % max 1.158244769690e+02 min 8.041238179269e-02 max/min 1.440381125230e+03 > 9805 KSP preconditioned resid norm 1.080641670986e+00 true resid norm 1.594036821095e-01 ||r(i)||/||b|| 8.917102712083e-03 > 9805 KSP Residual norm 1.080641670986e+00 % max 1.158301768585e+02 min 7.912030787936e-02 max/min 1.463975304989e+03 > 9806 KSP preconditioned resid norm 1.080641670803e+00 true resid norm 1.594037094369e-01 ||r(i)||/||b|| 8.917104240783e-03 > 9806 KSP Residual norm 1.080641670803e+00 % max 1.164680491000e+02 min 7.460726855236e-02 max/min 1.561081800203e+03 > 9807 KSP preconditioned resid norm 1.080641670773e+00 true resid norm 1.594038413140e-01 ||r(i)||/||b|| 8.917111618039e-03 > 9807 KSP Residual norm 1.080641670773e+00 % max 1.166490110820e+02 min 7.459766739249e-02 max/min 1.563708560326e+03 > 9808 KSP preconditioned resid norm 1.080641669134e+00 true resid norm 1.594048703119e-01 ||r(i)||/||b|| 8.917169180576e-03 > 9808 KSP Residual norm 1.080641669134e+00 % max 1.166810013448e+02 min 6.210440124755e-02 max/min 1.878787960288e+03 > 9809 KSP preconditioned resid norm 1.080641663243e+00 true resid norm 1.594054996410e-01 ||r(i)||/||b|| 8.917204385483e-03 > 9809 KSP Residual norm 1.080641663243e+00 % max 1.166817320749e+02 min 4.513497352470e-02 max/min 2.585173380262e+03 > 9810 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065242705e-01 ||r(i)||/||b|| 
8.917261703653e-03 > 9810 KSP Residual norm 1.080641662817e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9811 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065242654e-01 ||r(i)||/||b|| 8.917261703367e-03 > 9811 KSP Residual norm 1.080641662817e+00 % max 3.063464017134e+01 min 3.063464017134e+01 max/min 1.000000000000e+00 > 9812 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065242568e-01 ||r(i)||/||b|| 8.917261702882e-03 > 9812 KSP Residual norm 1.080641662817e+00 % max 3.066528792324e+01 min 3.844429757599e+00 max/min 7.976550452671e+00 > 9813 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065241534e-01 ||r(i)||/||b|| 8.917261697099e-03 > 9813 KSP Residual norm 1.080641662817e+00 % max 3.714542869717e+01 min 1.336556919260e+00 max/min 2.779187938941e+01 > 9814 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065241497e-01 ||r(i)||/||b|| 8.917261696894e-03 > 9814 KSP Residual norm 1.080641662817e+00 % max 4.500912339640e+01 min 1.226798613987e+00 max/min 3.668827375841e+01 > 9815 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065241439e-01 ||r(i)||/||b|| 8.917261696565e-03 > 9815 KSP Residual norm 1.080641662817e+00 % max 8.688203511576e+01 min 1.183121026759e+00 max/min 7.343461332421e+01 > 9816 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065238512e-01 ||r(i)||/||b|| 8.917261680193e-03 > 9816 KSP Residual norm 1.080641662817e+00 % max 9.802498010891e+01 min 1.168937586878e+00 max/min 8.385818131719e+01 > 9817 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065236853e-01 ||r(i)||/||b|| 8.917261670916e-03 > 9817 KSP Residual norm 1.080641662817e+00 % max 1.122428829897e+02 min 1.141670208530e+00 max/min 9.831462899798e+01 > 9818 KSP preconditioned resid norm 1.080641662817e+00 true resid norm 1.594065235856e-01 ||r(i)||/||b|| 8.917261665338e-03 > 9818 KSP Residual norm 1.080641662817e+00 % max 1.134942125400e+02 min 1.090790174709e+00 max/min 1.040477033727e+02 > 9819 KSP preconditioned resid norm 1.080641662816e+00 true resid norm 1.594064723991e-01 ||r(i)||/||b|| 8.917258801942e-03 > 9819 KSP Residual norm 1.080641662816e+00 % max 1.139914573562e+02 min 4.119742762810e-01 max/min 2.766955703769e+02 > 9820 KSP preconditioned resid norm 1.080641662815e+00 true resid norm 1.594064127438e-01 ||r(i)||/||b|| 8.917255464802e-03 > 9820 KSP Residual norm 1.080641662815e+00 % max 1.140011292201e+02 min 2.894526616195e-01 max/min 3.938506855740e+02 > 9821 KSP preconditioned resid norm 1.080641662815e+00 true resid norm 1.594064155702e-01 ||r(i)||/||b|| 8.917255622914e-03 > 9821 KSP Residual norm 1.080641662815e+00 % max 1.140392151549e+02 min 2.880577522236e-01 max/min 3.958901097943e+02 > 9822 KSP preconditioned resid norm 1.080641662815e+00 true resid norm 1.594064393436e-01 ||r(i)||/||b|| 8.917256952808e-03 > 9822 KSP Residual norm 1.080641662815e+00 % max 1.140392582401e+02 min 2.501715268375e-01 max/min 4.558442748531e+02 > 9823 KSP preconditioned resid norm 1.080641662812e+00 true resid norm 1.594064400637e-01 ||r(i)||/||b|| 8.917256993092e-03 > 9823 KSP Residual norm 1.080641662812e+00 % max 1.141360419319e+02 min 2.500712914815e-01 max/min 4.564140140028e+02 > 9824 KSP preconditioned resid norm 1.080641662811e+00 true resid norm 1.594064466562e-01 ||r(i)||/||b|| 8.917257361878e-03 > 9824 KSP Residual norm 1.080641662811e+00 % max 1.141719174947e+02 min 2.470521750742e-01 max/min 4.621368642491e+02 
> 9825 KSP preconditioned resid norm 1.080641662786e+00 true resid norm 1.594065091771e-01 ||r(i)||/||b|| 8.917260859317e-03 > 9825 KSP Residual norm 1.080641662786e+00 % max 1.141770016737e+02 min 2.461724809746e-01 max/min 4.638089571251e+02 > 9826 KSP preconditioned resid norm 1.080641662783e+00 true resid norm 1.594066344484e-01 ||r(i)||/||b|| 8.917267867042e-03 > 9826 KSP Residual norm 1.080641662783e+00 % max 1.150251612560e+02 min 1.817295914017e-01 max/min 6.329467885162e+02 > 9827 KSP preconditioned resid norm 1.080641662672e+00 true resid norm 1.594070669773e-01 ||r(i)||/||b|| 8.917292062876e-03 > 9827 KSP Residual norm 1.080641662672e+00 % max 1.153670515865e+02 min 1.757835701538e-01 max/min 6.563016753248e+02 > 9828 KSP preconditioned resid norm 1.080641662655e+00 true resid norm 1.594072129068e-01 ||r(i)||/||b|| 8.917300226227e-03 > 9828 KSP Residual norm 1.080641662655e+00 % max 1.154419244262e+02 min 1.682009156098e-01 max/min 6.863335078032e+02 > 9829 KSP preconditioned resid norm 1.080641662270e+00 true resid norm 1.594087524825e-01 ||r(i)||/||b|| 8.917386350680e-03 > 9829 KSP Residual norm 1.080641662270e+00 % max 1.154420683680e+02 min 1.254371322814e-01 max/min 9.203181407961e+02 > 9830 KSP preconditioned resid norm 1.080641662088e+00 true resid norm 1.594088281904e-01 ||r(i)||/||b|| 8.917390585807e-03 > 9830 KSP Residual norm 1.080641662088e+00 % max 1.155791143272e+02 min 1.115275270000e-01 max/min 1.036328137421e+03 > 9831 KSP preconditioned resid norm 1.080641662001e+00 true resid norm 1.594097559916e-01 ||r(i)||/||b|| 8.917442487360e-03 > 9831 KSP Residual norm 1.080641662001e+00 % max 1.156952534277e+02 min 9.753070058662e-02 max/min 1.186244461814e+03 > 9832 KSP preconditioned resid norm 1.080641661905e+00 true resid norm 1.594102571355e-01 ||r(i)||/||b|| 8.917470521541e-03 > 9832 KSP Residual norm 1.080641661905e+00 % max 1.157175128244e+02 min 8.164806814555e-02 max/min 1.417271901866e+03 > 9833 KSP preconditioned resid norm 1.080641661892e+00 true resid norm 1.594100484838e-01 ||r(i)||/||b|| 8.917458849484e-03 > 9833 KSP Residual norm 1.080641661892e+00 % max 1.157285121903e+02 min 8.043319039381e-02 max/min 1.438815389813e+03 > 9834 KSP preconditioned resid norm 1.080641661854e+00 true resid norm 1.594101720988e-01 ||r(i)||/||b|| 8.917465764556e-03 > 9834 KSP Residual norm 1.080641661854e+00 % max 1.158251773437e+02 min 8.042916217223e-02 max/min 1.440089318544e+03 > 9835 KSP preconditioned resid norm 1.080641661853e+00 true resid norm 1.594101806342e-01 ||r(i)||/||b|| 8.917466242026e-03 > 9835 KSP Residual norm 1.080641661853e+00 % max 1.158318842362e+02 min 7.912558026309e-02 max/min 1.463899333832e+03 > 9836 KSP preconditioned resid norm 1.080641661674e+00 true resid norm 1.594101542582e-01 ||r(i)||/||b|| 8.917464766544e-03 > 9836 KSP Residual norm 1.080641661674e+00 % max 1.164750419425e+02 min 7.459519966941e-02 max/min 1.561428114124e+03 > 9837 KSP preconditioned resid norm 1.080641661644e+00 true resid norm 1.594100247000e-01 ||r(i)||/||b|| 8.917457519010e-03 > 9837 KSP Residual norm 1.080641661644e+00 % max 1.166576751361e+02 min 7.458624135723e-02 max/min 1.564064269942e+03 > 9838 KSP preconditioned resid norm 1.080641660043e+00 true resid norm 1.594090034525e-01 ||r(i)||/||b|| 8.917400390036e-03 > 9838 KSP Residual norm 1.080641660043e+00 % max 1.166901230302e+02 min 6.207904511461e-02 max/min 1.879702286251e+03 > 9839 KSP preconditioned resid norm 1.080641654279e+00 true resid norm 1.594083845247e-01 ||r(i)||/||b|| 8.917365766976e-03 > 9839 KSP 
Residual norm 1.080641654279e+00 % max 1.166909329256e+02 min 4.511818825181e-02 max/min 2.586339067391e+03 > 9840 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073617105e-01 ||r(i)||/||b|| 8.917308550363e-03 > 9840 KSP Residual norm 1.080641653856e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9841 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073617156e-01 ||r(i)||/||b|| 8.917308550646e-03 > 9841 KSP Residual norm 1.080641653856e+00 % max 3.063497434930e+01 min 3.063497434930e+01 max/min 1.000000000000e+00 > 9842 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073617242e-01 ||r(i)||/||b|| 8.917308551127e-03 > 9842 KSP Residual norm 1.080641653856e+00 % max 3.066566511835e+01 min 3.845873341758e+00 max/min 7.973654458505e+00 > 9843 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073618265e-01 ||r(i)||/||b|| 8.917308556853e-03 > 9843 KSP Residual norm 1.080641653856e+00 % max 3.713360973998e+01 min 1.336323585560e+00 max/min 2.778788771015e+01 > 9844 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073618300e-01 ||r(i)||/||b|| 8.917308557050e-03 > 9844 KSP Residual norm 1.080641653856e+00 % max 4.496460969624e+01 min 1.226794432981e+00 max/min 3.665211423154e+01 > 9845 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073618358e-01 ||r(i)||/||b|| 8.917308557372e-03 > 9845 KSP Residual norm 1.080641653856e+00 % max 8.684887047460e+01 min 1.183106552555e+00 max/min 7.340747989862e+01 > 9846 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073621302e-01 ||r(i)||/||b|| 8.917308573843e-03 > 9846 KSP Residual norm 1.080641653856e+00 % max 9.802649587879e+01 min 1.168693234980e+00 max/min 8.387701147298e+01 > 9847 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073622957e-01 ||r(i)||/||b|| 8.917308583099e-03 > 9847 KSP Residual norm 1.080641653856e+00 % max 1.123314913547e+02 min 1.141275669292e+00 max/min 9.842625614229e+01 > 9848 KSP preconditioned resid norm 1.080641653856e+00 true resid norm 1.594073623942e-01 ||r(i)||/||b|| 8.917308588608e-03 > 9848 KSP Residual norm 1.080641653856e+00 % max 1.134977479974e+02 min 1.090790466016e+00 max/min 1.040509167741e+02 > 9849 KSP preconditioned resid norm 1.080641653855e+00 true resid norm 1.594074129801e-01 ||r(i)||/||b|| 8.917311418404e-03 > 9849 KSP Residual norm 1.080641653855e+00 % max 1.139911565327e+02 min 4.122354439173e-01 max/min 2.765195429329e+02 > 9850 KSP preconditioned resid norm 1.080641653854e+00 true resid norm 1.594074720057e-01 ||r(i)||/||b|| 8.917314720319e-03 > 9850 KSP Residual norm 1.080641653854e+00 % max 1.140007137546e+02 min 2.895731636857e-01 max/min 3.936853550364e+02 > 9851 KSP preconditioned resid norm 1.080641653854e+00 true resid norm 1.594074692144e-01 ||r(i)||/||b|| 8.917314564169e-03 > 9851 KSP Residual norm 1.080641653854e+00 % max 1.140387247200e+02 min 2.881916049088e-01 max/min 3.957045339889e+02 > 9852 KSP preconditioned resid norm 1.080641653853e+00 true resid norm 1.594074456679e-01 ||r(i)||/||b|| 8.917313246972e-03 > 9852 KSP Residual norm 1.080641653853e+00 % max 1.140387740588e+02 min 2.501646808296e-01 max/min 4.558548140395e+02 > 9853 KSP preconditioned resid norm 1.080641653851e+00 true resid norm 1.594074449772e-01 ||r(i)||/||b|| 8.917313208332e-03 > 9853 KSP Residual norm 1.080641653851e+00 % max 1.141359916013e+02 min 2.500654158720e-01 max/min 4.564245367688e+02 > 9854 KSP preconditioned 
resid norm 1.080641653850e+00 true resid norm 1.594074383969e-01 ||r(i)||/||b|| 8.917312840228e-03 > 9854 KSP Residual norm 1.080641653850e+00 % max 1.141719416509e+02 min 2.470357905993e-01 max/min 4.621676129356e+02 > 9855 KSP preconditioned resid norm 1.080641653826e+00 true resid norm 1.594073765323e-01 ||r(i)||/||b|| 8.917309379499e-03 > 9855 KSP Residual norm 1.080641653826e+00 % max 1.141769960009e+02 min 2.461587608334e-01 max/min 4.638347853812e+02 > 9856 KSP preconditioned resid norm 1.080641653822e+00 true resid norm 1.594072524403e-01 ||r(i)||/||b|| 8.917302437746e-03 > 9856 KSP Residual norm 1.080641653822e+00 % max 1.150247983913e+02 min 1.817426977612e-01 max/min 6.328991470261e+02 > 9857 KSP preconditioned resid norm 1.080641653714e+00 true resid norm 1.594068250847e-01 ||r(i)||/||b|| 8.917278531310e-03 > 9857 KSP Residual norm 1.080641653714e+00 % max 1.153659149296e+02 min 1.758216030104e-01 max/min 6.561532425728e+02 > 9858 KSP preconditioned resid norm 1.080641653697e+00 true resid norm 1.594066805198e-01 ||r(i)||/||b|| 8.917270444298e-03 > 9858 KSP Residual norm 1.080641653697e+00 % max 1.154409610504e+02 min 1.682252327368e-01 max/min 6.862285709010e+02 > 9859 KSP preconditioned resid norm 1.080641653319e+00 true resid norm 1.594051557657e-01 ||r(i)||/||b|| 8.917185148971e-03 > 9859 KSP Residual norm 1.080641653319e+00 % max 1.154410966341e+02 min 1.254606780252e-01 max/min 9.201376754146e+02 > 9860 KSP preconditioned resid norm 1.080641653140e+00 true resid norm 1.594050799233e-01 ||r(i)||/||b|| 8.917180906316e-03 > 9860 KSP Residual norm 1.080641653140e+00 % max 1.155780628675e+02 min 1.115225287992e-01 max/min 1.036365155202e+03 > 9861 KSP preconditioned resid norm 1.080641653054e+00 true resid norm 1.594041550812e-01 ||r(i)||/||b|| 8.917129170295e-03 > 9861 KSP Residual norm 1.080641653054e+00 % max 1.156949369211e+02 min 9.748220965886e-02 max/min 1.186831292869e+03 > 9862 KSP preconditioned resid norm 1.080641652960e+00 true resid norm 1.594036620365e-01 ||r(i)||/||b|| 8.917101589189e-03 > 9862 KSP Residual norm 1.080641652960e+00 % max 1.157177832797e+02 min 8.161751000999e-02 max/min 1.417805851533e+03 > 9863 KSP preconditioned resid norm 1.080641652947e+00 true resid norm 1.594038718371e-01 ||r(i)||/||b|| 8.917113325517e-03 > 9863 KSP Residual norm 1.080641652947e+00 % max 1.157297723960e+02 min 8.041700428565e-02 max/min 1.439120661408e+03 > 9864 KSP preconditioned resid norm 1.080641652912e+00 true resid norm 1.594037544972e-01 ||r(i)||/||b|| 8.917106761475e-03 > 9864 KSP Residual norm 1.080641652912e+00 % max 1.158244802295e+02 min 8.041252146115e-02 max/min 1.440378663980e+03 > 9865 KSP preconditioned resid norm 1.080641652910e+00 true resid norm 1.594037467642e-01 ||r(i)||/||b|| 8.917106328887e-03 > 9865 KSP Residual norm 1.080641652910e+00 % max 1.158301923080e+02 min 7.912032794006e-02 max/min 1.463975129069e+03 > 9866 KSP preconditioned resid norm 1.080641652734e+00 true resid norm 1.594037735388e-01 ||r(i)||/||b|| 8.917107826673e-03 > 9866 KSP Residual norm 1.080641652734e+00 % max 1.164681289889e+02 min 7.460712971447e-02 max/min 1.561085776047e+03 > 9867 KSP preconditioned resid norm 1.080641652705e+00 true resid norm 1.594039028009e-01 ||r(i)||/||b|| 8.917115057644e-03 > 9867 KSP Residual norm 1.080641652705e+00 % max 1.166491106270e+02 min 7.459753335320e-02 max/min 1.563712704477e+03 > 9868 KSP preconditioned resid norm 1.080641651132e+00 true resid norm 1.594049112430e-01 ||r(i)||/||b|| 8.917171470279e-03 > 9868 KSP Residual norm 
1.080641651132e+00 % max 1.166811058685e+02 min 6.210409714784e-02 max/min 1.878798843026e+03 > 9869 KSP preconditioned resid norm 1.080641645474e+00 true resid norm 1.594055279416e-01 ||r(i)||/||b|| 8.917205968634e-03 > 9869 KSP Residual norm 1.080641645474e+00 % max 1.166818375393e+02 min 4.513482662958e-02 max/min 2.585184130582e+03 > 9870 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065322524e-01 ||r(i)||/||b|| 8.917262150160e-03 > 9870 KSP Residual norm 1.080641645065e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9871 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065322474e-01 ||r(i)||/||b|| 8.917262149880e-03 > 9871 KSP Residual norm 1.080641645065e+00 % max 3.063464541814e+01 min 3.063464541814e+01 max/min 1.000000000000e+00 > 9872 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065322389e-01 ||r(i)||/||b|| 8.917262149406e-03 > 9872 KSP Residual norm 1.080641645065e+00 % max 3.066529346532e+01 min 3.844443657370e+00 max/min 7.976523054651e+00 > 9873 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065321376e-01 ||r(i)||/||b|| 8.917262143737e-03 > 9873 KSP Residual norm 1.080641645065e+00 % max 3.714534104643e+01 min 1.336555480314e+00 max/min 2.779184373079e+01 > 9874 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065321340e-01 ||r(i)||/||b|| 8.917262143537e-03 > 9874 KSP Residual norm 1.080641645065e+00 % max 4.500878758461e+01 min 1.226798739269e+00 max/min 3.668799628162e+01 > 9875 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065321283e-01 ||r(i)||/||b|| 8.917262143216e-03 > 9875 KSP Residual norm 1.080641645065e+00 % max 8.688179296761e+01 min 1.183120879361e+00 max/min 7.343441780400e+01 > 9876 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065318413e-01 ||r(i)||/||b|| 8.917262127165e-03 > 9876 KSP Residual norm 1.080641645065e+00 % max 9.802498627683e+01 min 1.168935165788e+00 max/min 8.385836028017e+01 > 9877 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065316788e-01 ||r(i)||/||b|| 8.917262118072e-03 > 9877 KSP Residual norm 1.080641645065e+00 % max 1.122437663284e+02 min 1.141666474192e+00 max/min 9.831572430803e+01 > 9878 KSP preconditioned resid norm 1.080641645065e+00 true resid norm 1.594065315811e-01 ||r(i)||/||b|| 8.917262112605e-03 > 9878 KSP Residual norm 1.080641645065e+00 % max 1.134942465219e+02 min 1.090790184702e+00 max/min 1.040477335730e+02 > 9879 KSP preconditioned resid norm 1.080641645064e+00 true resid norm 1.594064814200e-01 ||r(i)||/||b|| 8.917259306578e-03 > 9879 KSP Residual norm 1.080641645064e+00 % max 1.139914544856e+02 min 4.119772603942e-01 max/min 2.766935591945e+02 > 9880 KSP preconditioned resid norm 1.080641645063e+00 true resid norm 1.594064229585e-01 ||r(i)||/||b|| 8.917256036220e-03 > 9880 KSP Residual norm 1.080641645063e+00 % max 1.140011250742e+02 min 2.894539413393e-01 max/min 3.938489299774e+02 > 9881 KSP preconditioned resid norm 1.080641645063e+00 true resid norm 1.594064257287e-01 ||r(i)||/||b|| 8.917256191184e-03 > 9881 KSP Residual norm 1.080641645063e+00 % max 1.140392103921e+02 min 2.880591764056e-01 max/min 3.958881359556e+02 > 9882 KSP preconditioned resid norm 1.080641645063e+00 true resid norm 1.594064490294e-01 ||r(i)||/||b|| 8.917257494633e-03 > 9882 KSP Residual norm 1.080641645063e+00 % max 1.140392535477e+02 min 2.501714617108e-01 max/min 4.558443747654e+02 > 9883 KSP preconditioned resid norm 
1.080641645061e+00 true resid norm 1.594064497349e-01 ||r(i)||/||b|| 8.917257534101e-03 > 9883 KSP Residual norm 1.080641645061e+00 % max 1.141360416004e+02 min 2.500712361002e-01 max/min 4.564141137554e+02 > 9884 KSP preconditioned resid norm 1.080641645059e+00 true resid norm 1.594064561966e-01 ||r(i)||/||b|| 8.917257895569e-03 > 9884 KSP Residual norm 1.080641645059e+00 % max 1.141719177119e+02 min 2.470520232306e-01 max/min 4.621371491678e+02 > 9885 KSP preconditioned resid norm 1.080641645035e+00 true resid norm 1.594065174658e-01 ||r(i)||/||b|| 8.917261322993e-03 > 9885 KSP Residual norm 1.080641645035e+00 % max 1.141770016402e+02 min 2.461723435101e-01 max/min 4.638092159834e+02 > 9886 KSP preconditioned resid norm 1.080641645032e+00 true resid norm 1.594066402497e-01 ||r(i)||/||b|| 8.917268191569e-03 > 9886 KSP Residual norm 1.080641645032e+00 % max 1.150251585488e+02 min 1.817296775460e-01 max/min 6.329464735865e+02 > 9887 KSP preconditioned resid norm 1.080641644925e+00 true resid norm 1.594070641129e-01 ||r(i)||/||b|| 8.917291902641e-03 > 9887 KSP Residual norm 1.080641644925e+00 % max 1.153670431587e+02 min 1.757838889123e-01 max/min 6.563004372733e+02 > 9888 KSP preconditioned resid norm 1.080641644909e+00 true resid norm 1.594072071224e-01 ||r(i)||/||b|| 8.917299902647e-03 > 9888 KSP Residual norm 1.080641644909e+00 % max 1.154419177071e+02 min 1.682011082589e-01 max/min 6.863326817642e+02 > 9889 KSP preconditioned resid norm 1.080641644540e+00 true resid norm 1.594087159021e-01 ||r(i)||/||b|| 8.917384304358e-03 > 9889 KSP Residual norm 1.080641644540e+00 % max 1.154420615965e+02 min 1.254373423966e-01 max/min 9.203165452239e+02 > 9890 KSP preconditioned resid norm 1.080641644365e+00 true resid norm 1.594087901062e-01 ||r(i)||/||b|| 8.917388455364e-03 > 9890 KSP Residual norm 1.080641644365e+00 % max 1.155791063432e+02 min 1.115273566807e-01 max/min 1.036329648465e+03 > 9891 KSP preconditioned resid norm 1.080641644282e+00 true resid norm 1.594096994140e-01 ||r(i)||/||b|| 8.917439322388e-03 > 9891 KSP Residual norm 1.080641644282e+00 % max 1.156952495514e+02 min 9.753038620512e-02 max/min 1.186248245835e+03 > 9892 KSP preconditioned resid norm 1.080641644189e+00 true resid norm 1.594101905102e-01 ||r(i)||/||b|| 8.917466794493e-03 > 9892 KSP Residual norm 1.080641644189e+00 % max 1.157175115527e+02 min 8.164774923625e-02 max/min 1.417277422037e+03 > 9893 KSP preconditioned resid norm 1.080641644177e+00 true resid norm 1.594099860409e-01 ||r(i)||/||b|| 8.917455356405e-03 > 9893 KSP Residual norm 1.080641644177e+00 % max 1.157285171248e+02 min 8.043299897298e-02 max/min 1.438818875368e+03 > 9894 KSP preconditioned resid norm 1.080641644141e+00 true resid norm 1.594101071411e-01 ||r(i)||/||b|| 8.917462130798e-03 > 9894 KSP Residual norm 1.080641644141e+00 % max 1.158251669093e+02 min 8.042896682578e-02 max/min 1.440092686509e+03 > 9895 KSP preconditioned resid norm 1.080641644139e+00 true resid norm 1.594101154949e-01 ||r(i)||/||b|| 8.917462598110e-03 > 9895 KSP Residual norm 1.080641644139e+00 % max 1.158318659802e+02 min 7.912549560750e-02 max/min 1.463900669321e+03 > 9896 KSP preconditioned resid norm 1.080641643967e+00 true resid norm 1.594100896394e-01 ||r(i)||/||b|| 8.917461151744e-03 > 9896 KSP Residual norm 1.080641643967e+00 % max 1.164749819823e+02 min 7.459530237415e-02 max/min 1.561425160502e+03 > 9897 KSP preconditioned resid norm 1.080641643939e+00 true resid norm 1.594099626317e-01 ||r(i)||/||b|| 8.917454046886e-03 > 9897 KSP Residual norm 1.080641643939e+00 % max 
1.166576013049e+02 min 7.458633610448e-02 max/min 1.564061293230e+03 > 9898 KSP preconditioned resid norm 1.080641642401e+00 true resid norm 1.594089618420e-01 ||r(i)||/||b|| 8.917398062328e-03 > 9898 KSP Residual norm 1.080641642401e+00 % max 1.166900450421e+02 min 6.207924609942e-02 max/min 1.879694944349e+03 > 9899 KSP preconditioned resid norm 1.080641636866e+00 true resid norm 1.594083552587e-01 ||r(i)||/||b|| 8.917364129827e-03 > 9899 KSP Residual norm 1.080641636866e+00 % max 1.166908543099e+02 min 4.511837692126e-02 max/min 2.586326509785e+03 > 9900 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073529026e-01 ||r(i)||/||b|| 8.917308057647e-03 > 9900 KSP Residual norm 1.080641636459e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9901 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073529076e-01 ||r(i)||/||b|| 8.917308057926e-03 > 9901 KSP Residual norm 1.080641636459e+00 % max 3.063497290246e+01 min 3.063497290246e+01 max/min 1.000000000000e+00 > 9902 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073529160e-01 ||r(i)||/||b|| 8.917308058394e-03 > 9902 KSP Residual norm 1.080641636459e+00 % max 3.066566310477e+01 min 3.845858371013e+00 max/min 7.973684973919e+00 > 9903 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073530163e-01 ||r(i)||/||b|| 8.917308064007e-03 > 9903 KSP Residual norm 1.080641636459e+00 % max 3.713375877488e+01 min 1.336326817614e+00 max/min 2.778793202787e+01 > 9904 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073530198e-01 ||r(i)||/||b|| 8.917308064200e-03 > 9904 KSP Residual norm 1.080641636459e+00 % max 4.496516516046e+01 min 1.226794642680e+00 max/min 3.665256074336e+01 > 9905 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073530254e-01 ||r(i)||/||b|| 8.917308064515e-03 > 9905 KSP Residual norm 1.080641636459e+00 % max 8.684929340512e+01 min 1.183106694605e+00 max/min 7.340782855945e+01 > 9906 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073533139e-01 ||r(i)||/||b|| 8.917308080655e-03 > 9906 KSP Residual norm 1.080641636459e+00 % max 9.802647163348e+01 min 1.168695697992e+00 max/min 8.387681395754e+01 > 9907 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073534761e-01 ||r(i)||/||b|| 8.917308089727e-03 > 9907 KSP Residual norm 1.080641636459e+00 % max 1.123306036185e+02 min 1.141279831413e+00 max/min 9.842511934993e+01 > 9908 KSP preconditioned resid norm 1.080641636459e+00 true resid norm 1.594073535726e-01 ||r(i)||/||b|| 8.917308095124e-03 > 9908 KSP Residual norm 1.080641636459e+00 % max 1.134977112090e+02 min 1.090790470231e+00 max/min 1.040508826457e+02 > 9909 KSP preconditioned resid norm 1.080641636458e+00 true resid norm 1.594074031465e-01 ||r(i)||/||b|| 8.917310868307e-03 > 9909 KSP Residual norm 1.080641636458e+00 % max 1.139911596762e+02 min 4.122331938771e-01 max/min 2.765210598500e+02 > 9910 KSP preconditioned resid norm 1.080641636457e+00 true resid norm 1.594074609911e-01 ||r(i)||/||b|| 8.917314104156e-03 > 9910 KSP Residual norm 1.080641636457e+00 % max 1.140007179199e+02 min 2.895720290537e-01 max/min 3.936869120007e+02 > 9911 KSP preconditioned resid norm 1.080641636457e+00 true resid norm 1.594074582552e-01 ||r(i)||/||b|| 8.917313951111e-03 > 9911 KSP Residual norm 1.080641636457e+00 % max 1.140387297689e+02 min 2.881903470643e-01 max/min 3.957062786128e+02 > 9912 KSP preconditioned resid norm 1.080641636457e+00 true resid 
norm 1.594074351775e-01 ||r(i)||/||b|| 8.917312660133e-03 > 9912 KSP Residual norm 1.080641636457e+00 % max 1.140387790534e+02 min 2.501647526543e-01 max/min 4.558547031242e+02 > 9913 KSP preconditioned resid norm 1.080641636455e+00 true resid norm 1.594074345003e-01 ||r(i)||/||b|| 8.917312622252e-03 > 9913 KSP Residual norm 1.080641636455e+00 % max 1.141359922733e+02 min 2.500654781355e-01 max/min 4.564244258117e+02 > 9914 KSP preconditioned resid norm 1.080641636454e+00 true resid norm 1.594074280517e-01 ||r(i)||/||b|| 8.917312261516e-03 > 9914 KSP Residual norm 1.080641636454e+00 % max 1.141719413857e+02 min 2.470359663864e-01 max/min 4.621672829905e+02 > 9915 KSP preconditioned resid norm 1.080641636430e+00 true resid norm 1.594073674253e-01 ||r(i)||/||b|| 8.917308870053e-03 > 9915 KSP Residual norm 1.080641636430e+00 % max 1.141769960812e+02 min 2.461588977988e-01 max/min 4.638345276249e+02 > 9916 KSP preconditioned resid norm 1.080641636427e+00 true resid norm 1.594072457998e-01 ||r(i)||/||b|| 8.917302066275e-03 > 9916 KSP Residual norm 1.080641636427e+00 % max 1.150248029458e+02 min 1.817425233097e-01 max/min 6.328997795954e+02 > 9917 KSP preconditioned resid norm 1.080641636323e+00 true resid norm 1.594068269925e-01 ||r(i)||/||b|| 8.917278638033e-03 > 9917 KSP Residual norm 1.080641636323e+00 % max 1.153659292411e+02 min 1.758211627252e-01 max/min 6.561549670868e+02 > 9918 KSP preconditioned resid norm 1.080641636306e+00 true resid norm 1.594066853231e-01 ||r(i)||/||b|| 8.917270712996e-03 > 9918 KSP Residual norm 1.080641636306e+00 % max 1.154409736018e+02 min 1.682249410195e-01 max/min 6.862298354942e+02 > 9919 KSP preconditioned resid norm 1.080641635944e+00 true resid norm 1.594051910900e-01 ||r(i)||/||b|| 8.917187125026e-03 > 9919 KSP Residual norm 1.080641635944e+00 % max 1.154411092996e+02 min 1.254604202789e-01 max/min 9.201396667011e+02 > 9920 KSP preconditioned resid norm 1.080641635771e+00 true resid norm 1.594051167722e-01 ||r(i)||/||b|| 8.917182967656e-03 > 9920 KSP Residual norm 1.080641635771e+00 % max 1.155780759198e+02 min 1.115224613869e-01 max/min 1.036365898694e+03 > 9921 KSP preconditioned resid norm 1.080641635689e+00 true resid norm 1.594042104954e-01 ||r(i)||/||b|| 8.917132270191e-03 > 9921 KSP Residual norm 1.080641635689e+00 % max 1.156949393598e+02 min 9.748286841768e-02 max/min 1.186823297649e+03 > 9922 KSP preconditioned resid norm 1.080641635599e+00 true resid norm 1.594037272785e-01 ||r(i)||/||b|| 8.917105238852e-03 > 9922 KSP Residual norm 1.080641635599e+00 % max 1.157177765183e+02 min 8.161780203814e-02 max/min 1.417800695787e+03 > 9923 KSP preconditioned resid norm 1.080641635586e+00 true resid norm 1.594039328090e-01 ||r(i)||/||b|| 8.917116736308e-03 > 9923 KSP Residual norm 1.080641635586e+00 % max 1.157297518468e+02 min 8.041713659406e-02 max/min 1.439118038125e+03 > 9924 KSP preconditioned resid norm 1.080641635552e+00 true resid norm 1.594038177599e-01 ||r(i)||/||b|| 8.917110300416e-03 > 9924 KSP Residual norm 1.080641635552e+00 % max 1.158244835100e+02 min 8.041265899129e-02 max/min 1.440376241290e+03 > 9925 KSP preconditioned resid norm 1.080641635551e+00 true resid norm 1.594038101782e-01 ||r(i)||/||b|| 8.917109876291e-03 > 9925 KSP Residual norm 1.080641635551e+00 % max 1.158302075022e+02 min 7.912034826741e-02 max/min 1.463974944988e+03 > 9926 KSP preconditioned resid norm 1.080641635382e+00 true resid norm 1.594038364112e-01 ||r(i)||/||b|| 8.917111343775e-03 > 9926 KSP Residual norm 1.080641635382e+00 % max 1.164682071358e+02 min 
7.460699390243e-02 max/min 1.561089665242e+03 > 9927 KSP preconditioned resid norm 1.080641635354e+00 true resid norm 1.594039631076e-01 ||r(i)||/||b|| 8.917118431221e-03 > 9927 KSP Residual norm 1.080641635354e+00 % max 1.166492079885e+02 min 7.459740228247e-02 max/min 1.563716757144e+03 > 9928 KSP preconditioned resid norm 1.080641633843e+00 true resid norm 1.594049513925e-01 ||r(i)||/||b|| 8.917173716256e-03 > 9928 KSP Residual norm 1.080641633843e+00 % max 1.166812081048e+02 min 6.210379986959e-02 max/min 1.878809482669e+03 > 9929 KSP preconditioned resid norm 1.080641628410e+00 true resid norm 1.594055557080e-01 ||r(i)||/||b|| 8.917207521894e-03 > 9929 KSP Residual norm 1.080641628410e+00 % max 1.166819406955e+02 min 4.513468212670e-02 max/min 2.585194692807e+03 > 9930 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065400862e-01 ||r(i)||/||b|| 8.917262588389e-03 > 9930 KSP Residual norm 1.080641628017e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9931 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065400814e-01 ||r(i)||/||b|| 8.917262588116e-03 > 9931 KSP Residual norm 1.080641628017e+00 % max 3.063465052437e+01 min 3.063465052437e+01 max/min 1.000000000000e+00 > 9932 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065400730e-01 ||r(i)||/||b|| 8.917262587648e-03 > 9932 KSP Residual norm 1.080641628017e+00 % max 3.066529886382e+01 min 3.844457299464e+00 max/min 7.976496154112e+00 > 9933 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065399737e-01 ||r(i)||/||b|| 8.917262582094e-03 > 9933 KSP Residual norm 1.080641628017e+00 % max 3.714525447417e+01 min 1.336554051053e+00 max/min 2.779180867762e+01 > 9934 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065399702e-01 ||r(i)||/||b|| 8.917262581897e-03 > 9934 KSP Residual norm 1.080641628017e+00 % max 4.500845605790e+01 min 1.226798858744e+00 max/min 3.668772247146e+01 > 9935 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065399646e-01 ||r(i)||/||b|| 8.917262581583e-03 > 9935 KSP Residual norm 1.080641628017e+00 % max 8.688155371236e+01 min 1.183120734870e+00 max/min 7.343422454843e+01 > 9936 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065396833e-01 ||r(i)||/||b|| 8.917262565850e-03 > 9936 KSP Residual norm 1.080641628017e+00 % max 9.802499250688e+01 min 1.168932790974e+00 max/min 8.385853597724e+01 > 9937 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065395240e-01 ||r(i)||/||b|| 8.917262556936e-03 > 9937 KSP Residual norm 1.080641628017e+00 % max 1.122446326752e+02 min 1.141662807959e+00 max/min 9.831679887675e+01 > 9938 KSP preconditioned resid norm 1.080641628017e+00 true resid norm 1.594065394282e-01 ||r(i)||/||b|| 8.917262551579e-03 > 9938 KSP Residual norm 1.080641628017e+00 % max 1.134942798726e+02 min 1.090790194356e+00 max/min 1.040477632269e+02 > 9939 KSP preconditioned resid norm 1.080641628016e+00 true resid norm 1.594064902727e-01 ||r(i)||/||b|| 8.917259801801e-03 > 9939 KSP Residual norm 1.080641628016e+00 % max 1.139914516679e+02 min 4.119801790176e-01 max/min 2.766915921531e+02 > 9940 KSP preconditioned resid norm 1.080641628015e+00 true resid norm 1.594064329818e-01 ||r(i)||/||b|| 8.917256596927e-03 > 9940 KSP Residual norm 1.080641628015e+00 % max 1.140011210085e+02 min 2.894551947128e-01 max/min 3.938472105212e+02 > 9941 KSP preconditioned resid norm 1.080641628015e+00 true resid norm 1.594064356968e-01 
||r(i)||/||b|| 8.917256748803e-03 > 9941 KSP Residual norm 1.080641628015e+00 % max 1.140392057187e+02 min 2.880605712140e-01 max/min 3.958862028153e+02 > 9942 KSP preconditioned resid norm 1.080641628014e+00 true resid norm 1.594064585337e-01 ||r(i)||/||b|| 8.917258026309e-03 > 9942 KSP Residual norm 1.080641628014e+00 % max 1.140392489431e+02 min 2.501713977769e-01 max/min 4.558444728556e+02 > 9943 KSP preconditioned resid norm 1.080641628013e+00 true resid norm 1.594064592249e-01 ||r(i)||/||b|| 8.917258064976e-03 > 9943 KSP Residual norm 1.080641628013e+00 % max 1.141360412719e+02 min 2.500711817244e-01 max/min 4.564142116850e+02 > 9944 KSP preconditioned resid norm 1.080641628011e+00 true resid norm 1.594064655583e-01 ||r(i)||/||b|| 8.917258419265e-03 > 9944 KSP Residual norm 1.080641628011e+00 % max 1.141719179255e+02 min 2.470518740808e-01 max/min 4.621374290332e+02 > 9945 KSP preconditioned resid norm 1.080641627988e+00 true resid norm 1.594065256003e-01 ||r(i)||/||b|| 8.917261778039e-03 > 9945 KSP Residual norm 1.080641627988e+00 % max 1.141770016071e+02 min 2.461722087102e-01 max/min 4.638094698230e+02 > 9946 KSP preconditioned resid norm 1.080641627985e+00 true resid norm 1.594066459440e-01 ||r(i)||/||b|| 8.917268510113e-03 > 9946 KSP Residual norm 1.080641627985e+00 % max 1.150251558754e+02 min 1.817297629761e-01 max/min 6.329461613313e+02 > 9947 KSP preconditioned resid norm 1.080641627883e+00 true resid norm 1.594070613108e-01 ||r(i)||/||b|| 8.917291745887e-03 > 9947 KSP Residual norm 1.080641627883e+00 % max 1.153670348347e+02 min 1.757842027946e-01 max/min 6.562992180216e+02 > 9948 KSP preconditioned resid norm 1.080641627867e+00 true resid norm 1.594072014571e-01 ||r(i)||/||b|| 8.917299585729e-03 > 9948 KSP Residual norm 1.080641627867e+00 % max 1.154419110590e+02 min 1.682012982555e-01 max/min 6.863318669729e+02 > 9949 KSP preconditioned resid norm 1.080641627512e+00 true resid norm 1.594086800399e-01 ||r(i)||/||b|| 8.917382298211e-03 > 9949 KSP Residual norm 1.080641627512e+00 % max 1.154420548965e+02 min 1.254375490145e-01 max/min 9.203149758861e+02 > 9950 KSP preconditioned resid norm 1.080641627344e+00 true resid norm 1.594087527690e-01 ||r(i)||/||b|| 8.917386366706e-03 > 9950 KSP Residual norm 1.080641627344e+00 % max 1.155790984624e+02 min 1.115271921586e-01 max/min 1.036331106570e+03 > 9951 KSP preconditioned resid norm 1.080641627264e+00 true resid norm 1.594096439404e-01 ||r(i)||/||b|| 8.917436219173e-03 > 9951 KSP Residual norm 1.080641627264e+00 % max 1.156952457641e+02 min 9.753007450777e-02 max/min 1.186251998145e+03 > 9952 KSP preconditioned resid norm 1.080641627176e+00 true resid norm 1.594101251850e-01 ||r(i)||/||b|| 8.917463140178e-03 > 9952 KSP Residual norm 1.080641627176e+00 % max 1.157175103847e+02 min 8.164743677427e-02 max/min 1.417282831604e+03 > 9953 KSP preconditioned resid norm 1.080641627164e+00 true resid norm 1.594099248157e-01 ||r(i)||/||b|| 8.917451931444e-03 > 9953 KSP Residual norm 1.080641627164e+00 % max 1.157285221143e+02 min 8.043281187160e-02 max/min 1.438822284357e+03 > 9954 KSP preconditioned resid norm 1.080641627129e+00 true resid norm 1.594100434515e-01 ||r(i)||/||b|| 8.917458567979e-03 > 9954 KSP Residual norm 1.080641627129e+00 % max 1.158251567433e+02 min 8.042877586324e-02 max/min 1.440095979333e+03 > 9955 KSP preconditioned resid norm 1.080641627127e+00 true resid norm 1.594100516277e-01 ||r(i)||/||b|| 8.917459025356e-03 > 9955 KSP Residual norm 1.080641627127e+00 % max 1.158318480981e+02 min 7.912541326160e-02 max/min 
1.463901966807e+03 > 9956 KSP preconditioned resid norm 1.080641626963e+00 true resid norm 1.594100262829e-01 ||r(i)||/||b|| 8.917457607557e-03 > 9956 KSP Residual norm 1.080641626963e+00 % max 1.164749229958e+02 min 7.459540346436e-02 max/min 1.561422253738e+03 > 9957 KSP preconditioned resid norm 1.080641626935e+00 true resid norm 1.594099017783e-01 ||r(i)||/||b|| 8.917450642725e-03 > 9957 KSP Residual norm 1.080641626935e+00 % max 1.166575286636e+02 min 7.458642942097e-02 max/min 1.564058362483e+03 > 9958 KSP preconditioned resid norm 1.080641625459e+00 true resid norm 1.594089210473e-01 ||r(i)||/||b|| 8.917395780257e-03 > 9958 KSP Residual norm 1.080641625459e+00 % max 1.166899683166e+02 min 6.207944430169e-02 max/min 1.879687707085e+03 > 9959 KSP preconditioned resid norm 1.080641620143e+00 true resid norm 1.594083265698e-01 ||r(i)||/||b|| 8.917362524958e-03 > 9959 KSP Residual norm 1.080641620143e+00 % max 1.166907769657e+02 min 4.511856151865e-02 max/min 2.586314213883e+03 > 9960 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073442756e-01 ||r(i)||/||b|| 8.917307575050e-03 > 9960 KSP Residual norm 1.080641619752e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9961 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073442805e-01 ||r(i)||/||b|| 8.917307575322e-03 > 9961 KSP Residual norm 1.080641619752e+00 % max 3.063497144523e+01 min 3.063497144523e+01 max/min 1.000000000000e+00 > 9962 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073442888e-01 ||r(i)||/||b|| 8.917307575783e-03 > 9962 KSP Residual norm 1.080641619752e+00 % max 3.066566109468e+01 min 3.845843703829e+00 max/min 7.973714861098e+00 > 9963 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073443871e-01 ||r(i)||/||b|| 8.917307581283e-03 > 9963 KSP Residual norm 1.080641619752e+00 % max 3.713390425813e+01 min 1.336329967989e+00 max/min 2.778797538606e+01 > 9964 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073443904e-01 ||r(i)||/||b|| 8.917307581472e-03 > 9964 KSP Residual norm 1.080641619752e+00 % max 4.496570748601e+01 min 1.226794844829e+00 max/min 3.665299677086e+01 > 9965 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073443960e-01 ||r(i)||/||b|| 8.917307581781e-03 > 9965 KSP Residual norm 1.080641619752e+00 % max 8.684970616180e+01 min 1.183106833938e+00 max/min 7.340816878957e+01 > 9966 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073446787e-01 ||r(i)||/||b|| 8.917307597597e-03 > 9966 KSP Residual norm 1.080641619752e+00 % max 9.802644805035e+01 min 1.168698112506e+00 max/min 8.387662049028e+01 > 9967 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073448376e-01 ||r(i)||/||b|| 8.917307606488e-03 > 9967 KSP Residual norm 1.080641619752e+00 % max 1.123297332549e+02 min 1.141283907756e+00 max/min 9.842400518531e+01 > 9968 KSP preconditioned resid norm 1.080641619752e+00 true resid norm 1.594073449323e-01 ||r(i)||/||b|| 8.917307611782e-03 > 9968 KSP Residual norm 1.080641619752e+00 % max 1.134976751689e+02 min 1.090790474217e+00 max/min 1.040508492250e+02 > 9969 KSP preconditioned resid norm 1.080641619751e+00 true resid norm 1.594073935137e-01 ||r(i)||/||b|| 8.917310329444e-03 > 9969 KSP Residual norm 1.080641619751e+00 % max 1.139911627557e+02 min 4.122309806702e-01 max/min 2.765225519207e+02 > 9970 KSP preconditioned resid norm 1.080641619750e+00 true resid norm 1.594074502003e-01 ||r(i)||/||b|| 
8.917313500516e-03 > 9970 KSP Residual norm 1.080641619750e+00 % max 1.140007220037e+02 min 2.895709152543e-01 max/min 3.936884403725e+02 > 9971 KSP preconditioned resid norm 1.080641619750e+00 true resid norm 1.594074475189e-01 ||r(i)||/||b|| 8.917313350515e-03 > 9971 KSP Residual norm 1.080641619750e+00 % max 1.140387347164e+02 min 2.881891122657e-01 max/min 3.957079912555e+02 > 9972 KSP preconditioned resid norm 1.080641619750e+00 true resid norm 1.594074249007e-01 ||r(i)||/||b|| 8.917312085247e-03 > 9972 KSP Residual norm 1.080641619750e+00 % max 1.140387839472e+02 min 2.501648230072e-01 max/min 4.558545944886e+02 > 9973 KSP preconditioned resid norm 1.080641619748e+00 true resid norm 1.594074242369e-01 ||r(i)||/||b|| 8.917312048114e-03 > 9973 KSP Residual norm 1.080641619748e+00 % max 1.141359929290e+02 min 2.500655391090e-01 max/min 4.564243171438e+02 > 9974 KSP preconditioned resid norm 1.080641619747e+00 true resid norm 1.594074179174e-01 ||r(i)||/||b|| 8.917311694599e-03 > 9974 KSP Residual norm 1.080641619747e+00 % max 1.141719411262e+02 min 2.470361384981e-01 max/min 4.621669599450e+02 > 9975 KSP preconditioned resid norm 1.080641619724e+00 true resid norm 1.594073585052e-01 ||r(i)||/||b|| 8.917308371056e-03 > 9975 KSP Residual norm 1.080641619724e+00 % max 1.141769961595e+02 min 2.461590320913e-01 max/min 4.638342748974e+02 > 9976 KSP preconditioned resid norm 1.080641619721e+00 true resid norm 1.594072392992e-01 ||r(i)||/||b|| 8.917301702630e-03 > 9976 KSP Residual norm 1.080641619721e+00 % max 1.150248073926e+02 min 1.817423531140e-01 max/min 6.329003967526e+02 > 9977 KSP preconditioned resid norm 1.080641619621e+00 true resid norm 1.594068288737e-01 ||r(i)||/||b|| 8.917278743269e-03 > 9977 KSP Residual norm 1.080641619621e+00 % max 1.153659432133e+02 min 1.758207322277e-01 max/min 6.561566531518e+02 > 9978 KSP preconditioned resid norm 1.080641619605e+00 true resid norm 1.594066900433e-01 ||r(i)||/||b|| 8.917270977044e-03 > 9978 KSP Residual norm 1.080641619605e+00 % max 1.154409858489e+02 min 1.682246559484e-01 max/min 6.862310711714e+02 > 9979 KSP preconditioned resid norm 1.080641619257e+00 true resid norm 1.594052257365e-01 ||r(i)||/||b|| 8.917189063164e-03 > 9979 KSP Residual norm 1.080641619257e+00 % max 1.154411216580e+02 min 1.254601679658e-01 max/min 9.201416157000e+02 > 9980 KSP preconditioned resid norm 1.080641619092e+00 true resid norm 1.594051529133e-01 ||r(i)||/||b|| 8.917184989407e-03 > 9980 KSP Residual norm 1.080641619092e+00 % max 1.155780886655e+02 min 1.115223976110e-01 max/min 1.036366605645e+03 > 9981 KSP preconditioned resid norm 1.080641619012e+00 true resid norm 1.594042648383e-01 ||r(i)||/||b|| 8.917135310152e-03 > 9981 KSP Residual norm 1.080641619012e+00 % max 1.156949417657e+02 min 9.748351070064e-02 max/min 1.186815502788e+03 > 9982 KSP preconditioned resid norm 1.080641618926e+00 true resid norm 1.594037912594e-01 ||r(i)||/||b|| 8.917108817969e-03 > 9982 KSP Residual norm 1.080641618926e+00 % max 1.157177699729e+02 min 8.161808862755e-02 max/min 1.417795637202e+03 > 9983 KSP preconditioned resid norm 1.080641618914e+00 true resid norm 1.594039926066e-01 ||r(i)||/||b|| 8.917120081407e-03 > 9983 KSP Residual norm 1.080641618914e+00 % max 1.157297318664e+02 min 8.041726690542e-02 max/min 1.439115457660e+03 > 9984 KSP preconditioned resid norm 1.080641618881e+00 true resid norm 1.594038798061e-01 ||r(i)||/||b|| 8.917113771305e-03 > 9984 KSP Residual norm 1.080641618881e+00 % max 1.158244868066e+02 min 8.041279440761e-02 max/min 1.440373856671e+03 
> 9985 KSP preconditioned resid norm 1.080641618880e+00 true resid norm 1.594038723728e-01 ||r(i)||/||b|| 8.917113355483e-03 > 9985 KSP Residual norm 1.080641618880e+00 % max 1.158302224434e+02 min 7.912036884330e-02 max/min 1.463974753110e+03 > 9986 KSP preconditioned resid norm 1.080641618717e+00 true resid norm 1.594038980749e-01 ||r(i)||/||b|| 8.917114793267e-03 > 9986 KSP Residual norm 1.080641618717e+00 % max 1.164682835790e+02 min 7.460686106511e-02 max/min 1.561093469371e+03 > 9987 KSP preconditioned resid norm 1.080641618691e+00 true resid norm 1.594040222541e-01 ||r(i)||/||b|| 8.917121739901e-03 > 9987 KSP Residual norm 1.080641618691e+00 % max 1.166493032152e+02 min 7.459727412837e-02 max/min 1.563720720069e+03 > 9988 KSP preconditioned resid norm 1.080641617240e+00 true resid norm 1.594049907736e-01 ||r(i)||/||b|| 8.917175919248e-03 > 9988 KSP Residual norm 1.080641617240e+00 % max 1.166813081045e+02 min 6.210350927266e-02 max/min 1.878819884271e+03 > 9989 KSP preconditioned resid norm 1.080641612022e+00 true resid norm 1.594055829488e-01 ||r(i)||/||b|| 8.917209045757e-03 > 9989 KSP Residual norm 1.080641612022e+00 % max 1.166820415947e+02 min 4.513453999933e-02 max/min 2.585205069033e+03 > 9990 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477745e-01 ||r(i)||/||b|| 8.917263018470e-03 > 9990 KSP Residual norm 1.080641611645e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9991 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477696e-01 ||r(i)||/||b|| 8.917263018201e-03 > 9991 KSP Residual norm 1.080641611645e+00 % max 3.063465549435e+01 min 3.063465549435e+01 max/min 1.000000000000e+00 > 9992 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477615e-01 ||r(i)||/||b|| 8.917263017744e-03 > 9992 KSP Residual norm 1.080641611645e+00 % max 3.066530412302e+01 min 3.844470687795e+00 max/min 7.976469744031e+00 > 9993 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476642e-01 ||r(i)||/||b|| 8.917263012302e-03 > 9993 KSP Residual norm 1.080641611645e+00 % max 3.714516899070e+01 min 1.336552632161e+00 max/min 2.779177422339e+01 > 9994 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476608e-01 ||r(i)||/||b|| 8.917263012110e-03 > 9994 KSP Residual norm 1.080641611645e+00 % max 4.500812884571e+01 min 1.226798972667e+00 max/min 3.668745234425e+01 > 9995 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476552e-01 ||r(i)||/||b|| 8.917263011801e-03 > 9995 KSP Residual norm 1.080641611645e+00 % max 8.688131738345e+01 min 1.183120593233e+00 max/min 7.343403358914e+01 > 9996 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065473796e-01 ||r(i)||/||b|| 8.917262996379e-03 > 9996 KSP Residual norm 1.080641611645e+00 % max 9.802499878957e+01 min 1.168930461656e+00 max/min 8.385870845619e+01 > 9997 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065472234e-01 ||r(i)||/||b|| 8.917262987642e-03 > 9997 KSP Residual norm 1.080641611645e+00 % max 1.122454823211e+02 min 1.141659208813e+00 max/min 9.831785304632e+01 > 9998 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065471295e-01 ||r(i)||/||b|| 8.917262982389e-03 > 9998 KSP Residual norm 1.080641611645e+00 % max 1.134943126018e+02 min 1.090790203681e+00 max/min 1.040477923425e+02 > 9999 KSP preconditioned resid norm 1.080641611644e+00 true resid norm 1.594064989598e-01 ||r(i)||/||b|| 8.917260287762e-03 > 9999 KSP 
Residual norm 1.080641611644e+00 % max 1.139914489020e+02 min 4.119830336343e-01 max/min 2.766896682528e+02 > 10000 KSP preconditioned resid norm 1.080641611643e+00 true resid norm 1.594064428168e-01 ||r(i)||/||b|| 8.917257147097e-03 > 10000 KSP Residual norm 1.080641611643e+00 % max 1.140011170215e+02 min 2.894564222709e-01 max/min 3.938455264772e+02 > > > Then the number of iterations exceeds 10000 > and the program stops with convergenceReason=-3 (KSP_DIVERGED_ITS). > > Best, > > Meng > ------------------ Original ------------------ > From: "Barry Smith";; > Send time: Friday, Apr 25, 2014 6:27 AM > To: "Oo "; > Cc: "Dave May"; "petsc-users"; > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > On Apr 24, 2014, at 5:24 PM, Oo wrote: > > > > > Configure PETSc again? > > No, the command line when you run the program. > > Barry > > > > > ------------------ Original ------------------ > > From: "Dave May";; > > Send time: Friday, Apr 25, 2014 6:20 AM > > To: "Oo "; > > Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"; > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > On the command line. > > > > > > On 25 April 2014 00:11, Oo wrote: > > > > Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value"? > > > > Thanks, > > > > Meng > > > > > > ------------------ Original ------------------ > > From: "Barry Smith";; > > Date: Apr 25, 2014 > > To: "Matthew Knepley"; > > Cc: "Oo "; "petsc-users"; > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > There are also a great many "bogus" numbers that have no meaning, and many zeros. Most of these are not the eigenvalues of anything. > > > > Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output. > > > > Barry > > > > > > On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > > > > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > > > > > Hi, > > > > > > While analyzing the convergence of the linear solver, > > > I have run into a problem. > > > > > > One is the list of eigenvalues of a linear system which has a converging solution. > > > The other is the list of eigenvalues of a linear system whose solution does not converge (convergenceReason=-3). > > > > > > These are just lists of numbers. They do not tell us anything about the computation. What is the problem you are having? > > > > > > Matt > > > > > > Do you know what kind of method can be used to obtain a converging solution for our non-converging case? > > > > > > Thanks, > > > > > > Meng > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 25 05:22:35 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 25 Apr 2014 06:22:35 -0400 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> <4EC22F71-3552-402B-8886-9180229E1BC2@mcs.anl.gov> Message-ID: On Fri, Apr 25, 2014 at 4:13 AM, Oo wrote: > Hi, > > What can I do to solve this linear system? > I am guessing that you are using the default GMRES/ILU. Run with -ksp_view to confirm. We would have to know something about the linear system to suggest improvements.
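For reference, a minimal sketch of how the stopping reason could be queried directly in the application after KSPSolve(); the helper name ReportKSPOutcome is illustrative and not code from this thread, and the command-line options -ksp_converged_reason and -ksp_view report the same information without any code change:

#include <petscksp.h>

/* Sketch: after KSPSolve() returns, report why the iteration stopped.
   convergenceReason = -3 is KSP_DIVERGED_ITS, i.e. the maximum number of
   iterations was reached before the tolerance was satisfied. */
PetscErrorCode ReportKSPOutcome(KSP ksp)
{
  PetscErrorCode     ierr;
  KSPConvergedReason reason;
  PetscInt           its;
  PetscReal          rnorm;

  PetscFunctionBeginUser;
  ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr); /* why it stopped */
  ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);    /* iterations used */
  ierr = KSPGetResidualNorm(ksp,&rnorm);CHKERRQ(ierr);     /* last residual norm tracked by the KSP */
  ierr = PetscPrintf(PETSC_COMM_WORLD,"converged reason %d after %D iterations, residual norm %g\n",
                     (int)reason,its,(double)rnorm);CHKERRQ(ierr);
  ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /* same info as -ksp_view */
  PetscFunctionReturn(0);
}

Called right after KSPSolve(), this would print reason -3 for a run like the one above, once the iteration limit is hit, together with the KSP and PC types actually used.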
Matt > At this moment, > I use the common line "/Users/wumeng/MyWork/BuildMSplineTools/bin/msplinePDE_PFEM_2 > -ksp_gmres_restart 200 -ksp_max_it 200 -ksp_monitor_true_residual > -ksp_monitor_singular_value" > > The following are the output: > ============================================ > > 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm > 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm > 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 > > 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min > 9.999991695261e-01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm > 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 > > 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min > 9.991339510077e-01 max/min 1.001553409084e+00 > > 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm > 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm > 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 > > 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min > 9.933440157684e-01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm > 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 > > 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min > 8.099278726624e-01 max/min 1.235659679523e+00 > > 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm > 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 > > 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min > 8.086172954187e-01 max/min 1.465017692809e+00 > > ============================================= > > 0 KSP preconditioned resid norm 1.414935390756e+03 true resid norm > 1.787617427503e+01 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 1.414935390756e+03 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 1.384179962960e+03 true resid norm > 1.802958083039e+01 ||r(i)||/||b|| 1.008581621157e+00 > > 1 KSP Residual norm 1.384179962960e+03 % max 1.895999321723e+01 min > 1.895999321723e+01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 1.382373771674e+03 true resid norm > 1.813982830244e+01 ||r(i)||/||b|| 1.014748906749e+00 > > 2 KSP Residual norm 1.382373771674e+03 % max 3.551645921348e+01 min > 6.051184451182e+00 max/min 5.869340044086e+00 > > 3 KSP preconditioned resid norm 1.332723893134e+03 true resid norm > 1.774590608681e+01 ||r(i)||/||b|| 9.927127479173e-01 > > 3 KSP Residual norm 1.332723893134e+03 % max 5.076435191911e+01 min > 4.941060752900e+00 max/min 1.027397849527e+01 > > 4 KSP preconditioned resid norm 1.093788576095e+03 true resid norm > 1.717433802192e+01 ||r(i)||/||b|| 9.607390125925e-01 > > 4 KSP Residual norm 1.093788576095e+03 % max 6.610818819562e+01 min > 1.960683367943e+00 max/min 3.371691180560e+01 > > 5 KSP preconditioned resid norm 1.077330470806e+03 true resid norm > 1.750861653688e+01 ||r(i)||/||b|| 9.794386800837e-01 > > 5 KSP Residual norm 1.077330470806e+03 % max 7.296006568006e+01 min > 1.721275741199e+00 max/min 4.238720382432e+01 > > 6 KSP 
preconditioned resid norm 1.021951470305e+03 true resid norm > 1.587225289158e+01 ||r(i)||/||b|| 8.878998742895e-01 > > 6 KSP Residual norm 1.021951470305e+03 % max 7.313469782749e+01 min > 1.113708844536e+00 max/min 6.566769958438e+01 > > 7 KSP preconditioned resid norm 8.968401331418e+02 true resid norm > 1.532116106999e+01 ||r(i)||/||b|| 8.570715878170e-01 > > 7 KSP Residual norm 8.968401331418e+02 % max 7.385099946342e+01 min > 1.112940159952e+00 max/min 6.635666689088e+01 > > 8 KSP preconditioned resid norm 8.540750681178e+02 true resid norm > 1.665367409583e+01 ||r(i)||/||b|| 9.316128741873e-01 > > 8 KSP Residual norm 8.540750681178e+02 % max 9.333712052307e+01 min > 9.984044777583e-01 max/min 9.348627996205e+01 > > 9 KSP preconditioned resid norm 8.540461721597e+02 true resid norm > 1.669107313551e+01 ||r(i)||/||b|| 9.337049907166e-01 > > 9 KSP Residual norm 8.540461721597e+02 % max 9.673114572055e+01 min > 9.309725017804e-01 max/min 1.039033328434e+02 > > 10 KSP preconditioned resid norm 8.540453927813e+02 true resid norm > 1.668902252063e+01 ||r(i)||/||b|| 9.335902785387e-01 > > 10 KSP Residual norm 8.540453927813e+02 % max 9.685490817256e+01 min > 8.452348922200e-01 max/min 1.145893396783e+02 > > 11 KSP preconditioned resid norm 7.927564518868e+02 true resid norm > 1.671721115974e+01 ||r(i)||/||b|| 9.351671617505e-01 > > 11 KSP Residual norm 7.927564518868e+02 % max 1.076430910935e+02 min > 8.433341413962e-01 max/min 1.276399066629e+02 > > 12 KSP preconditioned resid norm 4.831253651357e+02 true resid norm > 2.683431434549e+01 ||r(i)||/||b|| 1.501121768709e+00 > > 12 KSP Residual norm 4.831253651357e+02 % max 1.079017603981e+02 min > 8.354112427301e-01 max/min 1.291600530124e+02 > > 13 KSP preconditioned resid norm 4.093462051807e+02 true resid norm > 1.923302214627e+01 ||r(i)||/||b|| 1.075902586894e+00 > > 13 KSP Residual norm 4.093462051807e+02 % max 1.088474472678e+02 min > 6.034301796734e-01 max/min 1.803811790233e+02 > > 14 KSP preconditioned resid norm 3.809274390266e+02 true resid norm > 2.118308925475e+01 ||r(i)||/||b|| 1.184990083943e+00 > > 14 KSP Residual norm 3.809274390266e+02 % max 1.104675729761e+02 min > 6.010582938812e-01 max/min 1.837884513045e+02 > > 15 KSP preconditioned resid norm 2.408316705377e+02 true resid norm > 1.238065951026e+01 ||r(i)||/||b|| 6.925788101960e-01 > > 15 KSP Residual norm 2.408316705377e+02 % max 1.215487322437e+02 min > 5.131416858605e-01 max/min 2.368716781211e+02 > > 16 KSP preconditioned resid norm 1.979802937570e+02 true resid norm > 9.481184679187e+00 ||r(i)||/||b|| 5.303810834083e-01 > > 16 KSP Residual norm 1.979802937570e+02 % max 1.246827043503e+02 min > 4.780929228356e-01 max/min 2.607917799970e+02 > > 17 KSP preconditioned resid norm 1.853329360151e+02 true resid norm > 9.049630342765e+00 ||r(i)||/||b|| 5.062397694011e-01 > > 17 KSP Residual norm 1.853329360151e+02 % max 1.252018074901e+02 min > 4.612413672912e-01 max/min 2.714453133841e+02 > > 18 KSP preconditioned resid norm 1.853299943955e+02 true resid norm > 9.047741838474e+00 ||r(i)||/||b|| 5.061341257515e-01 > > 18 KSP Residual norm 1.853299943955e+02 % max 1.256988083109e+02 min > 4.556318323953e-01 max/min 2.758780211867e+02 > > 19 KSP preconditioned resid norm 1.730151601343e+02 true resid norm > 9.854026387827e+00 ||r(i)||/||b|| 5.512379906472e-01 > > 19 KSP Residual norm 1.730151601343e+02 % max 1.279580192729e+02 min > 3.287110716087e-01 max/min 3.892720091436e+02 > > 20 KSP preconditioned resid norm 1.429145143492e+02 true resid norm > 8.228490997826e+00 
||r(i)||/||b|| 4.603049215804e-01 > > 20 KSP Residual norm 1.429145143492e+02 % max 1.321397322884e+02 min > 2.823235578054e-01 max/min 4.680435926621e+02 > > 21 KSP preconditioned resid norm 1.345382626439e+02 true resid norm > 8.176256473861e+00 ||r(i)||/||b|| 4.573829024078e-01 > > 21 KSP Residual norm 1.345382626439e+02 % max 1.332774949926e+02 min > 2.425224324298e-01 max/min 5.495470817166e+02 > > 22 KSP preconditioned resid norm 1.301499631466e+02 true resid norm > 8.487706077838e+00 ||r(i)||/||b|| 4.748055119207e-01 > > 22 KSP Residual norm 1.301499631466e+02 % max 1.334143594976e+02 min > 2.077893364534e-01 max/min 6.420654773469e+02 > > 23 KSP preconditioned resid norm 1.260084835452e+02 true resid norm > 8.288260183397e+00 ||r(i)||/||b|| 4.636484325941e-01 > > 23 KSP Residual norm 1.260084835452e+02 % max 1.342473982017e+02 min > 2.010966692943e-01 max/min 6.675764381023e+02 > > 24 KSP preconditioned resid norm 1.255711443195e+02 true resid norm > 8.117619099395e+00 ||r(i)||/||b|| 4.541027053386e-01 > > 24 KSP Residual norm 1.255711443195e+02 % max 1.342478258493e+02 min > 1.586270065907e-01 max/min 8.463112854147e+02 > > 25 KSP preconditioned resid norm 1.064125166220e+02 true resid norm > 8.683750469293e+00 ||r(i)||/||b|| 4.857723098741e-01 > > 25 KSP Residual norm 1.064125166220e+02 % max 1.343100269972e+02 min > 1.586061159091e-01 max/min 8.468149303534e+02 > > 26 KSP preconditioned resid norm 9.497012777512e+01 true resid norm > 7.776308733811e+00 ||r(i)||/||b|| 4.350096734441e-01 > > 26 KSP Residual norm 9.497012777512e+01 % max 1.346211743671e+02 min > 1.408944545921e-01 max/min 9.554753219835e+02 > > 27 KSP preconditioned resid norm 9.449347291209e+01 true resid norm > 8.027397390699e+00 ||r(i)||/||b|| 4.490556685785e-01 > > 27 KSP Residual norm 9.449347291209e+01 % max 1.353601106604e+02 min > 1.302056396509e-01 max/min 1.039587156311e+03 > > 28 KSP preconditioned resid norm 7.708808620337e+01 true resid norm > 8.253756419882e+00 ||r(i)||/||b|| 4.617182789167e-01 > > 28 KSP Residual norm 7.708808620337e+01 % max 1.354170803310e+02 min > 1.300840147004e-01 max/min 1.040997086712e+03 > > 29 KSP preconditioned resid norm 6.883976717639e+01 true resid norm > 7.200274893950e+00 ||r(i)||/||b|| 4.027861209659e-01 > > 29 KSP Residual norm 6.883976717639e+01 % max 1.359172085675e+02 min > 1.060952954746e-01 max/min 1.281086102446e+03 > > 30 KSP preconditioned resid norm 6.671786230822e+01 true resid norm > 6.613746362850e+00 ||r(i)||/||b|| 3.699754914612e-01 > > 30 KSP Residual norm 6.671786230822e+01 % max 1.361448012233e+02 min > 8.135160167393e-02 max/min 1.673535596373e+03 > > 31 KSP preconditioned resid norm 5.753308015718e+01 true resid norm > 7.287752888130e+00 ||r(i)||/||b|| 4.076796732906e-01 > > 31 KSP Residual norm 5.753308015718e+01 % max 1.363247397022e+02 min > 6.837262871767e-02 max/min 1.993849618759e+03 > > 32 KSP preconditioned resid norm 5.188554631220e+01 true resid norm > 7.000101427370e+00 ||r(i)||/||b|| 3.915883409767e-01 > > 32 KSP Residual norm 5.188554631220e+01 % max 1.365417051351e+02 min > 6.421828602757e-02 max/min 2.126212229901e+03 > > 33 KSP preconditioned resid norm 4.949709579590e+01 true resid norm > 6.374314702607e+00 ||r(i)||/||b|| 3.565815931606e-01 > > 33 KSP Residual norm 4.949709579590e+01 % max 1.366317667622e+02 min > 6.421330753944e-02 max/min 2.127779614503e+03 > > 34 KSP preconditioned resid norm 4.403827792460e+01 true resid norm > 6.147327125810e+00 ||r(i)||/||b|| 3.438838216294e-01 > > 34 KSP Residual norm 4.403827792460e+01 % 
max 1.370253126927e+02 min > 6.368216512224e-02 max/min 2.151706249775e+03 > > 35 KSP preconditioned resid norm 4.140066382940e+01 true resid norm > 5.886089041852e+00 ||r(i)||/||b|| 3.292700636777e-01 > > 35 KSP Residual norm 4.140066382940e+01 % max 1.390943209074e+02 min > 6.114512372501e-02 max/min 2.274822789352e+03 > > 36 KSP preconditioned resid norm 3.745028333544e+01 true resid norm > 5.122854489971e+00 ||r(i)||/||b|| 2.865744320432e-01 > > 36 KSP Residual norm 3.745028333544e+01 % max 1.396462040364e+02 min > 5.993486381149e-02 max/min 2.329966152515e+03 > > 37 KSP preconditioned resid norm 3.492028700266e+01 true resid norm > 4.982433448736e+00 ||r(i)||/||b|| 2.787192254942e-01 > > 37 KSP Residual norm 3.492028700266e+01 % max 1.418761690073e+02 min > 5.972847536278e-02 max/min 2.375352261139e+03 > > 38 KSP preconditioned resid norm 3.068024157121e+01 true resid norm > 4.394664243655e+00 ||r(i)||/||b|| 2.458391922143e-01 > > 38 KSP Residual norm 3.068024157121e+01 % max 1.419282962644e+02 min > 5.955136512602e-02 max/min 2.383292070032e+03 > > 39 KSP preconditioned resid norm 2.614836504484e+01 true resid norm > 3.671741082517e+00 ||r(i)||/||b|| 2.053985951371e-01 > > 39 KSP Residual norm 2.614836504484e+01 % max 1.424590886621e+02 min > 5.534838135054e-02 max/min 2.573861876102e+03 > > 40 KSP preconditioned resid norm 2.598742703782e+01 true resid norm > 3.707086113835e+00 ||r(i)||/||b|| 2.073758096559e-01 > > 40 KSP Residual norm 2.598742703782e+01 % max 1.428352406023e+02 min > 5.401263258718e-02 max/min 2.644478407376e+03 > > 41 KSP preconditioned resid norm 2.271029765350e+01 true resid norm > 2.912150121100e+00 ||r(i)||/||b|| 1.629067873414e-01 > > 41 KSP Residual norm 2.271029765350e+01 % max 1.446928584608e+02 min > 5.373910075949e-02 max/min 2.692506134562e+03 > > 42 KSP preconditioned resid norm 1.795836259709e+01 true resid norm > 2.316834528159e+00 ||r(i)||/||b|| 1.296046062493e-01 > > 42 KSP Residual norm 1.795836259709e+01 % max 1.449375360478e+02 min > 5.251640030193e-02 max/min 2.759852831011e+03 > > 43 KSP preconditioned resid norm 1.795769958691e+01 true resid norm > 2.311558013062e+00 ||r(i)||/||b|| 1.293094359844e-01 > > 43 KSP Residual norm 1.795769958691e+01 % max 1.449864885595e+02 min > 4.685673226983e-02 max/min 3.094250954688e+03 > > 44 KSP preconditioned resid norm 1.688955122733e+01 true resid norm > 2.370951018341e+00 ||r(i)||/||b|| 1.326319033292e-01 > > 44 KSP Residual norm 1.688955122733e+01 % max 1.450141764818e+02 min > 4.554223489616e-02 max/min 3.184169086398e+03 > > 45 KSP preconditioned resid norm 1.526531678793e+01 true resid norm > 1.947726448658e+00 ||r(i)||/||b|| 1.089565596470e-01 > > 45 KSP Residual norm 1.526531678793e+01 % max 1.460128059490e+02 min > 4.478596884382e-02 max/min 3.260235509433e+03 > > 46 KSP preconditioned resid norm 1.519131456699e+01 true resid norm > 1.914601584750e+00 ||r(i)||/||b|| 1.071035421390e-01 > > 46 KSP Residual norm 1.519131456699e+01 % max 1.467440844081e+02 min > 4.068627190104e-02 max/min 3.606722305867e+03 > > 47 KSP preconditioned resid norm 1.396606833105e+01 true resid norm > 1.916085501806e+00 ||r(i)||/||b|| 1.071865530245e-01 > > 47 KSP Residual norm 1.396606833105e+01 % max 1.472676539618e+02 min > 4.068014642358e-02 max/min 3.620135788804e+03 > > 48 KSP preconditioned resid norm 1.128785630240e+01 true resid norm > 1.818072605558e+00 ||r(i)||/||b|| 1.017036742642e-01 > > 48 KSP Residual norm 1.128785630240e+01 % max 1.476343971960e+02 min > 4.003858518052e-02 max/min 3.687303048557e+03 > > 49 
KSP preconditioned resid norm 9.686331225178e+00 true resid norm > 1.520702063515e+00 ||r(i)||/||b|| 8.506865284031e-02 > > 49 KSP Residual norm 9.686331225178e+00 % max 1.478478861627e+02 min > 3.802148926353e-02 max/min 3.888534852961e+03 > > 50 KSP preconditioned resid norm 9.646313260413e+00 true resid norm > 1.596152870343e+00 ||r(i)||/||b|| 8.928939972200e-02 > > 50 KSP Residual norm 9.646313260413e+00 % max 1.479986700685e+02 min > 3.796210121957e-02 max/min 3.898590049388e+03 > > 51 KSP preconditioned resid norm 9.270552344731e+00 true resid norm > 1.442564661256e+00 ||r(i)||/||b|| 8.069761678656e-02 > > 51 KSP Residual norm 9.270552344731e+00 % max 1.482179613835e+02 min > 3.763123640547e-02 max/min 3.938694965706e+03 > > 52 KSP preconditioned resid norm 8.025547426875e+00 true resid norm > 1.151158202903e+00 ||r(i)||/||b|| 6.439622847661e-02 > > 52 KSP Residual norm 8.025547426875e+00 % max 1.482612345232e+02 min > 3.498901666812e-02 max/min 4.237364997409e+03 > > 53 KSP preconditioned resid norm 7.830903379041e+00 true resid norm > 1.083539895672e+00 ||r(i)||/||b|| 6.061363460667e-02 > > 53 KSP Residual norm 7.830903379041e+00 % max 1.496916293673e+02 min > 3.180425422032e-02 max/min 4.706654283743e+03 > > 54 KSP preconditioned resid norm 7.818451528162e+00 true resid norm > 1.101275786636e+00 ||r(i)||/||b|| 6.160578710481e-02 > > 54 KSP Residual norm 7.818451528162e+00 % max 1.497547864750e+02 min > 2.963830547627e-02 max/min 5.052744550289e+03 > > 55 KSP preconditioned resid norm 6.716190950888e+00 true resid norm > 1.068051260258e+00 ||r(i)||/||b|| 5.974719444025e-02 > > 55 KSP Residual norm 6.716190950888e+00 % max 1.509985715255e+02 min > 2.913583319816e-02 max/min 5.182572624524e+03 > > 56 KSP preconditioned resid norm 6.435651273713e+00 true resid norm > 9.738533937643e-01 ||r(i)||/||b|| 5.447772989798e-02 > > 56 KSP Residual norm 6.435651273713e+00 % max 1.513751779517e+02 min > 2.849133088254e-02 max/min 5.313025866562e+03 > > 57 KSP preconditioned resid norm 6.427111085122e+00 true resid norm > 9.515491605865e-01 ||r(i)||/||b|| 5.323002259581e-02 > > 57 KSP Residual norm 6.427111085122e+00 % max 1.514217018182e+02 min > 2.805206411012e-02 max/min 5.397880926830e+03 > > 58 KSP preconditioned resid norm 6.330738649886e+00 true resid norm > 9.488038114070e-01 ||r(i)||/||b|| 5.307644671670e-02 > > 58 KSP Residual norm 6.330738649886e+00 % max 1.514492565938e+02 min > 2.542367477260e-02 max/min 5.957016754989e+03 > > 59 KSP preconditioned resid norm 5.861133560095e+00 true resid norm > 9.281566562924e-01 ||r(i)||/||b|| 5.192143699275e-02 > > 59 KSP Residual norm 5.861133560095e+00 % max 1.522146406650e+02 min > 2.316380798762e-02 max/min 6.571227008373e+03 > > 60 KSP preconditioned resid norm 5.523892064332e+00 true resid norm > 8.250464923972e-01 ||r(i)||/||b|| 4.615341513814e-02 > > 60 KSP Residual norm 5.523892064332e+00 % max 1.522274643717e+02 min > 2.316038063761e-02 max/min 6.572753131893e+03 > > 61 KSP preconditioned resid norm 5.345610652504e+00 true resid norm > 7.967455833060e-01 ||r(i)||/||b|| 4.457025150056e-02 > > 61 KSP Residual norm 5.345610652504e+00 % max 1.527689509969e+02 min > 2.278107359995e-02 max/min 6.705959239659e+03 > > 62 KSP preconditioned resid norm 4.883527474160e+00 true resid norm > 6.939057385062e-01 ||r(i)||/||b|| 3.881735139915e-02 > > 62 KSP Residual norm 4.883527474160e+00 % max 1.527847370492e+02 min > 2.215885299007e-02 max/min 6.894974984387e+03 > > 63 KSP preconditioned resid norm 3.982941325093e+00 true resid norm > 
6.883730294386e-01 ||r(i)||/||b|| 3.850784954587e-02 > > 63 KSP Residual norm 3.982941325093e+00 % max 1.528029462870e+02 min > 2.036330717837e-02 max/min 7.503837414449e+03 > > 64 KSP preconditioned resid norm 3.768777539791e+00 true resid norm > 6.576940275451e-01 ||r(i)||/||b|| 3.679165449085e-02 > > 64 KSP Residual norm 3.768777539791e+00 % max 1.534842011203e+02 min > 2.015613298342e-02 max/min 7.614764262895e+03 > > 65 KSP preconditioned resid norm 3.754783611569e+00 true resid norm > 6.240730525064e-01 ||r(i)||/||b|| 3.491088433716e-02 > > 65 KSP Residual norm 3.754783611569e+00 % max 1.536539871370e+02 min > 2.002205183069e-02 max/min 7.674237807208e+03 > > 66 KSP preconditioned resid norm 3.242712853326e+00 true resid norm > 5.919043028914e-01 ||r(i)||/||b|| 3.311135222698e-02 > > 66 KSP Residual norm 3.242712853326e+00 % max 1.536763506920e+02 min > 1.853540050760e-02 max/min 8.290964666717e+03 > > 67 KSP preconditioned resid norm 2.639460944665e+00 true resid norm > 5.172526166420e-01 ||r(i)||/||b|| 2.893530845493e-02 > > 67 KSP Residual norm 2.639460944665e+00 % max 1.536900244470e+02 min > 1.819164547837e-02 max/min 8.448384981431e+03 > > 68 KSP preconditioned resid norm 2.332848633723e+00 true resid norm > 4.860086486067e-01 ||r(i)||/||b|| 2.718750897868e-02 > > 68 KSP Residual norm 2.332848633723e+00 % max 1.537025056230e+02 min > 1.794260313013e-02 max/min 8.566343718814e+03 > > 69 KSP preconditioned resid norm 2.210456807969e+00 true resid norm > 5.392952883712e-01 ||r(i)||/||b|| 3.016838391000e-02 > > 69 KSP Residual norm 2.210456807969e+00 % max 1.537288460363e+02 min > 1.753831098488e-02 max/min 8.765316464556e+03 > > 70 KSP preconditioned resid norm 2.193036145802e+00 true resid norm > 5.031209038994e-01 ||r(i)||/||b|| 2.814477505974e-02 > > 70 KSP Residual norm 2.193036145802e+00 % max 1.541461506421e+02 min > 1.700508158735e-02 max/min 9.064711030659e+03 > > 71 KSP preconditioned resid norm 1.938788470922e+00 true resid norm > 3.360317906409e-01 ||r(i)||/||b|| 1.879774640094e-02 > > 71 KSP Residual norm 1.938788470922e+00 % max 1.545641242905e+02 min > 1.655870257931e-02 max/min 9.334313696993e+03 > > 72 KSP preconditioned resid norm 1.450214827437e+00 true resid norm > 2.879267785157e-01 ||r(i)||/||b|| 1.610673369402e-02 > > 72 KSP Residual norm 1.450214827437e+00 % max 1.549625503409e+02 min > 1.648921859604e-02 max/min 9.397810420079e+03 > > 73 KSP preconditioned resid norm 1.110389920050e+00 true resid norm > 2.664130281817e-01 ||r(i)||/||b|| 1.490324630331e-02 > > 73 KSP Residual norm 1.110389920050e+00 % max 1.553700778851e+02 min > 1.637230817799e-02 max/min 9.489809023627e+03 > > 74 KSP preconditioned resid norm 8.588797630056e-01 true resid norm > 2.680334659775e-01 ||r(i)||/||b|| 1.499389421102e-02 > > 74 KSP Residual norm 8.588797630056e-01 % max 1.560105458550e+02 min > 1.627972447765e-02 max/min 9.583119546597e+03 > > 75 KSP preconditioned resid norm 6.037657330005e-01 true resid norm > 2.938298885198e-01 ||r(i)||/||b|| 1.643695591681e-02 > > 75 KSP Residual norm 6.037657330005e-01 % max 1.568417947105e+02 min > 1.617453208191e-02 max/min 9.696836601902e+03 > > 76 KSP preconditioned resid norm 4.778003132962e-01 true resid norm > 3.104440343355e-01 ||r(i)||/||b|| 1.736635756394e-02 > > 76 KSP Residual norm 4.778003132962e-01 % max 1.582399629678e+02 min > 1.614387326366e-02 max/min 9.801858598824e+03 > > 77 KSP preconditioned resid norm 4.523141317161e-01 true resid norm > 3.278599563956e-01 ||r(i)||/||b|| 1.834061087967e-02 > > 77 KSP Residual norm 
4.523141317161e-01 % max 1.610754319747e+02 min > 1.614204788287e-02 max/min 9.978624344533e+03 > > 78 KSP preconditioned resid norm 4.522674526201e-01 true resid norm > 3.287567680693e-01 ||r(i)||/||b|| 1.839077886640e-02 > > 78 KSP Residual norm 4.522674526201e-01 % max 1.611450941232e+02 min > 1.596556934579e-02 max/min 1.009328829014e+04 > > 79 KSP preconditioned resid norm 4.452139421181e-01 true resid norm > 3.198810083892e-01 ||r(i)||/||b|| 1.789426548812e-02 > > 79 KSP Residual norm 4.452139421181e-01 % max 1.619065061486e+02 min > 1.456184518464e-02 max/min 1.111854329555e+04 > > 80 KSP preconditioned resid norm 4.217533807980e-01 true resid norm > 2.940346905843e-01 ||r(i)||/||b|| 1.644841262233e-02 > > 80 KSP Residual norm 4.217533807980e-01 % max 1.620409915505e+02 min > 1.154094228288e-02 max/min 1.404053391644e+04 > > 81 KSP preconditioned resid norm 3.815105809981e-01 true resid norm > 2.630686686818e-01 ||r(i)||/||b|| 1.471616155865e-02 > > 81 KSP Residual norm 3.815105809981e-01 % max 1.643140061303e+02 min > 9.352799654900e-03 max/min 1.756843000953e+04 > > 82 KSP preconditioned resid norm 3.257333222641e-01 true resid norm > 2.350339902943e-01 ||r(i)||/||b|| 1.314789096807e-02 > > 82 KSP Residual norm 3.257333222641e-01 % max 1.661730623934e+02 min > 7.309343891643e-03 max/min 2.273433359502e+04 > > 83 KSP preconditioned resid norm 2.653029571859e-01 true resid norm > 1.993496448136e-01 ||r(i)||/||b|| 1.115169508568e-02 > > 83 KSP Residual norm 2.653029571859e-01 % max 1.672909675648e+02 min > 5.727931297058e-03 max/min 2.920617564857e+04 > > 84 KSP preconditioned resid norm 2.125985080811e-01 true resid norm > 1.651488251908e-01 ||r(i)||/||b|| 9.238488205020e-03 > > 84 KSP Residual norm 2.125985080811e-01 % max 1.677205536916e+02 min > 4.904920732181e-03 max/min 3.419434540321e+04 > > 85 KSP preconditioned resid norm 1.823170233996e-01 true resid norm > 1.538640960567e-01 ||r(i)||/||b|| 8.607216157635e-03 > > 85 KSP Residual norm 1.823170233996e-01 % max 1.684045979154e+02 min > 4.411887233900e-03 max/min 3.817064874674e+04 > > 86 KSP preconditioned resid norm 1.568093845918e-01 true resid norm > 1.468517035130e-01 ||r(i)||/||b|| 8.214940246925e-03 > > 86 KSP Residual norm 1.568093845918e-01 % max 1.707938893954e+02 min > 3.835536173319e-03 max/min 4.452933870980e+04 > > 87 KSP preconditioned resid norm 1.416153038224e-01 true resid norm > 1.326462010194e-01 ||r(i)||/||b|| 7.420279024951e-03 > > 87 KSP Residual norm 1.416153038224e-01 % max 1.708004671154e+02 min > 3.204409749322e-03 max/min 5.330169375233e+04 > > 88 KSP preconditioned resid norm 1.298764107063e-01 true resid norm > 1.273603812747e-01 ||r(i)||/||b|| 7.124588254468e-03 > > 88 KSP Residual norm 1.298764107063e-01 % max 1.708079667291e+02 min > 2.881989285724e-03 max/min 5.926738436372e+04 > > 89 KSP preconditioned resid norm 1.226436579597e-01 true resid norm > 1.269051556519e-01 ||r(i)||/||b|| 7.099122759682e-03 > > 89 KSP Residual norm 1.226436579597e-01 % max 1.708200970069e+02 min > 2.600538747361e-03 max/min 6.568642639154e+04 > > 90 KSP preconditioned resid norm 1.131214284449e-01 true resid norm > 1.236585929815e-01 ||r(i)||/||b|| 6.917508806917e-03 > > 90 KSP Residual norm 1.131214284449e-01 % max 1.708228433345e+02 min > 2.097699611380e-03 max/min 8.143341516003e+04 > > 91 KSP preconditioned resid norm 1.097877127886e-01 true resid norm > 1.208660430826e-01 ||r(i)||/||b|| 6.761292501577e-03 > > 91 KSP Residual norm 1.097877127886e-01 % max 1.734053778533e+02 min > 1.669096618002e-03 max/min 
1.038917555659e+05 > > 92 KSP preconditioned resid norm 1.087819681706e-01 true resid norm > 1.220967287438e-01 ||r(i)||/||b|| 6.830137526368e-03 > > 92 KSP Residual norm 1.087819681706e-01 % max 1.735898650322e+02 min > 1.410331102754e-03 max/min 1.230844761867e+05 > > 93 KSP preconditioned resid norm 1.082792743495e-01 true resid norm > 1.231464976286e-01 ||r(i)||/||b|| 6.888861997758e-03 > > 93 KSP Residual norm 1.082792743495e-01 % max 1.744253420547e+02 min > 1.160377474118e-03 max/min 1.503177594750e+05 > > 94 KSP preconditioned resid norm 1.082786837568e-01 true resid norm > 1.231972007582e-01 ||r(i)||/||b|| 6.891698350150e-03 > > 94 KSP Residual norm 1.082786837568e-01 % max 1.744405973598e+02 min > 9.020124135986e-04 max/min 1.933904619603e+05 > > 95 KSP preconditioned resid norm 1.076120024876e-01 true resid norm > 1.193390416348e-01 ||r(i)||/||b|| 6.675871458778e-03 > > 95 KSP Residual norm 1.076120024876e-01 % max 1.744577996139e+02 min > 6.759449987513e-04 max/min 2.580946673712e+05 > > 96 KSP preconditioned resid norm 1.051338654734e-01 true resid norm > 1.113305708771e-01 ||r(i)||/||b|| 6.227874553259e-03 > > 96 KSP Residual norm 1.051338654734e-01 % max 1.754402425643e+02 min > 5.544787352320e-04 max/min 3.164057184102e+05 > > 97 KSP preconditioned resid norm 9.747169190114e-02 true resid norm > 9.349805869502e-02 ||r(i)||/||b|| 5.230317027375e-03 > > 97 KSP Residual norm 9.747169190114e-02 % max 1.756210909470e+02 min > 4.482270392115e-04 max/min 3.918127992813e+05 > > 98 KSP preconditioned resid norm 8.842962288098e-02 true resid norm > 7.385511932515e-02 ||r(i)||/||b|| 4.131483514810e-03 > > 98 KSP Residual norm 8.842962288098e-02 % max 1.757463096622e+02 min > 3.987817611456e-04 max/min 4.407079931572e+05 > > 99 KSP preconditioned resid norm 7.588962343124e-02 true resid norm > 4.759948399141e-02 ||r(i)||/||b|| 2.662733270502e-03 > > 99 KSP Residual norm 7.588962343124e-02 % max 1.758800802719e+02 min > 3.549508940132e-04 max/min 4.955053874730e+05 > > 100 KSP preconditioned resid norm 5.762080294705e-02 true resid norm > 2.979101328333e-02 ||r(i)||/||b|| 1.666520633833e-03 > > 100 KSP Residual norm 5.762080294705e-02 % max 1.767054165485e+02 min > 3.399974141580e-04 max/min 5.197257661094e+05 > > 101 KSP preconditioned resid norm 4.010101211334e-02 true resid norm > 1.618156300742e-02 ||r(i)||/||b|| 9.052028000211e-04 > > 101 KSP Residual norm 4.010101211334e-02 % max 1.780221758963e+02 min > 3.330380731227e-04 max/min 5.345400128792e+05 > > 102 KSP preconditioned resid norm 2.613215615582e-02 true resid norm > 8.507285387866e-03 ||r(i)||/||b|| 4.759007859835e-04 > > 102 KSP Residual norm 2.613215615582e-02 % max 1.780247903707e+02 min > 3.267877108360e-04 max/min 5.447719864228e+05 > > 103 KSP preconditioned resid norm 1.756896108812e-02 true resid norm > 4.899136161981e-03 ||r(i)||/||b|| 2.740595435358e-04 > > 103 KSP Residual norm 1.756896108812e-02 % max 1.780337300599e+02 min > 3.237486252629e-04 max/min 5.499134704135e+05 > > 104 KSP preconditioned resid norm 1.142112016905e-02 true resid norm > 3.312246267926e-03 ||r(i)||/||b|| 1.852883182367e-04 > > 104 KSP Residual norm 1.142112016905e-02 % max 1.805569604942e+02 min > 3.232308817378e-04 max/min 5.586005876774e+05 > > 105 KSP preconditioned resid norm 7.345608733466e-03 true resid norm > 1.880672268508e-03 ||r(i)||/||b|| 1.052055232609e-04 > > 105 KSP Residual norm 7.345608733466e-03 % max 1.817568861663e+02 min > 3.229958821968e-04 max/min 5.627219917794e+05 > > 106 KSP preconditioned resid norm 
4.810582452316e-03 true resid norm > 1.418216565286e-03 ||r(i)||/||b|| 7.933557502107e-05 > > 106 KSP Residual norm 4.810582452316e-03 % max 1.845797149153e+02 min > 3.220343456392e-04 max/min 5.731677922395e+05 > > 107 KSP preconditioned resid norm 3.114068270187e-03 true resid norm > 1.177898700463e-03 ||r(i)||/||b|| 6.589210209864e-05 > > 107 KSP Residual norm 3.114068270187e-03 % max 1.850561815508e+02 min > 3.216059660171e-04 max/min 5.754127755856e+05 > > 108 KSP preconditioned resid norm 2.293796939903e-03 true resid norm > 1.090384984298e-03 ||r(i)||/||b|| 6.099655147246e-05 > > 108 KSP Residual norm 2.293796939903e-03 % max 1.887275970943e+02 min > 3.204019028106e-04 max/min 5.890339459248e+05 > > 109 KSP preconditioned resid norm 1.897140553486e-03 true resid norm > 1.085409353133e-03 ||r(i)||/||b|| 6.071821276936e-05 > > 109 KSP Residual norm 1.897140553486e-03 % max 1.900700995337e+02 min > 3.201092729524e-04 max/min 5.937663029274e+05 > > 110 KSP preconditioned resid norm 1.638607873536e-03 true resid norm > 1.080912735078e-03 ||r(i)||/||b|| 6.046667024205e-05 > > 110 KSP Residual norm 1.638607873536e-03 % max 1.941847763082e+02 min > 3.201003961856e-04 max/min 6.066371008040e+05 > > 111 KSP preconditioned resid norm 1.445734513987e-03 true resid norm > 1.072337220355e-03 ||r(i)||/||b|| 5.998695268109e-05 > > 111 KSP Residual norm 1.445734513987e-03 % max 1.968706347099e+02 min > 3.200841174449e-04 max/min 6.150590547303e+05 > > 112 KSP preconditioned resid norm 1.309787940098e-03 true resid norm > 1.071880522659e-03 ||r(i)||/||b|| 5.996140483796e-05 > > 112 KSP Residual norm 1.309787940098e-03 % max 2.005995546657e+02 min > 3.200727858161e-04 max/min 6.267310547950e+05 > > 113 KSP preconditioned resid norm 1.203600185791e-03 true resid norm > 1.068704252153e-03 ||r(i)||/||b|| 5.978372305568e-05 > > 113 KSP Residual norm 1.203600185791e-03 % max 2.044005133646e+02 min > 3.200669375682e-04 max/min 6.386180182109e+05 > > 114 KSP preconditioned resid norm 1.120060134211e-03 true resid norm > 1.067737533794e-03 ||r(i)||/||b|| 5.972964446230e-05 > > 114 KSP Residual norm 1.120060134211e-03 % max 2.080860182134e+02 min > 3.200663739818e-04 max/min 6.501339569812e+05 > > 115 KSP preconditioned resid norm 1.051559355475e-03 true resid norm > 1.066440021011e-03 ||r(i)||/||b|| 5.965706110284e-05 > > 115 KSP Residual norm 1.051559355475e-03 % max 2.120208487134e+02 min > 3.200660278464e-04 max/min 6.624284687130e+05 > > 116 KSP preconditioned resid norm 9.943175666627e-04 true resid norm > 1.065634036988e-03 ||r(i)||/||b|| 5.961197404955e-05 > > 116 KSP Residual norm 9.943175666627e-04 % max 2.158491989548e+02 min > 3.200660249513e-04 max/min 6.743896012946e+05 > > 117 KSP preconditioned resid norm 9.454894909455e-04 true resid norm > 1.064909725825e-03 ||r(i)||/||b|| 5.957145580713e-05 > > 117 KSP Residual norm 9.454894909455e-04 % max 2.197604049280e+02 min > 3.200659829252e-04 max/min 6.866096887885e+05 > > 118 KSP preconditioned resid norm 9.032188050175e-04 true resid norm > 1.064337854187e-03 ||r(i)||/||b|| 5.953946508976e-05 > > 118 KSP Residual norm 9.032188050175e-04 % max 2.236450290076e+02 min > 3.200659636119e-04 max/min 6.987466786029e+05 > > 119 KSP preconditioned resid norm 8.661523005930e-04 true resid norm > 1.063851383334e-03 ||r(i)||/||b|| 5.951225172494e-05 > > 119 KSP Residual norm 8.661523005930e-04 % max 2.275386631435e+02 min > 3.200659439058e-04 max/min 7.109118213789e+05 > > 120 KSP preconditioned resid norm 8.333044554060e-04 true resid norm > 
1.063440179004e-03 ||r(i)||/||b|| 5.948924879800e-05 > > 120 KSP Residual norm 8.333044554060e-04 % max 2.314181044639e+02 min > 3.200659279880e-04 max/min 7.230326136825e+05 > > 121 KSP preconditioned resid norm 8.039309218182e-04 true resid norm > 1.063085844171e-03 ||r(i)||/||b|| 5.946942717246e-05 > > 121 KSP Residual norm 8.039309218182e-04 % max 2.352840091878e+02 min > 3.200659140872e-04 max/min 7.351111094065e+05 > > 122 KSP preconditioned resid norm 7.774595735272e-04 true resid norm > 1.062777951335e-03 ||r(i)||/||b|| 5.945220352987e-05 > > 122 KSP Residual norm 7.774595735272e-04 % max 2.391301402333e+02 min > 3.200659020264e-04 max/min 7.471278218621e+05 > > 123 KSP preconditioned resid norm 7.534419441080e-04 true resid norm > 1.062507774643e-03 ||r(i)||/||b|| 5.943708974281e-05 > > 123 KSP Residual norm 7.534419441080e-04 % max 2.429535997478e+02 min > 3.200658914252e-04 max/min 7.590736978126e+05 > > 124 KSP preconditioned resid norm 7.315209774625e-04 true resid norm > 1.062268823392e-03 ||r(i)||/||b|| 5.942372271877e-05 > > 124 KSP Residual norm 7.315209774625e-04 % max 2.467514538222e+02 min > 3.200658820413e-04 max/min 7.709395710924e+05 > > 125 KSP preconditioned resid norm 7.114083600074e-04 true resid norm > 1.062055971251e-03 ||r(i)||/||b|| 5.941181568889e-05 > > 125 KSP Residual norm 7.114083600074e-04 % max 2.505215705167e+02 min > 3.200658736748e-04 max/min 7.827187811071e+05 > > 126 KSP preconditioned resid norm 6.928683961266e-04 true resid norm > 1.061865165985e-03 ||r(i)||/||b|| 5.940114196966e-05 > > 126 KSP Residual norm 6.928683961266e-04 % max 2.542622648377e+02 min > 3.200658661692e-04 max/min 7.944060636045e+05 > > 127 KSP preconditioned resid norm 6.757062709768e-04 true resid norm > 1.061693150669e-03 ||r(i)||/||b|| 5.939151936730e-05 > > 127 KSP Residual norm 6.757062709768e-04 % max 2.579722822206e+02 min > 3.200658593982e-04 max/min 8.059974990947e+05 > > 128 KSP preconditioned resid norm 6.597593638309e-04 true resid norm > 1.061537280201e-03 ||r(i)||/||b|| 5.938279991397e-05 > > 128 KSP Residual norm 6.597593638309e-04 % max 2.616507118184e+02 min > 3.200658532588e-04 max/min 8.174902419435e+05 > > 129 KSP preconditioned resid norm 6.448907155810e-04 true resid norm > 1.061395383620e-03 ||r(i)||/||b|| 5.937486216514e-05 > > 129 KSP Residual norm 6.448907155810e-04 % max 2.652969302909e+02 min > 3.200658476664e-04 max/min 8.288823447585e+05 > > 130 KSP preconditioned resid norm 6.309840465804e-04 true resid norm > 1.061265662452e-03 ||r(i)||/||b|| 5.936760551361e-05 > > 130 KSP Residual norm 6.309840465804e-04 % max 2.689105505574e+02 min > 3.200658425513e-04 max/min 8.401725982813e+05 > > 131 KSP preconditioned resid norm 6.179399078869e-04 true resid norm > 1.061146614178e-03 ||r(i)||/||b|| 5.936094590780e-05 > > 131 KSP Residual norm 6.179399078869e-04 % max 2.724913796166e+02 min > 3.200658378548e-04 max/min 8.513603996069e+05 > > 132 KSP preconditioned resid norm 6.056726732840e-04 true resid norm > 1.061036973709e-03 ||r(i)||/||b|| 5.935481257818e-05 > > 132 KSP Residual norm 6.056726732840e-04 % max 2.760393830832e+02 min > 3.200658335276e-04 max/min 8.624456413884e+05 > > 133 KSP preconditioned resid norm 5.941081632891e-04 true resid norm > 1.060935668299e-03 ||r(i)||/||b|| 5.934914551493e-05 > > 133 KSP Residual norm 5.941081632891e-04 % max 2.795546556158e+02 min > 3.200658295276e-04 max/min 8.734286194450e+05 > > 134 KSP preconditioned resid norm 5.831817499918e-04 true resid norm > 1.060841782314e-03 ||r(i)||/||b|| 5.934389349718e-05 > 
> 134 KSP Residual norm 5.831817499918e-04 % max 2.830373962684e+02 min > 3.200658258193e-04 max/min 8.843099557533e+05 > > 135 KSP preconditioned resid norm 5.728368317981e-04 true resid norm > 1.060754529477e-03 ||r(i)||/||b|| 5.933901254021e-05 > > 135 KSP Residual norm 5.728368317981e-04 % max 2.864878879810e+02 min > 3.200658223718e-04 max/min 8.950905343719e+05 > > 136 KSP preconditioned resid norm 5.630235956754e-04 true resid norm > 1.060673230868e-03 ||r(i)||/||b|| 5.933446466506e-05 > > 136 KSP Residual norm 5.630235956754e-04 % max 2.899064805338e+02 min > 3.200658191584e-04 max/min 9.057714481857e+05 > > 137 KSP preconditioned resid norm 5.536980049705e-04 true resid norm > 1.060597297101e-03 ||r(i)||/||b|| 5.933021690118e-05 > > 137 KSP Residual norm 5.536980049705e-04 % max 2.932935763968e+02 min > 3.200658161562e-04 max/min 9.163539546933e+05 > > 138 KSP preconditioned resid norm 5.448209657758e-04 true resid norm > 1.060526214124e-03 ||r(i)||/||b|| 5.932624049239e-05 > > 138 KSP Residual norm 5.448209657758e-04 % max 2.966496189942e+02 min > 3.200658133448e-04 max/min 9.268394393456e+05 > > 139 KSP preconditioned resid norm 5.363576357737e-04 true resid norm > 1.060459531517e-03 ||r(i)||/||b|| 5.932251024194e-05 > > 139 KSP Residual norm 5.363576357737e-04 % max 2.999750829834e+02 min > 3.200658107070e-04 max/min 9.372293851716e+05 > > 140 KSP preconditioned resid norm 5.282768476492e-04 true resid norm > 1.060396852955e-03 ||r(i)||/||b|| 5.931900397932e-05 > > 140 KSP Residual norm 5.282768476492e-04 % max 3.032704662136e+02 min > 3.200658082268e-04 max/min 9.475253476584e+05 > > 141 KSP preconditioned resid norm 5.205506252810e-04 true resid norm > 1.060337828321e-03 ||r(i)||/||b|| 5.931570211878e-05 > > 141 KSP Residual norm 5.205506252810e-04 % max 3.065362830844e+02 min > 3.200658058907e-04 max/min 9.577289339963e+05 > > 142 KSP preconditioned resid norm 5.131537755696e-04 true resid norm > 1.060282147141e-03 ||r(i)||/||b|| 5.931258729233e-05 > > 142 KSP Residual norm 5.131537755696e-04 % max 3.097730590695e+02 min > 3.200658036863e-04 max/min 9.678417859756e+05 > > 143 KSP preconditioned resid norm 5.060635423123e-04 true resid norm > 1.060229533174e-03 ||r(i)||/||b|| 5.930964404698e-05 > > 143 KSP Residual norm 5.060635423123e-04 % max 3.129813262144e+02 min > 3.200658016030e-04 max/min 9.778655659143e+05 > > 144 KSP preconditioned resid norm 4.992593112791e-04 true resid norm > 1.060179739774e-03 ||r(i)||/||b|| 5.930685858524e-05 > > 144 KSP Residual norm 4.992593112791e-04 % max 3.161616194433e+02 min > 3.200657996311e-04 max/min 9.878019451242e+05 > > 145 KSP preconditioned resid norm 4.927223577718e-04 true resid norm > 1.060132546053e-03 ||r(i)||/||b|| 5.930421855048e-05 > > 145 KSP Residual norm 4.927223577718e-04 % max 3.193144735429e+02 min > 3.200657977617e-04 max/min 9.976525944849e+05 > > 146 KSP preconditioned resid norm 4.864356296191e-04 true resid norm > 1.060087753599e-03 ||r(i)||/||b|| 5.930171284355e-05 > > 146 KSP Residual norm 4.864356296191e-04 % max 3.224404207086e+02 min > 3.200657959872e-04 max/min 1.007419176779e+06 > > 147 KSP preconditioned resid norm 4.803835598739e-04 true resid norm > 1.060045183705e-03 ||r(i)||/||b|| 5.929933146747e-05 > > 147 KSP Residual norm 4.803835598739e-04 % max 3.255399885620e+02 min > 3.200657943002e-04 max/min 1.017103340498e+06 > > 148 KSP preconditioned resid norm 4.745519045267e-04 true resid norm > 1.060004674955e-03 ||r(i)||/||b|| 5.929706539253e-05 > > 148 KSP Residual norm 4.745519045267e-04 % max 
3.286136985587e+02 min > 3.200657926948e-04 max/min 1.026706714866e+06 > > 149 KSP preconditioned resid norm 4.689276013781e-04 true resid norm > 1.059966081181e-03 ||r(i)||/||b|| 5.929490644211e-05 > > 149 KSP Residual norm 4.689276013781e-04 % max 3.316620647237e+02 min > 3.200657911651e-04 max/min 1.036230905891e+06 > > 150 KSP preconditioned resid norm 4.634986468865e-04 true resid norm > 1.059929269736e-03 ||r(i)||/||b|| 5.929284719586e-05 > > 150 KSP Residual norm 4.634986468865e-04 % max 3.346855926588e+02 min > 3.200657897058e-04 max/min 1.045677493263e+06 > > 151 KSP preconditioned resid norm 4.582539883466e-04 true resid norm > 1.059894119959e-03 ||r(i)||/||b|| 5.929088090393e-05 > > 151 KSP Residual norm 4.582539883466e-04 % max 3.376847787762e+02 min > 3.200657883122e-04 max/min 1.055048027960e+06 > > 152 KSP preconditioned resid norm 4.531834291922e-04 true resid norm > 1.059860521790e-03 ||r(i)||/||b|| 5.928900140955e-05 > > 152 KSP Residual norm 4.531834291922e-04 % max 3.406601097208e+02 min > 3.200657869798e-04 max/min 1.064344030443e+06 > > 153 KSP preconditioned resid norm 4.482775455769e-04 true resid norm > 1.059828374748e-03 ||r(i)||/||b|| 5.928720309181e-05 > > 153 KSP Residual norm 4.482775455769e-04 % max 3.436120619489e+02 min > 3.200657857049e-04 max/min 1.073566989337e+06 > > 154 KSP preconditioned resid norm 4.435276126791e-04 true resid norm > 1.059797586777e-03 ||r(i)||/||b|| 5.928548080094e-05 > > 154 KSP Residual norm 4.435276126791e-04 % max 3.465411014367e+02 min > 3.200657844837e-04 max/min 1.082718360526e+06 > > 155 KSP preconditioned resid norm 4.389255394186e-04 true resid norm > 1.059768073502e-03 ||r(i)||/||b|| 5.928382981713e-05 > > 155 KSP Residual norm 4.389255394186e-04 % max 3.494476834965e+02 min > 3.200657833130e-04 max/min 1.091799566575e+06 > > 156 KSP preconditioned resid norm 4.344638104739e-04 true resid norm > 1.059739757336e-03 ||r(i)||/||b|| 5.928224580001e-05 > > 156 KSP Residual norm 4.344638104739e-04 % max 3.523322526810e+02 min > 3.200657821896e-04 max/min 1.100811996430e+06 > > 157 KSP preconditioned resid norm 4.301354346542e-04 true resid norm > 1.059712566921e-03 ||r(i)||/||b|| 5.928072475785e-05 > > 157 KSP Residual norm 4.301354346542e-04 % max 3.551952427605e+02 min > 3.200657811108e-04 max/min 1.109757005350e+06 > > 158 KSP preconditioned resid norm 4.259338988194e-04 true resid norm > 1.059686436414e-03 ||r(i)||/||b|| 5.927926300733e-05 > > 158 KSP Residual norm 4.259338988194e-04 % max 3.580370767604e+02 min > 3.200657800738e-04 max/min 1.118635915023e+06 > > 159 KSP preconditioned resid norm 4.218531266563e-04 true resid norm > 1.059661305045e-03 ||r(i)||/||b|| 5.927785714894e-05 > > 159 KSP Residual norm 4.218531266563e-04 % max 3.608581670458e+02 min > 3.200657790764e-04 max/min 1.127450013829e+06 > > 160 KSP preconditioned resid norm 4.178874417186e-04 true resid norm > 1.059637116561e-03 ||r(i)||/||b|| 5.927650403596e-05 > > 160 KSP Residual norm 4.178874417186e-04 % max 3.636589154466e+02 min > 3.200657781164e-04 max/min 1.136200557232e+06 > > 161 KSP preconditioned resid norm 4.140315342180e-04 true resid norm > 1.059613818894e-03 ||r(i)||/||b|| 5.927520075559e-05 > > 161 KSP Residual norm 4.140315342180e-04 % max 3.664397134134e+02 min > 3.200657771916e-04 max/min 1.144888768267e+06 > > 162 KSP preconditioned resid norm 4.102804311252e-04 true resid norm > 1.059591363713e-03 ||r(i)||/||b|| 5.927394460419e-05 > > 162 KSP Residual norm 4.102804311252e-04 % max 3.692009421979e+02 min > 3.200657763002e-04 max/min 
1.153515838106e+06 > > 163 KSP preconditioned resid norm 4.066294691985e-04 true resid norm > 1.059569706138e-03 ||r(i)||/||b|| 5.927273307117e-05 > > 163 KSP Residual norm 4.066294691985e-04 % max 3.719429730532e+02 min > 3.200657754405e-04 max/min 1.162082926678e+06 > > 164 KSP preconditioned resid norm 4.030742706055e-04 true resid norm > 1.059548804412e-03 ||r(i)||/||b|| 5.927156382068e-05 > > 164 KSP Residual norm 4.030742706055e-04 % max 3.746661674483e+02 min > 3.200657746106e-04 max/min 1.170591163345e+06 > > 165 KSP preconditioned resid norm 3.996107208498e-04 true resid norm > 1.059528619651e-03 ||r(i)||/||b|| 5.927043467745e-05 > > 165 KSP Residual norm 3.996107208498e-04 % max 3.773708772938e+02 min > 3.200657738091e-04 max/min 1.179041647605e+06 > > 166 KSP preconditioned resid norm 3.962349487489e-04 true resid norm > 1.059509115572e-03 ||r(i)||/||b|| 5.926934361186e-05 > > 166 KSP Residual norm 3.962349487489e-04 % max 3.800574451754e+02 min > 3.200657730346e-04 max/min 1.187435449820e+06 > > 167 KSP preconditioned resid norm 3.929433082418e-04 true resid norm > 1.059490258328e-03 ||r(i)||/||b|| 5.926828873043e-05 > > 167 KSP Residual norm 3.929433082418e-04 % max 3.827262045917e+02 min > 3.200657722858e-04 max/min 1.195773611962e+06 > > 168 KSP preconditioned resid norm 3.897323618319e-04 true resid norm > 1.059472016256e-03 ||r(i)||/||b|| 5.926726826196e-05 > > 168 KSP Residual norm 3.897323618319e-04 % max 3.853774801959e+02 min > 3.200657715611e-04 max/min 1.204057148368e+06 > > 169 KSP preconditioned resid norm 3.865988654946e-04 true resid norm > 1.059454359751e-03 ||r(i)||/||b|| 5.926628055033e-05 > > 169 KSP Residual norm 3.865988654946e-04 % max 3.880115880375e+02 min > 3.200657708600e-04 max/min 1.212287046487e+06 > > 170 KSP preconditioned resid norm 3.835397548978e-04 true resid norm > 1.059437261066e-03 ||r(i)||/||b|| 5.926532404338e-05 > > 170 KSP Residual norm 3.835397548978e-04 % max 3.906288358040e+02 min > 3.200657701808e-04 max/min 1.220464267652e+06 > > 171 KSP preconditioned resid norm 3.805521328045e-04 true resid norm > 1.059420694153e-03 ||r(i)||/||b|| 5.926439728400e-05 > > 171 KSP Residual norm 3.805521328045e-04 % max 3.932295230610e+02 min > 3.200657695227e-04 max/min 1.228589747812e+06 > > 172 KSP preconditioned resid norm 3.776332575372e-04 true resid norm > 1.059404634604e-03 ||r(i)||/||b|| 5.926349890668e-05 > > 172 KSP Residual norm 3.776332575372e-04 % max 3.958139414889e+02 min > 3.200657688847e-04 max/min 1.236664398283e+06 > > 173 KSP preconditioned resid norm 3.747805324013e-04 true resid norm > 1.059389059461e-03 ||r(i)||/||b|| 5.926262762725e-05 > > 173 KSP Residual norm 3.747805324013e-04 % max 3.983823751168e+02 min > 3.200657682658e-04 max/min 1.244689106477e+06 > > 174 KSP preconditioned resid norm 3.719914959745e-04 true resid norm > 1.059373947132e-03 ||r(i)||/||b|| 5.926178223783e-05 > > 174 KSP Residual norm 3.719914959745e-04 % max 4.009351005517e+02 min > 3.200657676654e-04 max/min 1.252664736614e+06 > > 175 KSP preconditioned resid norm 3.692638131792e-04 true resid norm > 1.059359277291e-03 ||r(i)||/||b|| 5.926096160129e-05 > > 175 KSP Residual norm 3.692638131792e-04 % max 4.034723872035e+02 min > 3.200657670824e-04 max/min 1.260592130428e+06 > > 176 KSP preconditioned resid norm 3.665952670646e-04 true resid norm > 1.059345030772e-03 ||r(i)||/||b|| 5.926016464562e-05 > > 176 KSP Residual norm 3.665952670646e-04 % max 4.059944975044e+02 min > 3.200657665165e-04 max/min 1.268472107852e+06 > > 177 KSP preconditioned resid 
norm 3.639837512333e-04 true resid norm > 1.059331189535e-03 ||r(i)||/||b|| 5.925939036157e-05 > > 177 KSP Residual norm 3.639837512333e-04 % max 4.085016871234e+02 min > 3.200657659664e-04 max/min 1.276305467690e+06 > > 178 KSP preconditioned resid norm 3.614272628526e-04 true resid norm > 1.059317736504e-03 ||r(i)||/||b|| 5.925863779385e-05 > > 178 KSP Residual norm 3.614272628526e-04 % max 4.109942051749e+02 min > 3.200657654318e-04 max/min 1.284092988266e+06 > > 179 KSP preconditioned resid norm 3.589238961979e-04 true resid norm > 1.059304655605e-03 ||r(i)||/||b|| 5.925790604337e-05 > > 179 KSP Residual norm 3.589238961979e-04 % max 4.134722944217e+02 min > 3.200657649118e-04 max/min 1.291835428058e+06 > > 180 KSP preconditioned resid norm 3.564718366825e-04 true resid norm > 1.059291931579e-03 ||r(i)||/||b|| 5.925719425653e-05 > > 180 KSP Residual norm 3.564718366825e-04 % max 4.159361914728e+02 min > 3.200657644063e-04 max/min 1.299533526319e+06 > > 181 KSP preconditioned resid norm 3.540693553287e-04 true resid norm > 1.059279550018e-03 ||r(i)||/||b|| 5.925650162730e-05 > > 181 KSP Residual norm 3.540693553287e-04 % max 4.183861269743e+02 min > 3.200657639142e-04 max/min 1.307188003670e+06 > > 182 KSP preconditioned resid norm 3.517148036444e-04 true resid norm > 1.059267497309e-03 ||r(i)||/||b|| 5.925582739414e-05 > > 182 KSP Residual norm 3.517148036444e-04 % max 4.208223257952e+02 min > 3.200657634351e-04 max/min 1.314799562686e+06 > > 183 KSP preconditioned resid norm 3.494066088688e-04 true resid norm > 1.059255760488e-03 ||r(i)||/||b|| 5.925517083190e-05 > > 183 KSP Residual norm 3.494066088688e-04 % max 4.232450072075e+02 min > 3.200657629684e-04 max/min 1.322368888450e+06 > > 184 KSP preconditioned resid norm 3.471432695577e-04 true resid norm > 1.059244327320e-03 ||r(i)||/||b|| 5.925453125615e-05 > > 184 KSP Residual norm 3.471432695577e-04 % max 4.256543850606e+02 min > 3.200657625139e-04 max/min 1.329896649105e+06 > > 185 KSP preconditioned resid norm 3.449233514781e-04 true resid norm > 1.059233186159e-03 ||r(i)||/||b|| 5.925390801536e-05 > > 185 KSP Residual norm 3.449233514781e-04 % max 4.280506679492e+02 min > 3.200657620711e-04 max/min 1.337383496377e+06 > > 186 KSP preconditioned resid norm 3.427454837895e-04 true resid norm > 1.059222325963e-03 ||r(i)||/||b|| 5.925330049183e-05 > > 186 KSP Residual norm 3.427454837895e-04 % max 4.304340593770e+02 min > 3.200657616393e-04 max/min 1.344830066085e+06 > > 187 KSP preconditioned resid norm 3.406083554860e-04 true resid norm > 1.059211736238e-03 ||r(i)||/||b|| 5.925270809861e-05 > > 187 KSP Residual norm 3.406083554860e-04 % max 4.328047579144e+02 min > 3.200657612184e-04 max/min 1.352236978635e+06 > > 188 KSP preconditioned resid norm 3.385107120797e-04 true resid norm > 1.059201407008e-03 ||r(i)||/||b|| 5.925213027752e-05 > > 188 KSP Residual norm 3.385107120797e-04 % max 4.351629573503e+02 min > 3.200657608077e-04 max/min 1.359604839494e+06 > > 189 KSP preconditioned resid norm 3.364513525062e-04 true resid norm > 1.059191328770e-03 ||r(i)||/||b|| 5.925156649705e-05 > > 189 KSP Residual norm 3.364513525062e-04 % max 4.375088468402e+02 min > 3.200657604069e-04 max/min 1.366934239651e+06 > > 190 KSP preconditioned resid norm 3.344291262348e-04 true resid norm > 1.059181492481e-03 ||r(i)||/||b|| 5.925101625132e-05 > > 190 KSP Residual norm 3.344291262348e-04 % max 4.398426110478e+02 min > 3.200657600159e-04 max/min 1.374225756063e+06 > > 191 KSP preconditioned resid norm 3.324429305670e-04 true resid norm > 
1.059171889545e-03 ||r(i)||/||b|| 5.925047905943e-05
> 191 KSP Residual norm 3.324429305670e-04 % max 4.421644302826e+02 min 3.200657596338e-04 max/min 1.381479952084e+06
> 192 KSP preconditioned resid norm 3.304917081101e-04 true resid norm 1.059162511741e-03 ||r(i)||/||b|| 5.924995446149e-05
> 192 KSP Residual norm 3.304917081101e-04 % max 4.444744806328e+02 min 3.200657592613e-04 max/min 1.388697377872e+06
> 193 KSP preconditioned resid norm 3.285744444114e-04 true resid norm 1.059153351271e-03 ||r(i)||/||b|| 5.924944202127e-05
> 193 KSP Residual norm 3.285744444114e-04 % max 4.467729340931e+02 min 3.200657588967e-04 max/min 1.395878570808e+06
> 194 KSP preconditioned resid norm 3.266901657422e-04 true resid norm 1.059144400627e-03 ||r(i)||/||b|| 5.924894131886e-05
> 194 KSP Residual norm 3.266901657422e-04 % max 4.490599586887e+02 min 3.200657585410e-04 max/min 1.403024055856e+06
> 195 KSP preconditioned resid norm 3.248379370193e-04 true resid norm 1.059135652724e-03 ||r(i)||/||b|| 5.924845195782e-05
> 195 KSP Residual norm 3.248379370193e-04 % max 4.513357185943e+02 min 3.200657581935e-04 max/min 1.410134345960e+06
> 196 KSP preconditioned resid norm 3.230168598555e-04 true resid norm 1.059127100716e-03 ||r(i)||/||b|| 5.924797355524e-05
> 196 KSP Residual norm 3.230168598555e-04 % max 4.536003742498e+02 min 3.200657578528e-04 max/min 1.417209942397e+06
> 197 KSP preconditioned resid norm 3.212260707284e-04 true resid norm 1.059118738110e-03 ||r(i)||/||b|| 5.924750574787e-05
> 197 KSP Residual norm 3.212260707284e-04 % max 4.558540824713e+02 min 3.200657575203e-04 max/min 1.424251335110e+06
> 198 KSP preconditioned resid norm 3.194647392600e-04 true resid norm 1.059110558683e-03 ||r(i)||/||b|| 5.924704818760e-05
> 198 KSP Residual norm 3.194647392600e-04 % max 4.580969965587e+02 min 3.200657571950e-04 max/min 1.431259003067e+06
> 199 KSP preconditioned resid norm 3.177320665985e-04 true resid norm 1.059102556481e-03 ||r(i)||/||b|| 5.924660054140e-05
> 199 KSP Residual norm 3.177320665985e-04 % max 4.603292663993e+02 min 3.200657568766e-04 max/min 1.438233414569e+06
> 200 KSP preconditioned resid norm 3.160272838961e-04 true resid norm 1.059094725800e-03 ||r(i)||/||b|| 5.924616249011e-05
> 200 KSP Residual norm 3.160272838961e-04 % max 4.625510385677e+02 min 3.200657565653e-04 max/min 1.445175027567e+06
>
> Thanks,
>
> Meng
>
> ------------------ Original ------------------
> *From:* "Barry Smith";
> *Send time:* Friday, Apr 25, 2014 6:54 AM
> *To:* "Oo ";
> *Cc:* "Dave May"; "petsc-users" <petsc-users at mcs.anl.gov>;
> *Subject:* Re: [petsc-users] Convergence_Eigenvalues_k=3
>
> Run the bad case again with -ksp_gmres_restart 200 -ksp_max_it 200 and send the output
>
> On Apr 24, 2014, at 5:46 PM, Oo wrote:
>
> Thanks,
>
> The following is the output at the beginning.
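
A note on the two options Barry suggests above: -ksp_gmres_restart and -ksp_max_it can equivalently be set from source code. The sketch below is a minimal illustration, not code from this thread; it assumes a KSP object named ksp has already been created and attached to the operators, and the helper name ConfigureGMRESRestart200 is made up:

    #include <petscksp.h>

    /* Roughly equivalent to running with
       -ksp_gmres_restart 200 -ksp_max_it 200 on the command line. */
    static PetscErrorCode ConfigureGMRESRestart200(KSP ksp)
    {
      PetscErrorCode ierr;
      ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);            /* use GMRES */
      ierr = KSPGMRESSetRestart(ksp, 200);CHKERRQ(ierr);         /* restart length 200 */
      ierr = KSPSetTolerances(ksp, PETSC_DEFAULT, PETSC_DEFAULT,
                              PETSC_DEFAULT, 200);CHKERRQ(ierr); /* at most 200 iterations */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);               /* command-line options still override */
      return 0;
    }

The monitor lines quoted below (preconditioned and true residual norms plus max/min singular-value estimates) are the kind of output produced by the -ksp_monitor_true_residual and -ksp_monitor_singular_value options.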
> > > > > > 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm > 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm > 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 > > 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min > 9.999991695261e-01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm > 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 > > 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min > 9.991339510077e-01 max/min 1.001553409084e+00 > > 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm > 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm > 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 > > 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min > 9.933440157684e-01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm > 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 > > 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min > 8.099278726624e-01 max/min 1.235659679523e+00 > > 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm > 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 > > 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min > 8.086172954187e-01 max/min 1.465017692809e+00 > > > > > > When solving the linear system: > > Output: > > ......... > > ........ > > ........ 
> > +00 max/min 7.343521316293e+01 > > 9636 KSP preconditioned resid norm 1.080641720588e+00 true resid norm > 1.594064989678e-01 ||r(i)||/||b|| 8.917260288210e-03 > > 9636 KSP Residual norm 1.080641720588e+00 % max 9.802496207537e+01 min > 1.168945135768e+00 max/min 8.385762434515e+01 > > 9637 KSP preconditioned resid norm 1.080641720588e+00 true resid norm > 1.594064987918e-01 ||r(i)||/||b|| 8.917260278361e-03 > > 9637 KSP Residual norm 1.080641720588e+00 % max 1.122401280488e+02 min > 1.141681830513e+00 max/min 9.831121512938e+01 > > 9638 KSP preconditioned resid norm 1.080641720588e+00 true resid norm > 1.594064986859e-01 ||r(i)||/||b|| 8.917260272440e-03 > > 9638 KSP Residual norm 1.080641720588e+00 % max 1.134941067042e+02 min > 1.090790142559e+00 max/min 1.040476094127e+02 > > 9639 KSP preconditioned resid norm 1.080641720588e+00 true resid norm > 1.594064442988e-01 ||r(i)||/||b|| 8.917257230005e-03 > > 9639 KSP Residual norm 1.080641720588e+00 % max 1.139914662925e+02 min > 4.119649156568e-01 max/min 2.767018791170e+02 > > 9640 KSP preconditioned resid norm 1.080641720586e+00 true resid norm > 1.594063809183e-01 ||r(i)||/||b|| 8.917253684473e-03 > > 9640 KSP Residual norm 1.080641720586e+00 % max 1.140011421526e+02 min > 2.894486589274e-01 max/min 3.938561766878e+02 > > 9641 KSP preconditioned resid norm 1.080641720586e+00 true resid norm > 1.594063839202e-01 ||r(i)||/||b|| 8.917253852403e-03 > > 9641 KSP Residual norm 1.080641720586e+00 % max 1.140392299942e+02 min > 2.880532973299e-01 max/min 3.958962839563e+02 > > 9642 KSP preconditioned resid norm 1.080641720586e+00 true resid norm > 1.594064091676e-01 ||r(i)||/||b|| 8.917255264750e-03 > > 9642 KSP Residual norm 1.080641720586e+00 % max 1.140392728591e+02 min > 2.501717295613e-01 max/min 4.558439639005e+02 > > 9643 KSP preconditioned resid norm 1.080641720583e+00 true resid norm > 1.594064099334e-01 ||r(i)||/||b|| 8.917255307591e-03 > > 9643 KSP Residual norm 1.080641720583e+00 % max 1.141360429432e+02 min > 2.500714638111e-01 max/min 4.564137035220e+02 > > 9644 KSP preconditioned resid norm 1.080641720582e+00 true resid norm > 1.594064169337e-01 ||r(i)||/||b|| 8.917255699186e-03 > > 9644 KSP Residual norm 1.080641720582e+00 % max 1.141719168213e+02 min > 2.470526471293e-01 max/min 4.621359784969e+02 > > 9645 KSP preconditioned resid norm 1.080641720554e+00 true resid norm > 1.594064833602e-01 ||r(i)||/||b|| 8.917259415111e-03 > > 9645 KSP Residual norm 1.080641720554e+00 % max 1.141770017757e+02 min > 2.461729098264e-01 max/min 4.638081495493e+02 > > 9646 KSP preconditioned resid norm 1.080641720550e+00 true resid norm > 1.594066163854e-01 ||r(i)||/||b|| 8.917266856592e-03 > > 9646 KSP Residual norm 1.080641720550e+00 % max 1.150251695783e+02 min > 1.817293289064e-01 max/min 6.329477485583e+02 > > 9647 KSP preconditioned resid norm 1.080641720425e+00 true resid norm > 1.594070759575e-01 ||r(i)||/||b|| 8.917292565231e-03 > > 9647 KSP Residual norm 1.080641720425e+00 % max 1.153670774825e+02 min > 1.757825842976e-01 max/min 6.563055034347e+02 > > 9648 KSP preconditioned resid norm 1.080641720405e+00 true resid norm > 1.594072309986e-01 ||r(i)||/||b|| 8.917301238287e-03 > > 9648 KSP Residual norm 1.080641720405e+00 % max 1.154419449950e+02 min > 1.682003217110e-01 max/min 6.863360534671e+02 > > 9649 KSP preconditioned resid norm 1.080641719971e+00 true resid norm > 1.594088666650e-01 ||r(i)||/||b|| 8.917392738093e-03 > > 9649 KSP Residual norm 1.080641719971e+00 % max 1.154420890958e+02 min > 1.254364806923e-01 max/min 
9.203230867027e+02 > > 9650 KSP preconditioned resid norm 1.080641719766e+00 true resid norm > 1.594089470619e-01 ||r(i)||/||b|| 8.917397235527e-03 > > 9650 KSP Residual norm 1.080641719766e+00 % max 1.155791388935e+02 min > 1.115280748954e-01 max/min 1.036323266603e+03 > > 9651 KSP preconditioned resid norm 1.080641719668e+00 true resid norm > 1.594099325489e-01 ||r(i)||/||b|| 8.917452364041e-03 > > 9651 KSP Residual norm 1.080641719668e+00 % max 1.156952656131e+02 min > 9.753165869338e-02 max/min 1.186232933624e+03 > > 9652 KSP preconditioned resid norm 1.080641719560e+00 true resid norm > 1.594104650490e-01 ||r(i)||/||b|| 8.917482152303e-03 > > 9652 KSP Residual norm 1.080641719560e+00 % max 1.157175173166e+02 min > 8.164906465197e-02 max/min 1.417254659437e+03 > > 9653 KSP preconditioned resid norm 1.080641719545e+00 true resid norm > 1.594102433389e-01 ||r(i)||/||b|| 8.917469749751e-03 > > 9653 KSP Residual norm 1.080641719545e+00 % max 1.157284977956e+02 min > 8.043379142473e-02 max/min 1.438804459490e+03 > > 9654 KSP preconditioned resid norm 1.080641719502e+00 true resid norm > 1.594103748106e-01 ||r(i)||/||b|| 8.917477104328e-03 > > 9654 KSP Residual norm 1.080641719502e+00 % max 1.158252103352e+02 min > 8.042977537341e-02 max/min 1.440078749412e+03 > > 9655 KSP preconditioned resid norm 1.080641719500e+00 true resid norm > 1.594103839160e-01 ||r(i)||/||b|| 8.917477613692e-03 > > 9655 KSP Residual norm 1.080641719500e+00 % max 1.158319413225e+02 min > 7.912584859399e-02 max/min 1.463895090931e+03 > > 9656 KSP preconditioned resid norm 1.080641719298e+00 true resid norm > 1.594103559180e-01 ||r(i)||/||b|| 8.917476047469e-03 > > 9656 KSP Residual norm 1.080641719298e+00 % max 1.164752277567e+02 min > 7.459488142962e-02 max/min 1.561437266532e+03 > > 9657 KSP preconditioned resid norm 1.080641719265e+00 true resid norm > 1.594102184171e-01 ||r(i)||/||b|| 8.917468355617e-03 > > 9657 KSP Residual norm 1.080641719265e+00 % max 1.166579038733e+02 min > 7.458594814570e-02 max/min 1.564073485335e+03 > > 9658 KSP preconditioned resid norm 1.080641717458e+00 true resid norm > 1.594091333285e-01 ||r(i)||/||b|| 8.917407655346e-03 > > 9658 KSP Residual norm 1.080641717458e+00 % max 1.166903646829e+02 min > 6.207842503132e-02 max/min 1.879724954749e+03 > > 9659 KSP preconditioned resid norm 1.080641710951e+00 true resid norm > 1.594084758922e-01 ||r(i)||/||b|| 8.917370878110e-03 > > 9659 KSP Residual norm 1.080641710951e+00 % max 1.166911765130e+02 min > 4.511759655558e-02 max/min 2.586378384966e+03 > > 9660 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073892570e-01 ||r(i)||/||b|| 8.917310091324e-03 > > 9660 KSP Residual norm 1.080641710473e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 9661 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073892624e-01 ||r(i)||/||b|| 8.917310091626e-03 > > 9661 KSP Residual norm 1.080641710473e+00 % max 3.063497860835e+01 min > 3.063497860835e+01 max/min 1.000000000000e+00 > > 9662 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073892715e-01 ||r(i)||/||b|| 8.917310092135e-03 > > 9662 KSP Residual norm 1.080641710473e+00 % max 3.066567116490e+01 min > 3.845920135843e+00 max/min 7.973559013643e+00 > > 9663 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073893803e-01 ||r(i)||/||b|| 8.917310098220e-03 > > 9663 KSP Residual norm 1.080641710473e+00 % max 3.713314039929e+01 min > 1.336313376350e+00 max/min 2.778774878443e+01 > > 
9664 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073893840e-01 ||r(i)||/||b|| 8.917310098430e-03 > > 9664 KSP Residual norm 1.080641710473e+00 % max 4.496286107838e+01 min > 1.226793755688e+00 max/min 3.665070911057e+01 > > 9665 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073893901e-01 ||r(i)||/||b|| 8.917310098770e-03 > > 9665 KSP Residual norm 1.080641710473e+00 % max 8.684753794468e+01 min > 1.183106109633e+00 max/min 7.340638108242e+01 > > 9666 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073897030e-01 ||r(i)||/||b|| 8.917310116272e-03 > > 9666 KSP Residual norm 1.080641710473e+00 % max 9.802657279239e+01 min > 1.168685545918e+00 max/min 8.387762913199e+01 > > 9667 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073898787e-01 ||r(i)||/||b|| 8.917310126104e-03 > > 9667 KSP Residual norm 1.080641710473e+00 % max 1.123342619847e+02 min > 1.141262650540e+00 max/min 9.842980661068e+01 > > 9668 KSP preconditioned resid norm 1.080641710473e+00 true resid norm > 1.594073899832e-01 ||r(i)||/||b|| 8.917310131950e-03 > > 9668 KSP Residual norm 1.080641710473e+00 % max 1.134978630027e+02 min > 1.090790451862e+00 max/min 1.040510235573e+02 > > 9669 KSP preconditioned resid norm 1.080641710472e+00 true resid norm > 1.594074437274e-01 ||r(i)||/||b|| 8.917313138421e-03 > > 9669 KSP Residual norm 1.080641710472e+00 % max 1.139911467053e+02 min > 4.122424185123e-01 max/min 2.765148407498e+02 > > 9670 KSP preconditioned resid norm 1.080641710471e+00 true resid norm > 1.594075064381e-01 ||r(i)||/||b|| 8.917316646478e-03 > > 9670 KSP Residual norm 1.080641710471e+00 % max 1.140007007543e+02 min > 2.895766957825e-01 max/min 3.936805081855e+02 > > 9671 KSP preconditioned resid norm 1.080641710471e+00 true resid norm > 1.594075034738e-01 ||r(i)||/||b|| 8.917316480653e-03 > > 9671 KSP Residual norm 1.080641710471e+00 % max 1.140387089437e+02 min > 2.881955202443e-01 max/min 3.956991033277e+02 > > 9672 KSP preconditioned resid norm 1.080641710470e+00 true resid norm > 1.594074784660e-01 ||r(i)||/||b|| 8.917315081711e-03 > > 9672 KSP Residual norm 1.080641710470e+00 % max 1.140387584514e+02 min > 2.501644562253e-01 max/min 4.558551609294e+02 > > 9673 KSP preconditioned resid norm 1.080641710468e+00 true resid norm > 1.594074777329e-01 ||r(i)||/||b|| 8.917315040700e-03 > > 9673 KSP Residual norm 1.080641710468e+00 % max 1.141359894823e+02 min > 2.500652210717e-01 max/min 4.564248838488e+02 > > 9674 KSP preconditioned resid norm 1.080641710467e+00 true resid norm > 1.594074707419e-01 ||r(i)||/||b|| 8.917314649619e-03 > > 9674 KSP Residual norm 1.080641710467e+00 % max 1.141719424825e+02 min > 2.470352404045e-01 max/min 4.621686456374e+02 > > 9675 KSP preconditioned resid norm 1.080641710439e+00 true resid norm > 1.594074050132e-01 ||r(i)||/||b|| 8.917310972734e-03 > > 9675 KSP Residual norm 1.080641710439e+00 % max 1.141769957478e+02 min > 2.461583334135e-01 max/min 4.638355897383e+02 > > 9676 KSP preconditioned resid norm 1.080641710435e+00 true resid norm > 1.594072732317e-01 ||r(i)||/||b|| 8.917303600825e-03 > > 9676 KSP Residual norm 1.080641710435e+00 % max 1.150247840524e+02 min > 1.817432478135e-01 max/min 6.328971526384e+02 > > 9677 KSP preconditioned resid norm 1.080641710313e+00 true resid norm > 1.594068192028e-01 ||r(i)||/||b|| 8.917278202275e-03 > > 9677 KSP Residual norm 1.080641710313e+00 % max 1.153658698688e+02 min > 1.758229849082e-01 max/min 6.561478291873e+02 > > 9678 KSP preconditioned 
resid norm 1.080641710294e+00 true resid norm > 1.594066656020e-01 ||r(i)||/||b|| 8.917269609788e-03 > > 9678 KSP Residual norm 1.080641710294e+00 % max 1.154409214869e+02 min > 1.682261493961e-01 max/min 6.862245964808e+02 > > 9679 KSP preconditioned resid norm 1.080641709867e+00 true resid norm > 1.594050456081e-01 ||r(i)||/||b|| 8.917178986714e-03 > > 9679 KSP Residual norm 1.080641709867e+00 % max 1.154410567101e+02 min > 1.254614849745e-01 max/min 9.201314390116e+02 > > 9680 KSP preconditioned resid norm 1.080641709664e+00 true resid norm > 1.594049650069e-01 ||r(i)||/||b|| 8.917174477851e-03 > > 9680 KSP Residual norm 1.080641709664e+00 % max 1.155780217907e+02 min > 1.115227545913e-01 max/min 1.036362688622e+03 > > 9681 KSP preconditioned resid norm 1.080641709568e+00 true resid norm > 1.594039822189e-01 ||r(i)||/||b|| 8.917119500317e-03 > > 9681 KSP Residual norm 1.080641709568e+00 % max 1.156949294099e+02 min > 9.748012982669e-02 max/min 1.186856538000e+03 > > 9682 KSP preconditioned resid norm 1.080641709462e+00 true resid norm > 1.594034585197e-01 ||r(i)||/||b|| 8.917090204385e-03 > > 9682 KSP Residual norm 1.080641709462e+00 % max 1.157178049396e+02 min > 8.161660044005e-02 max/min 1.417821917547e+03 > > 9683 KSP preconditioned resid norm 1.080641709447e+00 true resid norm > 1.594036816693e-01 ||r(i)||/||b|| 8.917102687458e-03 > > 9683 KSP Residual norm 1.080641709447e+00 % max 1.157298376396e+02 min > 8.041659530019e-02 max/min 1.439128791856e+03 > > 9684 KSP preconditioned resid norm 1.080641709407e+00 true resid norm > 1.594035571975e-01 ||r(i)||/||b|| 8.917095724454e-03 > > 9684 KSP Residual norm 1.080641709407e+00 % max 1.158244705264e+02 min > 8.041209608100e-02 max/min 1.440386162919e+03 > > 9685 KSP preconditioned resid norm 1.080641709405e+00 true resid norm > 1.594035489927e-01 ||r(i)||/||b|| 8.917095265477e-03 > > 9685 KSP Residual norm 1.080641709405e+00 % max 1.158301451854e+02 min > 7.912026880308e-02 max/min 1.463975627707e+03 > > 9686 KSP preconditioned resid norm 1.080641709207e+00 true resid norm > 1.594035774591e-01 ||r(i)||/||b|| 8.917096857899e-03 > > 9686 KSP Residual norm 1.080641709207e+00 % max 1.164678839370e+02 min > 7.460755571674e-02 max/min 1.561073577845e+03 > > 9687 KSP preconditioned resid norm 1.080641709174e+00 true resid norm > 1.594037147171e-01 ||r(i)||/||b|| 8.917104536162e-03 > > 9687 KSP Residual norm 1.080641709174e+00 % max 1.166488052404e+02 min > 7.459794478802e-02 max/min 1.563699986264e+03 > > 9688 KSP preconditioned resid norm 1.080641707398e+00 true resid norm > 1.594047860513e-01 ||r(i)||/||b|| 8.917164467006e-03 > > 9688 KSP Residual norm 1.080641707398e+00 % max 1.166807852257e+02 min > 6.210503072987e-02 max/min 1.878765437428e+03 > > 9689 KSP preconditioned resid norm 1.080641701010e+00 true resid norm > 1.594054414016e-01 ||r(i)||/||b|| 8.917201127549e-03 > > 9689 KSP Residual norm 1.080641701010e+00 % max 1.166815140099e+02 min > 4.513527468187e-02 max/min 2.585151299782e+03 > > 9690 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065078534e-01 ||r(i)||/||b|| 8.917260785273e-03 > > 9690 KSP Residual norm 1.080641700548e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 9691 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065078481e-01 ||r(i)||/||b|| 8.917260784977e-03 > > 9691 KSP Residual norm 1.080641700548e+00 % max 3.063462923862e+01 min > 3.063462923862e+01 max/min 1.000000000000e+00 > > 9692 KSP preconditioned resid norm 
1.080641700548e+00 true resid norm > 1.594065078391e-01 ||r(i)||/||b|| 8.917260784472e-03 > > 9692 KSP Residual norm 1.080641700548e+00 % max 3.066527639129e+01 min > 3.844401168431e+00 max/min 7.976606771193e+00 > > 9693 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065077314e-01 ||r(i)||/||b|| 8.917260778448e-03 > > 9693 KSP Residual norm 1.080641700548e+00 % max 3.714560719346e+01 min > 1.336559823443e+00 max/min 2.779195255007e+01 > > 9694 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065077277e-01 ||r(i)||/||b|| 8.917260778237e-03 > > 9694 KSP Residual norm 1.080641700548e+00 % max 4.500980776068e+01 min > 1.226798344943e+00 max/min 3.668883965014e+01 > > 9695 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065077215e-01 ||r(i)||/||b|| 8.917260777894e-03 > > 9695 KSP Residual norm 1.080641700548e+00 % max 8.688252795875e+01 min > 1.183121330492e+00 max/min 7.343501103356e+01 > > 9696 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065074170e-01 ||r(i)||/||b|| 8.917260760857e-03 > > 9696 KSP Residual norm 1.080641700548e+00 % max 9.802496799855e+01 min > 1.168942571066e+00 max/min 8.385781339895e+01 > > 9697 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065072444e-01 ||r(i)||/||b|| 8.917260751202e-03 > > 9697 KSP Residual norm 1.080641700548e+00 % max 1.122410641497e+02 min > 1.141677885656e+00 max/min 9.831237475989e+01 > > 9698 KSP preconditioned resid norm 1.080641700548e+00 true resid norm > 1.594065071406e-01 ||r(i)||/||b|| 8.917260745398e-03 > > 9698 KSP Residual norm 1.080641700548e+00 % max 1.134941426413e+02 min > 1.090790153651e+00 max/min 1.040476413006e+02 > > 9699 KSP preconditioned resid norm 1.080641700547e+00 true resid norm > 1.594064538413e-01 ||r(i)||/||b|| 8.917257763815e-03 > > 9699 KSP Residual norm 1.080641700547e+00 % max 1.139914632588e+02 min > 4.119681054781e-01 max/min 2.766997292824e+02 > > 9700 KSP preconditioned resid norm 1.080641700546e+00 true resid norm > 1.594063917269e-01 ||r(i)||/||b|| 8.917254289111e-03 > > 9700 KSP Residual norm 1.080641700546e+00 % max 1.140011377578e+02 min > 2.894500209686e-01 max/min 3.938543081679e+02 > > 9701 KSP preconditioned resid norm 1.080641700546e+00 true resid norm > 1.594063946692e-01 ||r(i)||/||b|| 8.917254453705e-03 > > 9701 KSP Residual norm 1.080641700546e+00 % max 1.140392249545e+02 min > 2.880548133071e-01 max/min 3.958941829341e+02 > > 9702 KSP preconditioned resid norm 1.080641700546e+00 true resid norm > 1.594064194159e-01 ||r(i)||/||b|| 8.917255838042e-03 > > 9702 KSP Residual norm 1.080641700546e+00 % max 1.140392678944e+02 min > 2.501716607446e-01 max/min 4.558440694479e+02 > > 9703 KSP preconditioned resid norm 1.080641700543e+00 true resid norm > 1.594064201661e-01 ||r(i)||/||b|| 8.917255880013e-03 > > 9703 KSP Residual norm 1.080641700543e+00 % max 1.141360426034e+02 min > 2.500714053220e-01 max/min 4.564138089137e+02 > > 9704 KSP preconditioned resid norm 1.080641700542e+00 true resid norm > 1.594064270279e-01 ||r(i)||/||b|| 8.917256263859e-03 > > 9704 KSP Residual norm 1.080641700542e+00 % max 1.141719170495e+02 min > 2.470524869868e-01 max/min 4.621362789824e+02 > > 9705 KSP preconditioned resid norm 1.080641700515e+00 true resid norm > 1.594064921269e-01 ||r(i)||/||b|| 8.917259905523e-03 > > 9705 KSP Residual norm 1.080641700515e+00 % max 1.141770017414e+02 min > 2.461727640845e-01 max/min 4.638084239987e+02 > > 9706 KSP preconditioned resid norm 1.080641700511e+00 true 
> > [KSP monitor output, iterations 9706-9987, with the true-residual and
> > singular-value monitors interleaved: the preconditioned residual norm
> > stagnates at ~1.080641e+00 and the true residual norm at ~1.5940e-01
> > (||r(i)||/||b|| ~ 8.917e-03) throughout; the extreme-singular-value
> > estimates reset to max/min = 1.0 at each restart (every 30 iterations)
> > and grow back to max/min ~ 2.59e+03 just before the next restart, e.g.
> > "9749 KSP Residual norm 1.080641681745e+00 % max 1.166816242497e+02 min
> > 4.513512286802e-02 max/min 2.585162437485e+03" followed by "9750 KSP
> > Residual norm 1.080641681302e+00 % max 1.000000000000e+00 min
> > 1.000000000000e+00 max/min 1.000000000000e+00".]
1.563720720069e+03 > > 9988 KSP preconditioned resid norm 1.080641617240e+00 true resid norm > 1.594049907736e-01 ||r(i)||/||b|| 8.917175919248e-03 > > 9988 KSP Residual norm 1.080641617240e+00 % max 1.166813081045e+02 min > 6.210350927266e-02 max/min 1.878819884271e+03 > > 9989 KSP preconditioned resid norm 1.080641612022e+00 true resid norm > 1.594055829488e-01 ||r(i)||/||b|| 8.917209045757e-03 > > 9989 KSP Residual norm 1.080641612022e+00 % max 1.166820415947e+02 min > 4.513453999933e-02 max/min 2.585205069033e+03 > > 9990 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065477745e-01 ||r(i)||/||b|| 8.917263018470e-03 > > 9990 KSP Residual norm 1.080641611645e+00 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > > 9991 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065477696e-01 ||r(i)||/||b|| 8.917263018201e-03 > > 9991 KSP Residual norm 1.080641611645e+00 % max 3.063465549435e+01 min > 3.063465549435e+01 max/min 1.000000000000e+00 > > 9992 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065477615e-01 ||r(i)||/||b|| 8.917263017744e-03 > > 9992 KSP Residual norm 1.080641611645e+00 % max 3.066530412302e+01 min > 3.844470687795e+00 max/min 7.976469744031e+00 > > 9993 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065476642e-01 ||r(i)||/||b|| 8.917263012302e-03 > > 9993 KSP Residual norm 1.080641611645e+00 % max 3.714516899070e+01 min > 1.336552632161e+00 max/min 2.779177422339e+01 > > 9994 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065476608e-01 ||r(i)||/||b|| 8.917263012110e-03 > > 9994 KSP Residual norm 1.080641611645e+00 % max 4.500812884571e+01 min > 1.226798972667e+00 max/min 3.668745234425e+01 > > 9995 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065476552e-01 ||r(i)||/||b|| 8.917263011801e-03 > > 9995 KSP Residual norm 1.080641611645e+00 % max 8.688131738345e+01 min > 1.183120593233e+00 max/min 7.343403358914e+01 > > 9996 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065473796e-01 ||r(i)||/||b|| 8.917262996379e-03 > > 9996 KSP Residual norm 1.080641611645e+00 % max 9.802499878957e+01 min > 1.168930461656e+00 max/min 8.385870845619e+01 > > 9997 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065472234e-01 ||r(i)||/||b|| 8.917262987642e-03 > > 9997 KSP Residual norm 1.080641611645e+00 % max 1.122454823211e+02 min > 1.141659208813e+00 max/min 9.831785304632e+01 > > 9998 KSP preconditioned resid norm 1.080641611645e+00 true resid norm > 1.594065471295e-01 ||r(i)||/||b|| 8.917262982389e-03 > > 9998 KSP Residual norm 1.080641611645e+00 % max 1.134943126018e+02 min > 1.090790203681e+00 max/min 1.040477923425e+02 > > 9999 KSP preconditioned resid norm 1.080641611644e+00 true resid norm > 1.594064989598e-01 ||r(i)||/||b|| 8.917260287762e-03 > > 9999 KSP Residual norm 1.080641611644e+00 % max 1.139914489020e+02 min > 4.119830336343e-01 max/min 2.766896682528e+02 > > 10000 KSP preconditioned resid norm 1.080641611643e+00 true resid norm > 1.594064428168e-01 ||r(i)||/||b|| 8.917257147097e-03 > > 10000 KSP Residual norm 1.080641611643e+00 % max 1.140011170215e+02 min > 2.894564222709e-01 max/min 3.938455264772e+02 > > > > > > Then the number of iteration >10000 > > The Programme stop with convergenceReason=-3 > > > > Best, > > > > Meng > > ------------------ Original ------------------ > > From: "Barry Smith";; > > Send time: Friday, Apr 25, 2014 6:27 AM > > To: "Oo 
"; > > Cc: "Dave May"; "petsc-users"< > petsc-users at mcs.anl.gov>; > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > On Apr 24, 2014, at 5:24 PM, Oo wrote: > > > > > > > > Configure PETSC again? > > > > No, the command line when you run the program. > > > > Barry > > > > > > > > ------------------ Original ------------------ > > > From: "Dave May";; > > > Send time: Friday, Apr 25, 2014 6:20 AM > > > To: "Oo "; > > > Cc: "Barry Smith"; "Matthew Knepley"< > knepley at gmail.com>; "petsc-users"; > > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > On the command line > > > > > > > > > On 25 April 2014 00:11, Oo wrote: > > > > > > Where should I put "-ksp_monitor_true_residual > -ksp_monitor_singular_value " ? > > > > > > Thanks, > > > > > > Meng > > > > > > > > > ------------------ Original ------------------ > > > From: "Barry Smith";; > > > Date: Apr 25, 2014 > > > To: "Matthew Knepley"; > > > Cc: "Oo "; "petsc-users"< > petsc-users at mcs.anl.gov>; > > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > > > > There are also a great deal of ?bogus? numbers that have no meaning > and many zeros. Most of these are not the eigenvalues of anything. > > > > > > Run the two cases with -ksp_monitor_true_residual > -ksp_monitor_singular_value and send the output > > > > > > Barry > > > > > > > > > On Apr 24, 2014, at 4:53 PM, Matthew Knepley > wrote: > > > > > > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > > > > > > > Hi, > > > > > > > > For analysis the convergence of linear solver, > > > > I meet a problem. > > > > > > > > One is the list of Eigenvalues whose linear system which has a > convergence solution. > > > > The other is the list of Eigenvalues whose linear system whose > solution does not converge (convergenceReason=-3). > > > > > > > > These are just lists of numbers. It does not tell us anything about > the computation. What is the problem you are having? > > > > > > > > Matt > > > > > > > > Do you know what kind of method can be used to obtain a convergence > solution for our non-convergence case? > > > > > > > > Thanks, > > > > > > > > Meng > > > > > > > > > > > > > > > > -- > > > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > > > -- Norbert Wiener > > > > > > . > > > > > > > . > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 25 07:00:03 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Apr 2014 07:00:03 -0500 Subject: [petsc-users] How to configure PETSc with gcc-4.9 In-Reply-To: References: <3EE82E45-497B-4901-9BBF-3D45BD53302F@mcs.anl.gov> Message-ID: <1FB2512A-CD8E-4323-81A6-D0B96A759CEB@mcs.anl.gov> On Apr 25, 2014, at 12:25 AM, Justin Dong wrote: > I'll leave it running for a few hours and see what happens. How?d it go? > > In the meantime, is it possible to tell PETSc to compile a specific .c file using gcc-4.9? 
Something along the lines of this:
>
> ALL:
>
> include ${PETSC_DIR}/conf/variables
> include ${PETSC_DIR}/conf/rules
> include ${PETSC_DIR}/conf/test
>
> main: main.o
> 	gcc-4.9 -${CLINKER} -fopenmp -o main main.o ${PETSC_LIB}
> 	${RM} -f main.o
>
> The main reason I want to compile with gcc-4.9 is that I need to use openmp for a simple parallel for loop. With the gcc compiler I have now (clang), there's no support for openmp. I can get gcc-4.9 to work outside of PETSc, but am not sure how to get it to work here if I want to do it for a single example, or if that's even possible. > > > On Thu, Apr 24, 2014 at 10:03 PM, Barry Smith wrote: > > Leave it to run for several hours; it could be that gcc-4.9 is just taking a very long time on mpich. > > Barry > > On Apr 24, 2014, at 8:35 PM, Matthew Knepley wrote: > > On Thu, Apr 24, 2014 at 9:27 PM, Justin Dong wrote: > > Hi all, > > > > I'm trying to configure PETSc on Mac OS X with gcc-4.9. Currently, it's configured with the gcc that comes with Xcode, but I want to use the gcc-4.9 that I installed myself. I try: > > > > > > ./configure --with-cc=gcc-4.9 --with-fc=gfortran --download-f-blas-lapack --download-mpich > > > > > > > > But while configuring MPICH it gets stuck indefinitely. I've included the output below after I terminate the process. I don't have this issue if I just configure with cc=gcc. Any ideas what the problem is here? > > > > From the stack trace, it's just taking a long time. If it overruns the timeout, it should just die. You can try > > running using -useThreads=0 if you suspect the timeout is not working. > > > > Matt > > > > ^CTraceback (most recent call last): > > > > File "./configure", line 10, in > > > > execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py')) > > > > File "./config/configure.py", line 372, in > > > > petsc_configure([]) > > > > File "./config/configure.py", line 287, in petsc_configure > > > > framework.configure(out = sys.stdout) > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/framework.py", line 933, in configure > > > > child.configure() > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 558, in configure > > > > self.executeTest(self.configureLibrary) > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/base.py", line 115, in executeTest > > > > ret = apply(test, args,kargs) > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 748, in configureLibrary > > > > config.package.Package.configureLibrary(self) > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 486, in configureLibrary > > > > for location, directory, lib, incl in self.generateGuesses(): > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 232, in generateGuesses > > > > d = self.checkDownload(1) > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 351, in checkDownload > > > > return config.package.Package.checkDownload(self, requireDownload) > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 340, in checkDownload > > > > return self.getInstallDir() > > > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/package.py", line 187, in getInstallDir > > > > return
os.path.abspath(self.Install()) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 366, in Install > > return self.InstallMPICH() > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/config/packages/MPI.py", line 544, in InstallMPICH > > output,err,ret = config.base.Configure.executeShellCommand('cd '+mpichDir+' && ./configure '+args, timeout=2000, log = self.framework.log) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", line 254, in executeShellCommand > > (output, error, status) = runInShell(command, log, cwd) > > File "/Users/justindong/Classes/CAAMResearch/petsc-3.4.4/config/BuildSystem/script.py", line 243, in runInShell > > thread.join(timeout) > > File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 958, in join > > self.__block.wait(delay) > > File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 358, in wait > > _sleep(delay) > > > > KeyboardInterrupt > > > > > > Sincerely, > > Justin > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > From norihiro.w at gmail.com Fri Apr 25 07:31:56 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Fri, 25 Apr 2014 14:31:56 +0200 Subject: [petsc-users] infinite loop with NEWTONTR? Message-ID: Hi, In my simulation, the nonlinear solve with the trust region method stagnated after the linear solve (see output below). Is it possible that the method goes into an infinite loop? Is there any parameter to avoid this situation?

0 SNES Function norm 1.828728087153e+03
  0 KSP Residual norm 91.2735
  Linear solve converged due to CONVERGED_ITS iterations 1
  Linear solve converged due to CONVERGED_RTOL iterations 3
  1 KSP Residual norm 3.42223
  Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1

Thank you in advance, Nori -------------- next part -------------- An HTML attachment was scrubbed... URL: From wumeng07maths at qq.com Fri Apr 25 07:33:10 2014 From: wumeng07maths at qq.com (=?gb18030?B?T28gICAgICA=?=) Date: Fri, 25 Apr 2014 20:33:10 +0800 Subject: [petsc-users] Convergence_Eigenvalues_k=3 In-Reply-To: References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> <4EC22F71-3552-402B-8886-9180229E1BC2@mcs.anl.gov> Message-ID: Hi, Here is the output when I used -ksp_view.
KSP Object: 1 MPI processes type: gmres GMRES: restart=200, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=200, initial guess is zero tolerances: relative=0.0001, absolute=0.0001, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift [POSITIVE_DEFINITE] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqsbaij rows=384, cols=384 package used to perform factorization: petsc total: nonzeros=2560, allocated nonzeros=2560 total number of mallocs used during MatSetValues calls =0 block size is 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=384, cols=384 total: nonzeros=4736, allocated nonzeros=147456 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 96 nodes, limit used is 5 Thanks, Meng ------------------ Original ------------------ From: "Matthew Knepley";; Send time: Friday, Apr 25, 2014 6:22 PM To: "Oo "; Cc: "Barry Smith"; "petsc-users"; Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 On Fri, Apr 25, 2014 at 4:13 AM, Oo wrote: Hi, What I can do for solving this linear system? I am guessing that you are using the default GMRES/ILU. Run with -ksp_view to confirm. We would have to know something about the linear system to suggest improvements. Matt At this moment, I use the common line "/Users/wumeng/MyWork/BuildMSplineTools/bin/msplinePDE_PFEM_2 -ksp_gmres_restart 200 -ksp_max_it 200 -ksp_monitor_true_residual -ksp_monitor_singular_value" The following are the output: ============================================ 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00 ============================================= 0 KSP preconditioned 
resid norm 1.414935390756e+03 true resid norm 1.787617427503e+01 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 1.414935390756e+03 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 1.384179962960e+03 true resid norm 1.802958083039e+01 ||r(i)||/||b|| 1.008581621157e+00 1 KSP Residual norm 1.384179962960e+03 % max 1.895999321723e+01 min 1.895999321723e+01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 1.382373771674e+03 true resid norm 1.813982830244e+01 ||r(i)||/||b|| 1.014748906749e+00 2 KSP Residual norm 1.382373771674e+03 % max 3.551645921348e+01 min 6.051184451182e+00 max/min 5.869340044086e+00 3 KSP preconditioned resid norm 1.332723893134e+03 true resid norm 1.774590608681e+01 ||r(i)||/||b|| 9.927127479173e-01 3 KSP Residual norm 1.332723893134e+03 % max 5.076435191911e+01 min 4.941060752900e+00 max/min 1.027397849527e+01 4 KSP preconditioned resid norm 1.093788576095e+03 true resid norm 1.717433802192e+01 ||r(i)||/||b|| 9.607390125925e-01 4 KSP Residual norm 1.093788576095e+03 % max 6.610818819562e+01 min 1.960683367943e+00 max/min 3.371691180560e+01 5 KSP preconditioned resid norm 1.077330470806e+03 true resid norm 1.750861653688e+01 ||r(i)||/||b|| 9.794386800837e-01 5 KSP Residual norm 1.077330470806e+03 % max 7.296006568006e+01 min 1.721275741199e+00 max/min 4.238720382432e+01 6 KSP preconditioned resid norm 1.021951470305e+03 true resid norm 1.587225289158e+01 ||r(i)||/||b|| 8.878998742895e-01 6 KSP Residual norm 1.021951470305e+03 % max 7.313469782749e+01 min 1.113708844536e+00 max/min 6.566769958438e+01 7 KSP preconditioned resid norm 8.968401331418e+02 true resid norm 1.532116106999e+01 ||r(i)||/||b|| 8.570715878170e-01 7 KSP Residual norm 8.968401331418e+02 % max 7.385099946342e+01 min 1.112940159952e+00 max/min 6.635666689088e+01 8 KSP preconditioned resid norm 8.540750681178e+02 true resid norm 1.665367409583e+01 ||r(i)||/||b|| 9.316128741873e-01 8 KSP Residual norm 8.540750681178e+02 % max 9.333712052307e+01 min 9.984044777583e-01 max/min 9.348627996205e+01 9 KSP preconditioned resid norm 8.540461721597e+02 true resid norm 1.669107313551e+01 ||r(i)||/||b|| 9.337049907166e-01 9 KSP Residual norm 8.540461721597e+02 % max 9.673114572055e+01 min 9.309725017804e-01 max/min 1.039033328434e+02 10 KSP preconditioned resid norm 8.540453927813e+02 true resid norm 1.668902252063e+01 ||r(i)||/||b|| 9.335902785387e-01 10 KSP Residual norm 8.540453927813e+02 % max 9.685490817256e+01 min 8.452348922200e-01 max/min 1.145893396783e+02 11 KSP preconditioned resid norm 7.927564518868e+02 true resid norm 1.671721115974e+01 ||r(i)||/||b|| 9.351671617505e-01 11 KSP Residual norm 7.927564518868e+02 % max 1.076430910935e+02 min 8.433341413962e-01 max/min 1.276399066629e+02 12 KSP preconditioned resid norm 4.831253651357e+02 true resid norm 2.683431434549e+01 ||r(i)||/||b|| 1.501121768709e+00 12 KSP Residual norm 4.831253651357e+02 % max 1.079017603981e+02 min 8.354112427301e-01 max/min 1.291600530124e+02 13 KSP preconditioned resid norm 4.093462051807e+02 true resid norm 1.923302214627e+01 ||r(i)||/||b|| 1.075902586894e+00 13 KSP Residual norm 4.093462051807e+02 % max 1.088474472678e+02 min 6.034301796734e-01 max/min 1.803811790233e+02 14 KSP preconditioned resid norm 3.809274390266e+02 true resid norm 2.118308925475e+01 ||r(i)||/||b|| 1.184990083943e+00 14 KSP Residual norm 3.809274390266e+02 % max 1.104675729761e+02 min 6.010582938812e-01 max/min 1.837884513045e+02 15 KSP preconditioned resid norm 2.408316705377e+02 
true resid norm 1.238065951026e+01 ||r(i)||/||b|| 6.925788101960e-01 15 KSP Residual norm 2.408316705377e+02 % max 1.215487322437e+02 min 5.131416858605e-01 max/min 2.368716781211e+02 16 KSP preconditioned resid norm 1.979802937570e+02 true resid norm 9.481184679187e+00 ||r(i)||/||b|| 5.303810834083e-01 16 KSP Residual norm 1.979802937570e+02 % max 1.246827043503e+02 min 4.780929228356e-01 max/min 2.607917799970e+02 17 KSP preconditioned resid norm 1.853329360151e+02 true resid norm 9.049630342765e+00 ||r(i)||/||b|| 5.062397694011e-01 17 KSP Residual norm 1.853329360151e+02 % max 1.252018074901e+02 min 4.612413672912e-01 max/min 2.714453133841e+02 18 KSP preconditioned resid norm 1.853299943955e+02 true resid norm 9.047741838474e+00 ||r(i)||/||b|| 5.061341257515e-01 18 KSP Residual norm 1.853299943955e+02 % max 1.256988083109e+02 min 4.556318323953e-01 max/min 2.758780211867e+02 19 KSP preconditioned resid norm 1.730151601343e+02 true resid norm 9.854026387827e+00 ||r(i)||/||b|| 5.512379906472e-01 19 KSP Residual norm 1.730151601343e+02 % max 1.279580192729e+02 min 3.287110716087e-01 max/min 3.892720091436e+02 20 KSP preconditioned resid norm 1.429145143492e+02 true resid norm 8.228490997826e+00 ||r(i)||/||b|| 4.603049215804e-01 20 KSP Residual norm 1.429145143492e+02 % max 1.321397322884e+02 min 2.823235578054e-01 max/min 4.680435926621e+02 21 KSP preconditioned resid norm 1.345382626439e+02 true resid norm 8.176256473861e+00 ||r(i)||/||b|| 4.573829024078e-01 21 KSP Residual norm 1.345382626439e+02 % max 1.332774949926e+02 min 2.425224324298e-01 max/min 5.495470817166e+02 22 KSP preconditioned resid norm 1.301499631466e+02 true resid norm 8.487706077838e+00 ||r(i)||/||b|| 4.748055119207e-01 22 KSP Residual norm 1.301499631466e+02 % max 1.334143594976e+02 min 2.077893364534e-01 max/min 6.420654773469e+02 23 KSP preconditioned resid norm 1.260084835452e+02 true resid norm 8.288260183397e+00 ||r(i)||/||b|| 4.636484325941e-01 23 KSP Residual norm 1.260084835452e+02 % max 1.342473982017e+02 min 2.010966692943e-01 max/min 6.675764381023e+02 24 KSP preconditioned resid norm 1.255711443195e+02 true resid norm 8.117619099395e+00 ||r(i)||/||b|| 4.541027053386e-01 24 KSP Residual norm 1.255711443195e+02 % max 1.342478258493e+02 min 1.586270065907e-01 max/min 8.463112854147e+02 25 KSP preconditioned resid norm 1.064125166220e+02 true resid norm 8.683750469293e+00 ||r(i)||/||b|| 4.857723098741e-01 25 KSP Residual norm 1.064125166220e+02 % max 1.343100269972e+02 min 1.586061159091e-01 max/min 8.468149303534e+02 26 KSP preconditioned resid norm 9.497012777512e+01 true resid norm 7.776308733811e+00 ||r(i)||/||b|| 4.350096734441e-01 26 KSP Residual norm 9.497012777512e+01 % max 1.346211743671e+02 min 1.408944545921e-01 max/min 9.554753219835e+02 27 KSP preconditioned resid norm 9.449347291209e+01 true resid norm 8.027397390699e+00 ||r(i)||/||b|| 4.490556685785e-01 27 KSP Residual norm 9.449347291209e+01 % max 1.353601106604e+02 min 1.302056396509e-01 max/min 1.039587156311e+03 28 KSP preconditioned resid norm 7.708808620337e+01 true resid norm 8.253756419882e+00 ||r(i)||/||b|| 4.617182789167e-01 28 KSP Residual norm 7.708808620337e+01 % max 1.354170803310e+02 min 1.300840147004e-01 max/min 1.040997086712e+03 29 KSP preconditioned resid norm 6.883976717639e+01 true resid norm 7.200274893950e+00 ||r(i)||/||b|| 4.027861209659e-01 29 KSP Residual norm 6.883976717639e+01 % max 1.359172085675e+02 min 1.060952954746e-01 max/min 1.281086102446e+03 30 KSP preconditioned resid norm 6.671786230822e+01 true resid 
norm 6.613746362850e+00 ||r(i)||/||b|| 3.699754914612e-01 30 KSP Residual norm 6.671786230822e+01 % max 1.361448012233e+02 min 8.135160167393e-02 max/min 1.673535596373e+03 31 KSP preconditioned resid norm 5.753308015718e+01 true resid norm 7.287752888130e+00 ||r(i)||/||b|| 4.076796732906e-01 31 KSP Residual norm 5.753308015718e+01 % max 1.363247397022e+02 min 6.837262871767e-02 max/min 1.993849618759e+03 32 KSP preconditioned resid norm 5.188554631220e+01 true resid norm 7.000101427370e+00 ||r(i)||/||b|| 3.915883409767e-01 32 KSP Residual norm 5.188554631220e+01 % max 1.365417051351e+02 min 6.421828602757e-02 max/min 2.126212229901e+03 33 KSP preconditioned resid norm 4.949709579590e+01 true resid norm 6.374314702607e+00 ||r(i)||/||b|| 3.565815931606e-01 33 KSP Residual norm 4.949709579590e+01 % max 1.366317667622e+02 min 6.421330753944e-02 max/min 2.127779614503e+03 34 KSP preconditioned resid norm 4.403827792460e+01 true resid norm 6.147327125810e+00 ||r(i)||/||b|| 3.438838216294e-01 34 KSP Residual norm 4.403827792460e+01 % max 1.370253126927e+02 min 6.368216512224e-02 max/min 2.151706249775e+03 35 KSP preconditioned resid norm 4.140066382940e+01 true resid norm 5.886089041852e+00 ||r(i)||/||b|| 3.292700636777e-01 35 KSP Residual norm 4.140066382940e+01 % max 1.390943209074e+02 min 6.114512372501e-02 max/min 2.274822789352e+03 36 KSP preconditioned resid norm 3.745028333544e+01 true resid norm 5.122854489971e+00 ||r(i)||/||b|| 2.865744320432e-01 36 KSP Residual norm 3.745028333544e+01 % max 1.396462040364e+02 min 5.993486381149e-02 max/min 2.329966152515e+03 37 KSP preconditioned resid norm 3.492028700266e+01 true resid norm 4.982433448736e+00 ||r(i)||/||b|| 2.787192254942e-01 37 KSP Residual norm 3.492028700266e+01 % max 1.418761690073e+02 min 5.972847536278e-02 max/min 2.375352261139e+03 38 KSP preconditioned resid norm 3.068024157121e+01 true resid norm 4.394664243655e+00 ||r(i)||/||b|| 2.458391922143e-01 38 KSP Residual norm 3.068024157121e+01 % max 1.419282962644e+02 min 5.955136512602e-02 max/min 2.383292070032e+03 39 KSP preconditioned resid norm 2.614836504484e+01 true resid norm 3.671741082517e+00 ||r(i)||/||b|| 2.053985951371e-01 39 KSP Residual norm 2.614836504484e+01 % max 1.424590886621e+02 min 5.534838135054e-02 max/min 2.573861876102e+03 40 KSP preconditioned resid norm 2.598742703782e+01 true resid norm 3.707086113835e+00 ||r(i)||/||b|| 2.073758096559e-01 40 KSP Residual norm 2.598742703782e+01 % max 1.428352406023e+02 min 5.401263258718e-02 max/min 2.644478407376e+03 41 KSP preconditioned resid norm 2.271029765350e+01 true resid norm 2.912150121100e+00 ||r(i)||/||b|| 1.629067873414e-01 41 KSP Residual norm 2.271029765350e+01 % max 1.446928584608e+02 min 5.373910075949e-02 max/min 2.692506134562e+03 42 KSP preconditioned resid norm 1.795836259709e+01 true resid norm 2.316834528159e+00 ||r(i)||/||b|| 1.296046062493e-01 42 KSP Residual norm 1.795836259709e+01 % max 1.449375360478e+02 min 5.251640030193e-02 max/min 2.759852831011e+03 43 KSP preconditioned resid norm 1.795769958691e+01 true resid norm 2.311558013062e+00 ||r(i)||/||b|| 1.293094359844e-01 43 KSP Residual norm 1.795769958691e+01 % max 1.449864885595e+02 min 4.685673226983e-02 max/min 3.094250954688e+03 44 KSP preconditioned resid norm 1.688955122733e+01 true resid norm 2.370951018341e+00 ||r(i)||/||b|| 1.326319033292e-01 44 KSP Residual norm 1.688955122733e+01 % max 1.450141764818e+02 min 4.554223489616e-02 max/min 3.184169086398e+03 45 KSP preconditioned resid norm 1.526531678793e+01 true resid norm 
1.947726448658e+00 ||r(i)||/||b|| 1.089565596470e-01 45 KSP Residual norm 1.526531678793e+01 % max 1.460128059490e+02 min 4.478596884382e-02 max/min 3.260235509433e+03 46 KSP preconditioned resid norm 1.519131456699e+01 true resid norm 1.914601584750e+00 ||r(i)||/||b|| 1.071035421390e-01 46 KSP Residual norm 1.519131456699e+01 % max 1.467440844081e+02 min 4.068627190104e-02 max/min 3.606722305867e+03 47 KSP preconditioned resid norm 1.396606833105e+01 true resid norm 1.916085501806e+00 ||r(i)||/||b|| 1.071865530245e-01 47 KSP Residual norm 1.396606833105e+01 % max 1.472676539618e+02 min 4.068014642358e-02 max/min 3.620135788804e+03 48 KSP preconditioned resid norm 1.128785630240e+01 true resid norm 1.818072605558e+00 ||r(i)||/||b|| 1.017036742642e-01 48 KSP Residual norm 1.128785630240e+01 % max 1.476343971960e+02 min 4.003858518052e-02 max/min 3.687303048557e+03 49 KSP preconditioned resid norm 9.686331225178e+00 true resid norm 1.520702063515e+00 ||r(i)||/||b|| 8.506865284031e-02 49 KSP Residual norm 9.686331225178e+00 % max 1.478478861627e+02 min 3.802148926353e-02 max/min 3.888534852961e+03 50 KSP preconditioned resid norm 9.646313260413e+00 true resid norm 1.596152870343e+00 ||r(i)||/||b|| 8.928939972200e-02 50 KSP Residual norm 9.646313260413e+00 % max 1.479986700685e+02 min 3.796210121957e-02 max/min 3.898590049388e+03 51 KSP preconditioned resid norm 9.270552344731e+00 true resid norm 1.442564661256e+00 ||r(i)||/||b|| 8.069761678656e-02 51 KSP Residual norm 9.270552344731e+00 % max 1.482179613835e+02 min 3.763123640547e-02 max/min 3.938694965706e+03 52 KSP preconditioned resid norm 8.025547426875e+00 true resid norm 1.151158202903e+00 ||r(i)||/||b|| 6.439622847661e-02 52 KSP Residual norm 8.025547426875e+00 % max 1.482612345232e+02 min 3.498901666812e-02 max/min 4.237364997409e+03 53 KSP preconditioned resid norm 7.830903379041e+00 true resid norm 1.083539895672e+00 ||r(i)||/||b|| 6.061363460667e-02 53 KSP Residual norm 7.830903379041e+00 % max 1.496916293673e+02 min 3.180425422032e-02 max/min 4.706654283743e+03 54 KSP preconditioned resid norm 7.818451528162e+00 true resid norm 1.101275786636e+00 ||r(i)||/||b|| 6.160578710481e-02 54 KSP Residual norm 7.818451528162e+00 % max 1.497547864750e+02 min 2.963830547627e-02 max/min 5.052744550289e+03 55 KSP preconditioned resid norm 6.716190950888e+00 true resid norm 1.068051260258e+00 ||r(i)||/||b|| 5.974719444025e-02 55 KSP Residual norm 6.716190950888e+00 % max 1.509985715255e+02 min 2.913583319816e-02 max/min 5.182572624524e+03 56 KSP preconditioned resid norm 6.435651273713e+00 true resid norm 9.738533937643e-01 ||r(i)||/||b|| 5.447772989798e-02 56 KSP Residual norm 6.435651273713e+00 % max 1.513751779517e+02 min 2.849133088254e-02 max/min 5.313025866562e+03 57 KSP preconditioned resid norm 6.427111085122e+00 true resid norm 9.515491605865e-01 ||r(i)||/||b|| 5.323002259581e-02 57 KSP Residual norm 6.427111085122e+00 % max 1.514217018182e+02 min 2.805206411012e-02 max/min 5.397880926830e+03 58 KSP preconditioned resid norm 6.330738649886e+00 true resid norm 9.488038114070e-01 ||r(i)||/||b|| 5.307644671670e-02 58 KSP Residual norm 6.330738649886e+00 % max 1.514492565938e+02 min 2.542367477260e-02 max/min 5.957016754989e+03 59 KSP preconditioned resid norm 5.861133560095e+00 true resid norm 9.281566562924e-01 ||r(i)||/||b|| 5.192143699275e-02 59 KSP Residual norm 5.861133560095e+00 % max 1.522146406650e+02 min 2.316380798762e-02 max/min 6.571227008373e+03 60 KSP preconditioned resid norm 5.523892064332e+00 true resid norm 
8.250464923972e-01 ||r(i)||/||b|| 4.615341513814e-02 60 KSP Residual norm 5.523892064332e+00 % max 1.522274643717e+02 min 2.316038063761e-02 max/min 6.572753131893e+03 61 KSP preconditioned resid norm 5.345610652504e+00 true resid norm 7.967455833060e-01 ||r(i)||/||b|| 4.457025150056e-02 61 KSP Residual norm 5.345610652504e+00 % max 1.527689509969e+02 min 2.278107359995e-02 max/min 6.705959239659e+03 62 KSP preconditioned resid norm 4.883527474160e+00 true resid norm 6.939057385062e-01 ||r(i)||/||b|| 3.881735139915e-02 62 KSP Residual norm 4.883527474160e+00 % max 1.527847370492e+02 min 2.215885299007e-02 max/min 6.894974984387e+03 63 KSP preconditioned resid norm 3.982941325093e+00 true resid norm 6.883730294386e-01 ||r(i)||/||b|| 3.850784954587e-02 63 KSP Residual norm 3.982941325093e+00 % max 1.528029462870e+02 min 2.036330717837e-02 max/min 7.503837414449e+03 64 KSP preconditioned resid norm 3.768777539791e+00 true resid norm 6.576940275451e-01 ||r(i)||/||b|| 3.679165449085e-02 64 KSP Residual norm 3.768777539791e+00 % max 1.534842011203e+02 min 2.015613298342e-02 max/min 7.614764262895e+03 65 KSP preconditioned resid norm 3.754783611569e+00 true resid norm 6.240730525064e-01 ||r(i)||/||b|| 3.491088433716e-02 65 KSP Residual norm 3.754783611569e+00 % max 1.536539871370e+02 min 2.002205183069e-02 max/min 7.674237807208e+03 66 KSP preconditioned resid norm 3.242712853326e+00 true resid norm 5.919043028914e-01 ||r(i)||/||b|| 3.311135222698e-02 66 KSP Residual norm 3.242712853326e+00 % max 1.536763506920e+02 min 1.853540050760e-02 max/min 8.290964666717e+03 67 KSP preconditioned resid norm 2.639460944665e+00 true resid norm 5.172526166420e-01 ||r(i)||/||b|| 2.893530845493e-02 67 KSP Residual norm 2.639460944665e+00 % max 1.536900244470e+02 min 1.819164547837e-02 max/min 8.448384981431e+03 68 KSP preconditioned resid norm 2.332848633723e+00 true resid norm 4.860086486067e-01 ||r(i)||/||b|| 2.718750897868e-02 68 KSP Residual norm 2.332848633723e+00 % max 1.537025056230e+02 min 1.794260313013e-02 max/min 8.566343718814e+03 69 KSP preconditioned resid norm 2.210456807969e+00 true resid norm 5.392952883712e-01 ||r(i)||/||b|| 3.016838391000e-02 69 KSP Residual norm 2.210456807969e+00 % max 1.537288460363e+02 min 1.753831098488e-02 max/min 8.765316464556e+03 70 KSP preconditioned resid norm 2.193036145802e+00 true resid norm 5.031209038994e-01 ||r(i)||/||b|| 2.814477505974e-02 70 KSP Residual norm 2.193036145802e+00 % max 1.541461506421e+02 min 1.700508158735e-02 max/min 9.064711030659e+03 71 KSP preconditioned resid norm 1.938788470922e+00 true resid norm 3.360317906409e-01 ||r(i)||/||b|| 1.879774640094e-02 71 KSP Residual norm 1.938788470922e+00 % max 1.545641242905e+02 min 1.655870257931e-02 max/min 9.334313696993e+03 72 KSP preconditioned resid norm 1.450214827437e+00 true resid norm 2.879267785157e-01 ||r(i)||/||b|| 1.610673369402e-02 72 KSP Residual norm 1.450214827437e+00 % max 1.549625503409e+02 min 1.648921859604e-02 max/min 9.397810420079e+03 73 KSP preconditioned resid norm 1.110389920050e+00 true resid norm 2.664130281817e-01 ||r(i)||/||b|| 1.490324630331e-02 73 KSP Residual norm 1.110389920050e+00 % max 1.553700778851e+02 min 1.637230817799e-02 max/min 9.489809023627e+03 74 KSP preconditioned resid norm 8.588797630056e-01 true resid norm 2.680334659775e-01 ||r(i)||/||b|| 1.499389421102e-02 74 KSP Residual norm 8.588797630056e-01 % max 1.560105458550e+02 min 1.627972447765e-02 max/min 9.583119546597e+03 75 KSP preconditioned resid norm 6.037657330005e-01 true resid norm 
2.938298885198e-01 ||r(i)||/||b|| 1.643695591681e-02 75 KSP Residual norm 6.037657330005e-01 % max 1.568417947105e+02 min 1.617453208191e-02 max/min 9.696836601902e+03 76 KSP preconditioned resid norm 4.778003132962e-01 true resid norm 3.104440343355e-01 ||r(i)||/||b|| 1.736635756394e-02 76 KSP Residual norm 4.778003132962e-01 % max 1.582399629678e+02 min 1.614387326366e-02 max/min 9.801858598824e+03 77 KSP preconditioned resid norm 4.523141317161e-01 true resid norm 3.278599563956e-01 ||r(i)||/||b|| 1.834061087967e-02 77 KSP Residual norm 4.523141317161e-01 % max 1.610754319747e+02 min 1.614204788287e-02 max/min 9.978624344533e+03 78 KSP preconditioned resid norm 4.522674526201e-01 true resid norm 3.287567680693e-01 ||r(i)||/||b|| 1.839077886640e-02 78 KSP Residual norm 4.522674526201e-01 % max 1.611450941232e+02 min 1.596556934579e-02 max/min 1.009328829014e+04 79 KSP preconditioned resid norm 4.452139421181e-01 true resid norm 3.198810083892e-01 ||r(i)||/||b|| 1.789426548812e-02 79 KSP Residual norm 4.452139421181e-01 % max 1.619065061486e+02 min 1.456184518464e-02 max/min 1.111854329555e+04 80 KSP preconditioned resid norm 4.217533807980e-01 true resid norm 2.940346905843e-01 ||r(i)||/||b|| 1.644841262233e-02 80 KSP Residual norm 4.217533807980e-01 % max 1.620409915505e+02 min 1.154094228288e-02 max/min 1.404053391644e+04 81 KSP preconditioned resid norm 3.815105809981e-01 true resid norm 2.630686686818e-01 ||r(i)||/||b|| 1.471616155865e-02 81 KSP Residual norm 3.815105809981e-01 % max 1.643140061303e+02 min 9.352799654900e-03 max/min 1.756843000953e+04 82 KSP preconditioned resid norm 3.257333222641e-01 true resid norm 2.350339902943e-01 ||r(i)||/||b|| 1.314789096807e-02 82 KSP Residual norm 3.257333222641e-01 % max 1.661730623934e+02 min 7.309343891643e-03 max/min 2.273433359502e+04 83 KSP preconditioned resid norm 2.653029571859e-01 true resid norm 1.993496448136e-01 ||r(i)||/||b|| 1.115169508568e-02 83 KSP Residual norm 2.653029571859e-01 % max 1.672909675648e+02 min 5.727931297058e-03 max/min 2.920617564857e+04 84 KSP preconditioned resid norm 2.125985080811e-01 true resid norm 1.651488251908e-01 ||r(i)||/||b|| 9.238488205020e-03 84 KSP Residual norm 2.125985080811e-01 % max 1.677205536916e+02 min 4.904920732181e-03 max/min 3.419434540321e+04 85 KSP preconditioned resid norm 1.823170233996e-01 true resid norm 1.538640960567e-01 ||r(i)||/||b|| 8.607216157635e-03 85 KSP Residual norm 1.823170233996e-01 % max 1.684045979154e+02 min 4.411887233900e-03 max/min 3.817064874674e+04 86 KSP preconditioned resid norm 1.568093845918e-01 true resid norm 1.468517035130e-01 ||r(i)||/||b|| 8.214940246925e-03 86 KSP Residual norm 1.568093845918e-01 % max 1.707938893954e+02 min 3.835536173319e-03 max/min 4.452933870980e+04 87 KSP preconditioned resid norm 1.416153038224e-01 true resid norm 1.326462010194e-01 ||r(i)||/||b|| 7.420279024951e-03 87 KSP Residual norm 1.416153038224e-01 % max 1.708004671154e+02 min 3.204409749322e-03 max/min 5.330169375233e+04 88 KSP preconditioned resid norm 1.298764107063e-01 true resid norm 1.273603812747e-01 ||r(i)||/||b|| 7.124588254468e-03 88 KSP Residual norm 1.298764107063e-01 % max 1.708079667291e+02 min 2.881989285724e-03 max/min 5.926738436372e+04 89 KSP preconditioned resid norm 1.226436579597e-01 true resid norm 1.269051556519e-01 ||r(i)||/||b|| 7.099122759682e-03 89 KSP Residual norm 1.226436579597e-01 % max 1.708200970069e+02 min 2.600538747361e-03 max/min 6.568642639154e+04 90 KSP preconditioned resid norm 1.131214284449e-01 true resid norm 
1.236585929815e-01 ||r(i)||/||b|| 6.917508806917e-03 90 KSP Residual norm 1.131214284449e-01 % max 1.708228433345e+02 min 2.097699611380e-03 max/min 8.143341516003e+04 91 KSP preconditioned resid norm 1.097877127886e-01 true resid norm 1.208660430826e-01 ||r(i)||/||b|| 6.761292501577e-03 91 KSP Residual norm 1.097877127886e-01 % max 1.734053778533e+02 min 1.669096618002e-03 max/min 1.038917555659e+05 92 KSP preconditioned resid norm 1.087819681706e-01 true resid norm 1.220967287438e-01 ||r(i)||/||b|| 6.830137526368e-03 92 KSP Residual norm 1.087819681706e-01 % max 1.735898650322e+02 min 1.410331102754e-03 max/min 1.230844761867e+05 93 KSP preconditioned resid norm 1.082792743495e-01 true resid norm 1.231464976286e-01 ||r(i)||/||b|| 6.888861997758e-03 93 KSP Residual norm 1.082792743495e-01 % max 1.744253420547e+02 min 1.160377474118e-03 max/min 1.503177594750e+05 94 KSP preconditioned resid norm 1.082786837568e-01 true resid norm 1.231972007582e-01 ||r(i)||/||b|| 6.891698350150e-03 94 KSP Residual norm 1.082786837568e-01 % max 1.744405973598e+02 min 9.020124135986e-04 max/min 1.933904619603e+05 95 KSP preconditioned resid norm 1.076120024876e-01 true resid norm 1.193390416348e-01 ||r(i)||/||b|| 6.675871458778e-03 95 KSP Residual norm 1.076120024876e-01 % max 1.744577996139e+02 min 6.759449987513e-04 max/min 2.580946673712e+05 96 KSP preconditioned resid norm 1.051338654734e-01 true resid norm 1.113305708771e-01 ||r(i)||/||b|| 6.227874553259e-03 96 KSP Residual norm 1.051338654734e-01 % max 1.754402425643e+02 min 5.544787352320e-04 max/min 3.164057184102e+05 97 KSP preconditioned resid norm 9.747169190114e-02 true resid norm 9.349805869502e-02 ||r(i)||/||b|| 5.230317027375e-03 97 KSP Residual norm 9.747169190114e-02 % max 1.756210909470e+02 min 4.482270392115e-04 max/min 3.918127992813e+05 98 KSP preconditioned resid norm 8.842962288098e-02 true resid norm 7.385511932515e-02 ||r(i)||/||b|| 4.131483514810e-03 98 KSP Residual norm 8.842962288098e-02 % max 1.757463096622e+02 min 3.987817611456e-04 max/min 4.407079931572e+05 99 KSP preconditioned resid norm 7.588962343124e-02 true resid norm 4.759948399141e-02 ||r(i)||/||b|| 2.662733270502e-03 99 KSP Residual norm 7.588962343124e-02 % max 1.758800802719e+02 min 3.549508940132e-04 max/min 4.955053874730e+05 100 KSP preconditioned resid norm 5.762080294705e-02 true resid norm 2.979101328333e-02 ||r(i)||/||b|| 1.666520633833e-03 100 KSP Residual norm 5.762080294705e-02 % max 1.767054165485e+02 min 3.399974141580e-04 max/min 5.197257661094e+05 101 KSP preconditioned resid norm 4.010101211334e-02 true resid norm 1.618156300742e-02 ||r(i)||/||b|| 9.052028000211e-04 101 KSP Residual norm 4.010101211334e-02 % max 1.780221758963e+02 min 3.330380731227e-04 max/min 5.345400128792e+05 102 KSP preconditioned resid norm 2.613215615582e-02 true resid norm 8.507285387866e-03 ||r(i)||/||b|| 4.759007859835e-04 102 KSP Residual norm 2.613215615582e-02 % max 1.780247903707e+02 min 3.267877108360e-04 max/min 5.447719864228e+05 103 KSP preconditioned resid norm 1.756896108812e-02 true resid norm 4.899136161981e-03 ||r(i)||/||b|| 2.740595435358e-04 103 KSP Residual norm 1.756896108812e-02 % max 1.780337300599e+02 min 3.237486252629e-04 max/min 5.499134704135e+05 104 KSP preconditioned resid norm 1.142112016905e-02 true resid norm 3.312246267926e-03 ||r(i)||/||b|| 1.852883182367e-04 104 KSP Residual norm 1.142112016905e-02 % max 1.805569604942e+02 min 3.232308817378e-04 max/min 5.586005876774e+05 105 KSP preconditioned resid norm 7.345608733466e-03 true resid norm 
1.880672268508e-03 ||r(i)||/||b|| 1.052055232609e-04 105 KSP Residual norm 7.345608733466e-03 % max 1.817568861663e+02 min 3.229958821968e-04 max/min 5.627219917794e+05 106 KSP preconditioned resid norm 4.810582452316e-03 true resid norm 1.418216565286e-03 ||r(i)||/||b|| 7.933557502107e-05 106 KSP Residual norm 4.810582452316e-03 % max 1.845797149153e+02 min 3.220343456392e-04 max/min 5.731677922395e+05 107 KSP preconditioned resid norm 3.114068270187e-03 true resid norm 1.177898700463e-03 ||r(i)||/||b|| 6.589210209864e-05 107 KSP Residual norm 3.114068270187e-03 % max 1.850561815508e+02 min 3.216059660171e-04 max/min 5.754127755856e+05 108 KSP preconditioned resid norm 2.293796939903e-03 true resid norm 1.090384984298e-03 ||r(i)||/||b|| 6.099655147246e-05 108 KSP Residual norm 2.293796939903e-03 % max 1.887275970943e+02 min 3.204019028106e-04 max/min 5.890339459248e+05 109 KSP preconditioned resid norm 1.897140553486e-03 true resid norm 1.085409353133e-03 ||r(i)||/||b|| 6.071821276936e-05 109 KSP Residual norm 1.897140553486e-03 % max 1.900700995337e+02 min 3.201092729524e-04 max/min 5.937663029274e+05 110 KSP preconditioned resid norm 1.638607873536e-03 true resid norm 1.080912735078e-03 ||r(i)||/||b|| 6.046667024205e-05 110 KSP Residual norm 1.638607873536e-03 % max 1.941847763082e+02 min 3.201003961856e-04 max/min 6.066371008040e+05 111 KSP preconditioned resid norm 1.445734513987e-03 true resid norm 1.072337220355e-03 ||r(i)||/||b|| 5.998695268109e-05 111 KSP Residual norm 1.445734513987e-03 % max 1.968706347099e+02 min 3.200841174449e-04 max/min 6.150590547303e+05 112 KSP preconditioned resid norm 1.309787940098e-03 true resid norm 1.071880522659e-03 ||r(i)||/||b|| 5.996140483796e-05 112 KSP Residual norm 1.309787940098e-03 % max 2.005995546657e+02 min 3.200727858161e-04 max/min 6.267310547950e+05 113 KSP preconditioned resid norm 1.203600185791e-03 true resid norm 1.068704252153e-03 ||r(i)||/||b|| 5.978372305568e-05 113 KSP Residual norm 1.203600185791e-03 % max 2.044005133646e+02 min 3.200669375682e-04 max/min 6.386180182109e+05 114 KSP preconditioned resid norm 1.120060134211e-03 true resid norm 1.067737533794e-03 ||r(i)||/||b|| 5.972964446230e-05 114 KSP Residual norm 1.120060134211e-03 % max 2.080860182134e+02 min 3.200663739818e-04 max/min 6.501339569812e+05 115 KSP preconditioned resid norm 1.051559355475e-03 true resid norm 1.066440021011e-03 ||r(i)||/||b|| 5.965706110284e-05 115 KSP Residual norm 1.051559355475e-03 % max 2.120208487134e+02 min 3.200660278464e-04 max/min 6.624284687130e+05 116 KSP preconditioned resid norm 9.943175666627e-04 true resid norm 1.065634036988e-03 ||r(i)||/||b|| 5.961197404955e-05 116 KSP Residual norm 9.943175666627e-04 % max 2.158491989548e+02 min 3.200660249513e-04 max/min 6.743896012946e+05 117 KSP preconditioned resid norm 9.454894909455e-04 true resid norm 1.064909725825e-03 ||r(i)||/||b|| 5.957145580713e-05 117 KSP Residual norm 9.454894909455e-04 % max 2.197604049280e+02 min 3.200659829252e-04 max/min 6.866096887885e+05 118 KSP preconditioned resid norm 9.032188050175e-04 true resid norm 1.064337854187e-03 ||r(i)||/||b|| 5.953946508976e-05 118 KSP Residual norm 9.032188050175e-04 % max 2.236450290076e+02 min 3.200659636119e-04 max/min 6.987466786029e+05 119 KSP preconditioned resid norm 8.661523005930e-04 true resid norm 1.063851383334e-03 ||r(i)||/||b|| 5.951225172494e-05 119 KSP Residual norm 8.661523005930e-04 % max 2.275386631435e+02 min 3.200659439058e-04 max/min 7.109118213789e+05 120 KSP preconditioned resid norm 8.333044554060e-04 
true resid norm 1.063440179004e-03 ||r(i)||/||b|| 5.948924879800e-05 120 KSP Residual norm 8.333044554060e-04 % max 2.314181044639e+02 min 3.200659279880e-04 max/min 7.230326136825e+05 121 KSP preconditioned resid norm 8.039309218182e-04 true resid norm 1.063085844171e-03 ||r(i)||/||b|| 5.946942717246e-05 121 KSP Residual norm 8.039309218182e-04 % max 2.352840091878e+02 min 3.200659140872e-04 max/min 7.351111094065e+05 122 KSP preconditioned resid norm 7.774595735272e-04 true resid norm 1.062777951335e-03 ||r(i)||/||b|| 5.945220352987e-05 122 KSP Residual norm 7.774595735272e-04 % max 2.391301402333e+02 min 3.200659020264e-04 max/min 7.471278218621e+05 123 KSP preconditioned resid norm 7.534419441080e-04 true resid norm 1.062507774643e-03 ||r(i)||/||b|| 5.943708974281e-05 123 KSP Residual norm 7.534419441080e-04 % max 2.429535997478e+02 min 3.200658914252e-04 max/min 7.590736978126e+05 124 KSP preconditioned resid norm 7.315209774625e-04 true resid norm 1.062268823392e-03 ||r(i)||/||b|| 5.942372271877e-05 124 KSP Residual norm 7.315209774625e-04 % max 2.467514538222e+02 min 3.200658820413e-04 max/min 7.709395710924e+05 125 KSP preconditioned resid norm 7.114083600074e-04 true resid norm 1.062055971251e-03 ||r(i)||/||b|| 5.941181568889e-05 125 KSP Residual norm 7.114083600074e-04 % max 2.505215705167e+02 min 3.200658736748e-04 max/min 7.827187811071e+05 126 KSP preconditioned resid norm 6.928683961266e-04 true resid norm 1.061865165985e-03 ||r(i)||/||b|| 5.940114196966e-05 126 KSP Residual norm 6.928683961266e-04 % max 2.542622648377e+02 min 3.200658661692e-04 max/min 7.944060636045e+05 127 KSP preconditioned resid norm 6.757062709768e-04 true resid norm 1.061693150669e-03 ||r(i)||/||b|| 5.939151936730e-05 127 KSP Residual norm 6.757062709768e-04 % max 2.579722822206e+02 min 3.200658593982e-04 max/min 8.059974990947e+05 128 KSP preconditioned resid norm 6.597593638309e-04 true resid norm 1.061537280201e-03 ||r(i)||/||b|| 5.938279991397e-05 128 KSP Residual norm 6.597593638309e-04 % max 2.616507118184e+02 min 3.200658532588e-04 max/min 8.174902419435e+05 129 KSP preconditioned resid norm 6.448907155810e-04 true resid norm 1.061395383620e-03 ||r(i)||/||b|| 5.937486216514e-05 129 KSP Residual norm 6.448907155810e-04 % max 2.652969302909e+02 min 3.200658476664e-04 max/min 8.288823447585e+05 130 KSP preconditioned resid norm 6.309840465804e-04 true resid norm 1.061265662452e-03 ||r(i)||/||b|| 5.936760551361e-05 130 KSP Residual norm 6.309840465804e-04 % max 2.689105505574e+02 min 3.200658425513e-04 max/min 8.401725982813e+05 131 KSP preconditioned resid norm 6.179399078869e-04 true resid norm 1.061146614178e-03 ||r(i)||/||b|| 5.936094590780e-05 131 KSP Residual norm 6.179399078869e-04 % max 2.724913796166e+02 min 3.200658378548e-04 max/min 8.513603996069e+05 132 KSP preconditioned resid norm 6.056726732840e-04 true resid norm 1.061036973709e-03 ||r(i)||/||b|| 5.935481257818e-05 132 KSP Residual norm 6.056726732840e-04 % max 2.760393830832e+02 min 3.200658335276e-04 max/min 8.624456413884e+05 133 KSP preconditioned resid norm 5.941081632891e-04 true resid norm 1.060935668299e-03 ||r(i)||/||b|| 5.934914551493e-05 133 KSP Residual norm 5.941081632891e-04 % max 2.795546556158e+02 min 3.200658295276e-04 max/min 8.734286194450e+05 134 KSP preconditioned resid norm 5.831817499918e-04 true resid norm 1.060841782314e-03 ||r(i)||/||b|| 5.934389349718e-05 134 KSP Residual norm 5.831817499918e-04 % max 2.830373962684e+02 min 3.200658258193e-04 max/min 8.843099557533e+05 135 KSP preconditioned resid norm 
5.728368317981e-04 true resid norm 1.060754529477e-03 ||r(i)||/||b|| 5.933901254021e-05 135 KSP Residual norm 5.728368317981e-04 % max 2.864878879810e+02 min 3.200658223718e-04 max/min 8.950905343719e+05 136 KSP preconditioned resid norm 5.630235956754e-04 true resid norm 1.060673230868e-03 ||r(i)||/||b|| 5.933446466506e-05 136 KSP Residual norm 5.630235956754e-04 % max 2.899064805338e+02 min 3.200658191584e-04 max/min 9.057714481857e+05 137 KSP preconditioned resid norm 5.536980049705e-04 true resid norm 1.060597297101e-03 ||r(i)||/||b|| 5.933021690118e-05 137 KSP Residual norm 5.536980049705e-04 % max 2.932935763968e+02 min 3.200658161562e-04 max/min 9.163539546933e+05 138 KSP preconditioned resid norm 5.448209657758e-04 true resid norm 1.060526214124e-03 ||r(i)||/||b|| 5.932624049239e-05 138 KSP Residual norm 5.448209657758e-04 % max 2.966496189942e+02 min 3.200658133448e-04 max/min 9.268394393456e+05 139 KSP preconditioned resid norm 5.363576357737e-04 true resid norm 1.060459531517e-03 ||r(i)||/||b|| 5.932251024194e-05 139 KSP Residual norm 5.363576357737e-04 % max 2.999750829834e+02 min 3.200658107070e-04 max/min 9.372293851716e+05 140 KSP preconditioned resid norm 5.282768476492e-04 true resid norm 1.060396852955e-03 ||r(i)||/||b|| 5.931900397932e-05 140 KSP Residual norm 5.282768476492e-04 % max 3.032704662136e+02 min 3.200658082268e-04 max/min 9.475253476584e+05 141 KSP preconditioned resid norm 5.205506252810e-04 true resid norm 1.060337828321e-03 ||r(i)||/||b|| 5.931570211878e-05 141 KSP Residual norm 5.205506252810e-04 % max 3.065362830844e+02 min 3.200658058907e-04 max/min 9.577289339963e+05 142 KSP preconditioned resid norm 5.131537755696e-04 true resid norm 1.060282147141e-03 ||r(i)||/||b|| 5.931258729233e-05 142 KSP Residual norm 5.131537755696e-04 % max 3.097730590695e+02 min 3.200658036863e-04 max/min 9.678417859756e+05 143 KSP preconditioned resid norm 5.060635423123e-04 true resid norm 1.060229533174e-03 ||r(i)||/||b|| 5.930964404698e-05 143 KSP Residual norm 5.060635423123e-04 % max 3.129813262144e+02 min 3.200658016030e-04 max/min 9.778655659143e+05 144 KSP preconditioned resid norm 4.992593112791e-04 true resid norm 1.060179739774e-03 ||r(i)||/||b|| 5.930685858524e-05 144 KSP Residual norm 4.992593112791e-04 % max 3.161616194433e+02 min 3.200657996311e-04 max/min 9.878019451242e+05 145 KSP preconditioned resid norm 4.927223577718e-04 true resid norm 1.060132546053e-03 ||r(i)||/||b|| 5.930421855048e-05 145 KSP Residual norm 4.927223577718e-04 % max 3.193144735429e+02 min 3.200657977617e-04 max/min 9.976525944849e+05 146 KSP preconditioned resid norm 4.864356296191e-04 true resid norm 1.060087753599e-03 ||r(i)||/||b|| 5.930171284355e-05 146 KSP Residual norm 4.864356296191e-04 % max 3.224404207086e+02 min 3.200657959872e-04 max/min 1.007419176779e+06 147 KSP preconditioned resid norm 4.803835598739e-04 true resid norm 1.060045183705e-03 ||r(i)||/||b|| 5.929933146747e-05 147 KSP Residual norm 4.803835598739e-04 % max 3.255399885620e+02 min 3.200657943002e-04 max/min 1.017103340498e+06 148 KSP preconditioned resid norm 4.745519045267e-04 true resid norm 1.060004674955e-03 ||r(i)||/||b|| 5.929706539253e-05 148 KSP Residual norm 4.745519045267e-04 % max 3.286136985587e+02 min 3.200657926948e-04 max/min 1.026706714866e+06 149 KSP preconditioned resid norm 4.689276013781e-04 true resid norm 1.059966081181e-03 ||r(i)||/||b|| 5.929490644211e-05 149 KSP Residual norm 4.689276013781e-04 % max 3.316620647237e+02 min 3.200657911651e-04 max/min 1.036230905891e+06 150 KSP 
preconditioned resid norm 4.634986468865e-04 true resid norm 1.059929269736e-03 ||r(i)||/||b|| 5.929284719586e-05 150 KSP Residual norm 4.634986468865e-04 % max 3.346855926588e+02 min 3.200657897058e-04 max/min 1.045677493263e+06 151 KSP preconditioned resid norm 4.582539883466e-04 true resid norm 1.059894119959e-03 ||r(i)||/||b|| 5.929088090393e-05 151 KSP Residual norm 4.582539883466e-04 % max 3.376847787762e+02 min 3.200657883122e-04 max/min 1.055048027960e+06 152 KSP preconditioned resid norm 4.531834291922e-04 true resid norm 1.059860521790e-03 ||r(i)||/||b|| 5.928900140955e-05 152 KSP Residual norm 4.531834291922e-04 % max 3.406601097208e+02 min 3.200657869798e-04 max/min 1.064344030443e+06 153 KSP preconditioned resid norm 4.482775455769e-04 true resid norm 1.059828374748e-03 ||r(i)||/||b|| 5.928720309181e-05 153 KSP Residual norm 4.482775455769e-04 % max 3.436120619489e+02 min 3.200657857049e-04 max/min 1.073566989337e+06 154 KSP preconditioned resid norm 4.435276126791e-04 true resid norm 1.059797586777e-03 ||r(i)||/||b|| 5.928548080094e-05 154 KSP Residual norm 4.435276126791e-04 % max 3.465411014367e+02 min 3.200657844837e-04 max/min 1.082718360526e+06 155 KSP preconditioned resid norm 4.389255394186e-04 true resid norm 1.059768073502e-03 ||r(i)||/||b|| 5.928382981713e-05 155 KSP Residual norm 4.389255394186e-04 % max 3.494476834965e+02 min 3.200657833130e-04 max/min 1.091799566575e+06 156 KSP preconditioned resid norm 4.344638104739e-04 true resid norm 1.059739757336e-03 ||r(i)||/||b|| 5.928224580001e-05 156 KSP Residual norm 4.344638104739e-04 % max 3.523322526810e+02 min 3.200657821896e-04 max/min 1.100811996430e+06 157 KSP preconditioned resid norm 4.301354346542e-04 true resid norm 1.059712566921e-03 ||r(i)||/||b|| 5.928072475785e-05 157 KSP Residual norm 4.301354346542e-04 % max 3.551952427605e+02 min 3.200657811108e-04 max/min 1.109757005350e+06 158 KSP preconditioned resid norm 4.259338988194e-04 true resid norm 1.059686436414e-03 ||r(i)||/||b|| 5.927926300733e-05 158 KSP Residual norm 4.259338988194e-04 % max 3.580370767604e+02 min 3.200657800738e-04 max/min 1.118635915023e+06 159 KSP preconditioned resid norm 4.218531266563e-04 true resid norm 1.059661305045e-03 ||r(i)||/||b|| 5.927785714894e-05 159 KSP Residual norm 4.218531266563e-04 % max 3.608581670458e+02 min 3.200657790764e-04 max/min 1.127450013829e+06 160 KSP preconditioned resid norm 4.178874417186e-04 true resid norm 1.059637116561e-03 ||r(i)||/||b|| 5.927650403596e-05 160 KSP Residual norm 4.178874417186e-04 % max 3.636589154466e+02 min 3.200657781164e-04 max/min 1.136200557232e+06 161 KSP preconditioned resid norm 4.140315342180e-04 true resid norm 1.059613818894e-03 ||r(i)||/||b|| 5.927520075559e-05 161 KSP Residual norm 4.140315342180e-04 % max 3.664397134134e+02 min 3.200657771916e-04 max/min 1.144888768267e+06 162 KSP preconditioned resid norm 4.102804311252e-04 true resid norm 1.059591363713e-03 ||r(i)||/||b|| 5.927394460419e-05 162 KSP Residual norm 4.102804311252e-04 % max 3.692009421979e+02 min 3.200657763002e-04 max/min 1.153515838106e+06 163 KSP preconditioned resid norm 4.066294691985e-04 true resid norm 1.059569706138e-03 ||r(i)||/||b|| 5.927273307117e-05 163 KSP Residual norm 4.066294691985e-04 % max 3.719429730532e+02 min 3.200657754405e-04 max/min 1.162082926678e+06 164 KSP preconditioned resid norm 4.030742706055e-04 true resid norm 1.059548804412e-03 ||r(i)||/||b|| 5.927156382068e-05 164 KSP Residual norm 4.030742706055e-04 % max 3.746661674483e+02 min 3.200657746106e-04 max/min 
1.170591163345e+06 165 KSP preconditioned resid norm 3.996107208498e-04 true resid norm 1.059528619651e-03 ||r(i)||/||b|| 5.927043467745e-05 165 KSP Residual norm 3.996107208498e-04 % max 3.773708772938e+02 min 3.200657738091e-04 max/min 1.179041647605e+06 166 KSP preconditioned resid norm 3.962349487489e-04 true resid norm 1.059509115572e-03 ||r(i)||/||b|| 5.926934361186e-05 166 KSP Residual norm 3.962349487489e-04 % max 3.800574451754e+02 min 3.200657730346e-04 max/min 1.187435449820e+06 167 KSP preconditioned resid norm 3.929433082418e-04 true resid norm 1.059490258328e-03 ||r(i)||/||b|| 5.926828873043e-05 167 KSP Residual norm 3.929433082418e-04 % max 3.827262045917e+02 min 3.200657722858e-04 max/min 1.195773611962e+06 168 KSP preconditioned resid norm 3.897323618319e-04 true resid norm 1.059472016256e-03 ||r(i)||/||b|| 5.926726826196e-05 168 KSP Residual norm 3.897323618319e-04 % max 3.853774801959e+02 min 3.200657715611e-04 max/min 1.204057148368e+06 169 KSP preconditioned resid norm 3.865988654946e-04 true resid norm 1.059454359751e-03 ||r(i)||/||b|| 5.926628055033e-05 169 KSP Residual norm 3.865988654946e-04 % max 3.880115880375e+02 min 3.200657708600e-04 max/min 1.212287046487e+06 170 KSP preconditioned resid norm 3.835397548978e-04 true resid norm 1.059437261066e-03 ||r(i)||/||b|| 5.926532404338e-05 170 KSP Residual norm 3.835397548978e-04 % max 3.906288358040e+02 min 3.200657701808e-04 max/min 1.220464267652e+06 171 KSP preconditioned resid norm 3.805521328045e-04 true resid norm 1.059420694153e-03 ||r(i)||/||b|| 5.926439728400e-05 171 KSP Residual norm 3.805521328045e-04 % max 3.932295230610e+02 min 3.200657695227e-04 max/min 1.228589747812e+06 172 KSP preconditioned resid norm 3.776332575372e-04 true resid norm 1.059404634604e-03 ||r(i)||/||b|| 5.926349890668e-05 172 KSP Residual norm 3.776332575372e-04 % max 3.958139414889e+02 min 3.200657688847e-04 max/min 1.236664398283e+06 173 KSP preconditioned resid norm 3.747805324013e-04 true resid norm 1.059389059461e-03 ||r(i)||/||b|| 5.926262762725e-05 173 KSP Residual norm 3.747805324013e-04 % max 3.983823751168e+02 min 3.200657682658e-04 max/min 1.244689106477e+06 174 KSP preconditioned resid norm 3.719914959745e-04 true resid norm 1.059373947132e-03 ||r(i)||/||b|| 5.926178223783e-05 174 KSP Residual norm 3.719914959745e-04 % max 4.009351005517e+02 min 3.200657676654e-04 max/min 1.252664736614e+06 175 KSP preconditioned resid norm 3.692638131792e-04 true resid norm 1.059359277291e-03 ||r(i)||/||b|| 5.926096160129e-05 175 KSP Residual norm 3.692638131792e-04 % max 4.034723872035e+02 min 3.200657670824e-04 max/min 1.260592130428e+06 176 KSP preconditioned resid norm 3.665952670646e-04 true resid norm 1.059345030772e-03 ||r(i)||/||b|| 5.926016464562e-05 176 KSP Residual norm 3.665952670646e-04 % max 4.059944975044e+02 min 3.200657665165e-04 max/min 1.268472107852e+06 177 KSP preconditioned resid norm 3.639837512333e-04 true resid norm 1.059331189535e-03 ||r(i)||/||b|| 5.925939036157e-05 177 KSP Residual norm 3.639837512333e-04 % max 4.085016871234e+02 min 3.200657659664e-04 max/min 1.276305467690e+06 178 KSP preconditioned resid norm 3.614272628526e-04 true resid norm 1.059317736504e-03 ||r(i)||/||b|| 5.925863779385e-05 178 KSP Residual norm 3.614272628526e-04 % max 4.109942051749e+02 min 3.200657654318e-04 max/min 1.284092988266e+06 179 KSP preconditioned resid norm 3.589238961979e-04 true resid norm 1.059304655605e-03 ||r(i)||/||b|| 5.925790604337e-05 179 KSP Residual norm 3.589238961979e-04 % max 4.134722944217e+02 min 
3.200657649118e-04 max/min 1.291835428058e+06 180 KSP preconditioned resid norm 3.564718366825e-04 true resid norm 1.059291931579e-03 ||r(i)||/||b|| 5.925719425653e-05 180 KSP Residual norm 3.564718366825e-04 % max 4.159361914728e+02 min 3.200657644063e-04 max/min 1.299533526319e+06 181 KSP preconditioned resid norm 3.540693553287e-04 true resid norm 1.059279550018e-03 ||r(i)||/||b|| 5.925650162730e-05 181 KSP Residual norm 3.540693553287e-04 % max 4.183861269743e+02 min 3.200657639142e-04 max/min 1.307188003670e+06 182 KSP preconditioned resid norm 3.517148036444e-04 true resid norm 1.059267497309e-03 ||r(i)||/||b|| 5.925582739414e-05 182 KSP Residual norm 3.517148036444e-04 % max 4.208223257952e+02 min 3.200657634351e-04 max/min 1.314799562686e+06 183 KSP preconditioned resid norm 3.494066088688e-04 true resid norm 1.059255760488e-03 ||r(i)||/||b|| 5.925517083190e-05 183 KSP Residual norm 3.494066088688e-04 % max 4.232450072075e+02 min 3.200657629684e-04 max/min 1.322368888450e+06 184 KSP preconditioned resid norm 3.471432695577e-04 true resid norm 1.059244327320e-03 ||r(i)||/||b|| 5.925453125615e-05 184 KSP Residual norm 3.471432695577e-04 % max 4.256543850606e+02 min 3.200657625139e-04 max/min 1.329896649105e+06 185 KSP preconditioned resid norm 3.449233514781e-04 true resid norm 1.059233186159e-03 ||r(i)||/||b|| 5.925390801536e-05 185 KSP Residual norm 3.449233514781e-04 % max 4.280506679492e+02 min 3.200657620711e-04 max/min 1.337383496377e+06 186 KSP preconditioned resid norm 3.427454837895e-04 true resid norm 1.059222325963e-03 ||r(i)||/||b|| 5.925330049183e-05 186 KSP Residual norm 3.427454837895e-04 % max 4.304340593770e+02 min 3.200657616393e-04 max/min 1.344830066085e+06 187 KSP preconditioned resid norm 3.406083554860e-04 true resid norm 1.059211736238e-03 ||r(i)||/||b|| 5.925270809861e-05 187 KSP Residual norm 3.406083554860e-04 % max 4.328047579144e+02 min 3.200657612184e-04 max/min 1.352236978635e+06 188 KSP preconditioned resid norm 3.385107120797e-04 true resid norm 1.059201407008e-03 ||r(i)||/||b|| 5.925213027752e-05 188 KSP Residual norm 3.385107120797e-04 % max 4.351629573503e+02 min 3.200657608077e-04 max/min 1.359604839494e+06 189 KSP preconditioned resid norm 3.364513525062e-04 true resid norm 1.059191328770e-03 ||r(i)||/||b|| 5.925156649705e-05 189 KSP Residual norm 3.364513525062e-04 % max 4.375088468402e+02 min 3.200657604069e-04 max/min 1.366934239651e+06 190 KSP preconditioned resid norm 3.344291262348e-04 true resid norm 1.059181492481e-03 ||r(i)||/||b|| 5.925101625132e-05 190 KSP Residual norm 3.344291262348e-04 % max 4.398426110478e+02 min 3.200657600159e-04 max/min 1.374225756063e+06 191 KSP preconditioned resid norm 3.324429305670e-04 true resid norm 1.059171889545e-03 ||r(i)||/||b|| 5.925047905943e-05 191 KSP Residual norm 3.324429305670e-04 % max 4.421644302826e+02 min 3.200657596338e-04 max/min 1.381479952084e+06 192 KSP preconditioned resid norm 3.304917081101e-04 true resid norm 1.059162511741e-03 ||r(i)||/||b|| 5.924995446149e-05 192 KSP Residual norm 3.304917081101e-04 % max 4.444744806328e+02 min 3.200657592613e-04 max/min 1.388697377872e+06 193 KSP preconditioned resid norm 3.285744444114e-04 true resid norm 1.059153351271e-03 ||r(i)||/||b|| 5.924944202127e-05 193 KSP Residual norm 3.285744444114e-04 % max 4.467729340931e+02 min 3.200657588967e-04 max/min 1.395878570808e+06 194 KSP preconditioned resid norm 3.266901657422e-04 true resid norm 1.059144400627e-03 ||r(i)||/||b|| 5.924894131886e-05 194 KSP Residual norm 3.266901657422e-04 % max 
4.490599586887e+02 min 3.200657585410e-04 max/min 1.403024055856e+06
195 KSP preconditioned resid norm 3.248379370193e-04 true resid norm 1.059135652724e-03 ||r(i)||/||b|| 5.924845195782e-05
195 KSP Residual norm 3.248379370193e-04 % max 4.513357185943e+02 min 3.200657581935e-04 max/min 1.410134345960e+06
196 KSP preconditioned resid norm 3.230168598555e-04 true resid norm 1.059127100716e-03 ||r(i)||/||b|| 5.924797355524e-05
196 KSP Residual norm 3.230168598555e-04 % max 4.536003742498e+02 min 3.200657578528e-04 max/min 1.417209942397e+06
197 KSP preconditioned resid norm 3.212260707284e-04 true resid norm 1.059118738110e-03 ||r(i)||/||b|| 5.924750574787e-05
197 KSP Residual norm 3.212260707284e-04 % max 4.558540824713e+02 min 3.200657575203e-04 max/min 1.424251335110e+06
198 KSP preconditioned resid norm 3.194647392600e-04 true resid norm 1.059110558683e-03 ||r(i)||/||b|| 5.924704818760e-05
198 KSP Residual norm 3.194647392600e-04 % max 4.580969965587e+02 min 3.200657571950e-04 max/min 1.431259003067e+06
199 KSP preconditioned resid norm 3.177320665985e-04 true resid norm 1.059102556481e-03 ||r(i)||/||b|| 5.924660054140e-05
199 KSP Residual norm 3.177320665985e-04 % max 4.603292663993e+02 min 3.200657568766e-04 max/min 1.438233414569e+06
200 KSP preconditioned resid norm 3.160272838961e-04 true resid norm 1.059094725800e-03 ||r(i)||/||b|| 5.924616249011e-05
200 KSP Residual norm 3.160272838961e-04 % max 4.625510385677e+02 min 3.200657565653e-04 max/min 1.445175027567e+06

Thanks,
Meng

------------------ Original ------------------
From: "Barry Smith"
Send time: Friday, Apr 25, 2014 6:54 AM
To: "Oo"
Cc: "Dave May"; "petsc-users"
Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3

   Run the bad case again with -ksp_gmres_restart 200 -ksp_max_it 200 and send the output

On Apr 24, 2014, at 5:46 PM, Oo wrote:
>
> Thanks,
> The following is the output at the beginning.
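[A minimal sketch, not from the original thread, of how the two options Barry suggests can also be applied from application code rather than the command line. It assumes a KSP object ksp that has already been created and had its operators set; the helper name ApplySuggestedOptions is purely illustrative. Oo's quoted output continues after the sketch.]

#include <petscksp.h>

/* Apply settings equivalent to -ksp_gmres_restart 200 -ksp_max_it 200.
   Assumes ksp was created with KSPCreate() and its operators are set. */
PetscErrorCode ApplySuggestedOptions(KSP ksp)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);            /* -ksp_type gmres */
  ierr = KSPGMRESSetRestart(ksp, 200);CHKERRQ(ierr);         /* -ksp_gmres_restart 200 */
  ierr = KSPSetTolerances(ksp, PETSC_DEFAULT, PETSC_DEFAULT,
                          PETSC_DEFAULT, 200);CHKERRQ(ierr); /* -ksp_max_it 200, default tolerances */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);               /* keep any -ksp_monitor_* options active */
  PetscFunctionReturn(0);
}

[On the command line the same effect is obtained by appending -ksp_gmres_restart 200 -ksp_max_it 200 to the existing options; the monitor lines quoted in this thread appear to come from -ksp_monitor_true_residual and -ksp_monitor_singular_value.]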
> > > 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 > 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 > 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00 > 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 > 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00 > 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 > 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 > 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00 > 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 > 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00 > 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 > 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00 > > > When solving the linear system: > Output: > ......... > ........ > ........ > +00 max/min 7.343521316293e+01 > 9636 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064989678e-01 ||r(i)||/||b|| 8.917260288210e-03 > 9636 KSP Residual norm 1.080641720588e+00 % max 9.802496207537e+01 min 1.168945135768e+00 max/min 8.385762434515e+01 > 9637 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064987918e-01 ||r(i)||/||b|| 8.917260278361e-03 > 9637 KSP Residual norm 1.080641720588e+00 % max 1.122401280488e+02 min 1.141681830513e+00 max/min 9.831121512938e+01 > 9638 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064986859e-01 ||r(i)||/||b|| 8.917260272440e-03 > 9638 KSP Residual norm 1.080641720588e+00 % max 1.134941067042e+02 min 1.090790142559e+00 max/min 1.040476094127e+02 > 9639 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064442988e-01 ||r(i)||/||b|| 8.917257230005e-03 > 9639 KSP Residual norm 1.080641720588e+00 % max 1.139914662925e+02 min 4.119649156568e-01 max/min 2.767018791170e+02 > 9640 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063809183e-01 ||r(i)||/||b|| 8.917253684473e-03 > 9640 KSP Residual norm 1.080641720586e+00 % max 1.140011421526e+02 min 2.894486589274e-01 max/min 3.938561766878e+02 > 9641 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063839202e-01 ||r(i)||/||b|| 8.917253852403e-03 > 9641 KSP Residual norm 1.080641720586e+00 % max 1.140392299942e+02 min 2.880532973299e-01 max/min 3.958962839563e+02 > 9642 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594064091676e-01 ||r(i)||/||b|| 8.917255264750e-03 > 9642 KSP Residual norm 1.080641720586e+00 % max 1.140392728591e+02 min 2.501717295613e-01 max/min 4.558439639005e+02 > 9643 KSP preconditioned resid norm 1.080641720583e+00 true 
resid norm 1.594064099334e-01 ||r(i)||/||b|| 8.917255307591e-03 > 9643 KSP Residual norm 1.080641720583e+00 % max 1.141360429432e+02 min 2.500714638111e-01 max/min 4.564137035220e+02 > 9644 KSP preconditioned resid norm 1.080641720582e+00 true resid norm 1.594064169337e-01 ||r(i)||/||b|| 8.917255699186e-03 > 9644 KSP Residual norm 1.080641720582e+00 % max 1.141719168213e+02 min 2.470526471293e-01 max/min 4.621359784969e+02 > 9645 KSP preconditioned resid norm 1.080641720554e+00 true resid norm 1.594064833602e-01 ||r(i)||/||b|| 8.917259415111e-03 > 9645 KSP Residual norm 1.080641720554e+00 % max 1.141770017757e+02 min 2.461729098264e-01 max/min 4.638081495493e+02 > 9646 KSP preconditioned resid norm 1.080641720550e+00 true resid norm 1.594066163854e-01 ||r(i)||/||b|| 8.917266856592e-03 > 9646 KSP Residual norm 1.080641720550e+00 % max 1.150251695783e+02 min 1.817293289064e-01 max/min 6.329477485583e+02 > 9647 KSP preconditioned resid norm 1.080641720425e+00 true resid norm 1.594070759575e-01 ||r(i)||/||b|| 8.917292565231e-03 > 9647 KSP Residual norm 1.080641720425e+00 % max 1.153670774825e+02 min 1.757825842976e-01 max/min 6.563055034347e+02 > 9648 KSP preconditioned resid norm 1.080641720405e+00 true resid norm 1.594072309986e-01 ||r(i)||/||b|| 8.917301238287e-03 > 9648 KSP Residual norm 1.080641720405e+00 % max 1.154419449950e+02 min 1.682003217110e-01 max/min 6.863360534671e+02 > 9649 KSP preconditioned resid norm 1.080641719971e+00 true resid norm 1.594088666650e-01 ||r(i)||/||b|| 8.917392738093e-03 > 9649 KSP Residual norm 1.080641719971e+00 % max 1.154420890958e+02 min 1.254364806923e-01 max/min 9.203230867027e+02 > 9650 KSP preconditioned resid norm 1.080641719766e+00 true resid norm 1.594089470619e-01 ||r(i)||/||b|| 8.917397235527e-03 > 9650 KSP Residual norm 1.080641719766e+00 % max 1.155791388935e+02 min 1.115280748954e-01 max/min 1.036323266603e+03 > 9651 KSP preconditioned resid norm 1.080641719668e+00 true resid norm 1.594099325489e-01 ||r(i)||/||b|| 8.917452364041e-03 > 9651 KSP Residual norm 1.080641719668e+00 % max 1.156952656131e+02 min 9.753165869338e-02 max/min 1.186232933624e+03 > 9652 KSP preconditioned resid norm 1.080641719560e+00 true resid norm 1.594104650490e-01 ||r(i)||/||b|| 8.917482152303e-03 > 9652 KSP Residual norm 1.080641719560e+00 % max 1.157175173166e+02 min 8.164906465197e-02 max/min 1.417254659437e+03 > 9653 KSP preconditioned resid norm 1.080641719545e+00 true resid norm 1.594102433389e-01 ||r(i)||/||b|| 8.917469749751e-03 > 9653 KSP Residual norm 1.080641719545e+00 % max 1.157284977956e+02 min 8.043379142473e-02 max/min 1.438804459490e+03 > 9654 KSP preconditioned resid norm 1.080641719502e+00 true resid norm 1.594103748106e-01 ||r(i)||/||b|| 8.917477104328e-03 > 9654 KSP Residual norm 1.080641719502e+00 % max 1.158252103352e+02 min 8.042977537341e-02 max/min 1.440078749412e+03 > 9655 KSP preconditioned resid norm 1.080641719500e+00 true resid norm 1.594103839160e-01 ||r(i)||/||b|| 8.917477613692e-03 > 9655 KSP Residual norm 1.080641719500e+00 % max 1.158319413225e+02 min 7.912584859399e-02 max/min 1.463895090931e+03 > 9656 KSP preconditioned resid norm 1.080641719298e+00 true resid norm 1.594103559180e-01 ||r(i)||/||b|| 8.917476047469e-03 > 9656 KSP Residual norm 1.080641719298e+00 % max 1.164752277567e+02 min 7.459488142962e-02 max/min 1.561437266532e+03 > 9657 KSP preconditioned resid norm 1.080641719265e+00 true resid norm 1.594102184171e-01 ||r(i)||/||b|| 8.917468355617e-03 > 9657 KSP Residual norm 1.080641719265e+00 % max 1.166579038733e+02 min 
7.458594814570e-02 max/min 1.564073485335e+03 > 9658 KSP preconditioned resid norm 1.080641717458e+00 true resid norm 1.594091333285e-01 ||r(i)||/||b|| 8.917407655346e-03 > 9658 KSP Residual norm 1.080641717458e+00 % max 1.166903646829e+02 min 6.207842503132e-02 max/min 1.879724954749e+03 > 9659 KSP preconditioned resid norm 1.080641710951e+00 true resid norm 1.594084758922e-01 ||r(i)||/||b|| 8.917370878110e-03 > 9659 KSP Residual norm 1.080641710951e+00 % max 1.166911765130e+02 min 4.511759655558e-02 max/min 2.586378384966e+03 > 9660 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892570e-01 ||r(i)||/||b|| 8.917310091324e-03 > 9660 KSP Residual norm 1.080641710473e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9661 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892624e-01 ||r(i)||/||b|| 8.917310091626e-03 > 9661 KSP Residual norm 1.080641710473e+00 % max 3.063497860835e+01 min 3.063497860835e+01 max/min 1.000000000000e+00 > 9662 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892715e-01 ||r(i)||/||b|| 8.917310092135e-03 > 9662 KSP Residual norm 1.080641710473e+00 % max 3.066567116490e+01 min 3.845920135843e+00 max/min 7.973559013643e+00 > 9663 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893803e-01 ||r(i)||/||b|| 8.917310098220e-03 > 9663 KSP Residual norm 1.080641710473e+00 % max 3.713314039929e+01 min 1.336313376350e+00 max/min 2.778774878443e+01 > 9664 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893840e-01 ||r(i)||/||b|| 8.917310098430e-03 > 9664 KSP Residual norm 1.080641710473e+00 % max 4.496286107838e+01 min 1.226793755688e+00 max/min 3.665070911057e+01 > 9665 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893901e-01 ||r(i)||/||b|| 8.917310098770e-03 > 9665 KSP Residual norm 1.080641710473e+00 % max 8.684753794468e+01 min 1.183106109633e+00 max/min 7.340638108242e+01 > 9666 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073897030e-01 ||r(i)||/||b|| 8.917310116272e-03 > 9666 KSP Residual norm 1.080641710473e+00 % max 9.802657279239e+01 min 1.168685545918e+00 max/min 8.387762913199e+01 > 9667 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073898787e-01 ||r(i)||/||b|| 8.917310126104e-03 > 9667 KSP Residual norm 1.080641710473e+00 % max 1.123342619847e+02 min 1.141262650540e+00 max/min 9.842980661068e+01 > 9668 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073899832e-01 ||r(i)||/||b|| 8.917310131950e-03 > 9668 KSP Residual norm 1.080641710473e+00 % max 1.134978630027e+02 min 1.090790451862e+00 max/min 1.040510235573e+02 > 9669 KSP preconditioned resid norm 1.080641710472e+00 true resid norm 1.594074437274e-01 ||r(i)||/||b|| 8.917313138421e-03 > 9669 KSP Residual norm 1.080641710472e+00 % max 1.139911467053e+02 min 4.122424185123e-01 max/min 2.765148407498e+02 > 9670 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075064381e-01 ||r(i)||/||b|| 8.917316646478e-03 > 9670 KSP Residual norm 1.080641710471e+00 % max 1.140007007543e+02 min 2.895766957825e-01 max/min 3.936805081855e+02 > 9671 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075034738e-01 ||r(i)||/||b|| 8.917316480653e-03 > 9671 KSP Residual norm 1.080641710471e+00 % max 1.140387089437e+02 min 2.881955202443e-01 max/min 3.956991033277e+02 > 9672 KSP preconditioned resid norm 1.080641710470e+00 true resid norm 1.594074784660e-01 
||r(i)||/||b|| 8.917315081711e-03 > 9672 KSP Residual norm 1.080641710470e+00 % max 1.140387584514e+02 min 2.501644562253e-01 max/min 4.558551609294e+02 > 9673 KSP preconditioned resid norm 1.080641710468e+00 true resid norm 1.594074777329e-01 ||r(i)||/||b|| 8.917315040700e-03 > 9673 KSP Residual norm 1.080641710468e+00 % max 1.141359894823e+02 min 2.500652210717e-01 max/min 4.564248838488e+02 > 9674 KSP preconditioned resid norm 1.080641710467e+00 true resid norm 1.594074707419e-01 ||r(i)||/||b|| 8.917314649619e-03 > 9674 KSP Residual norm 1.080641710467e+00 % max 1.141719424825e+02 min 2.470352404045e-01 max/min 4.621686456374e+02 > 9675 KSP preconditioned resid norm 1.080641710439e+00 true resid norm 1.594074050132e-01 ||r(i)||/||b|| 8.917310972734e-03 > 9675 KSP Residual norm 1.080641710439e+00 % max 1.141769957478e+02 min 2.461583334135e-01 max/min 4.638355897383e+02 > 9676 KSP preconditioned resid norm 1.080641710435e+00 true resid norm 1.594072732317e-01 ||r(i)||/||b|| 8.917303600825e-03 > 9676 KSP Residual norm 1.080641710435e+00 % max 1.150247840524e+02 min 1.817432478135e-01 max/min 6.328971526384e+02 > 9677 KSP preconditioned resid norm 1.080641710313e+00 true resid norm 1.594068192028e-01 ||r(i)||/||b|| 8.917278202275e-03 > 9677 KSP Residual norm 1.080641710313e+00 % max 1.153658698688e+02 min 1.758229849082e-01 max/min 6.561478291873e+02 > 9678 KSP preconditioned resid norm 1.080641710294e+00 true resid norm 1.594066656020e-01 ||r(i)||/||b|| 8.917269609788e-03 > 9678 KSP Residual norm 1.080641710294e+00 % max 1.154409214869e+02 min 1.682261493961e-01 max/min 6.862245964808e+02 > 9679 KSP preconditioned resid norm 1.080641709867e+00 true resid norm 1.594050456081e-01 ||r(i)||/||b|| 8.917178986714e-03 > 9679 KSP Residual norm 1.080641709867e+00 % max 1.154410567101e+02 min 1.254614849745e-01 max/min 9.201314390116e+02 > 9680 KSP preconditioned resid norm 1.080641709664e+00 true resid norm 1.594049650069e-01 ||r(i)||/||b|| 8.917174477851e-03 > 9680 KSP Residual norm 1.080641709664e+00 % max 1.155780217907e+02 min 1.115227545913e-01 max/min 1.036362688622e+03 > 9681 KSP preconditioned resid norm 1.080641709568e+00 true resid norm 1.594039822189e-01 ||r(i)||/||b|| 8.917119500317e-03 > 9681 KSP Residual norm 1.080641709568e+00 % max 1.156949294099e+02 min 9.748012982669e-02 max/min 1.186856538000e+03 > 9682 KSP preconditioned resid norm 1.080641709462e+00 true resid norm 1.594034585197e-01 ||r(i)||/||b|| 8.917090204385e-03 > 9682 KSP Residual norm 1.080641709462e+00 % max 1.157178049396e+02 min 8.161660044005e-02 max/min 1.417821917547e+03 > 9683 KSP preconditioned resid norm 1.080641709447e+00 true resid norm 1.594036816693e-01 ||r(i)||/||b|| 8.917102687458e-03 > 9683 KSP Residual norm 1.080641709447e+00 % max 1.157298376396e+02 min 8.041659530019e-02 max/min 1.439128791856e+03 > 9684 KSP preconditioned resid norm 1.080641709407e+00 true resid norm 1.594035571975e-01 ||r(i)||/||b|| 8.917095724454e-03 > 9684 KSP Residual norm 1.080641709407e+00 % max 1.158244705264e+02 min 8.041209608100e-02 max/min 1.440386162919e+03 > 9685 KSP preconditioned resid norm 1.080641709405e+00 true resid norm 1.594035489927e-01 ||r(i)||/||b|| 8.917095265477e-03 > 9685 KSP Residual norm 1.080641709405e+00 % max 1.158301451854e+02 min 7.912026880308e-02 max/min 1.463975627707e+03 > 9686 KSP preconditioned resid norm 1.080641709207e+00 true resid norm 1.594035774591e-01 ||r(i)||/||b|| 8.917096857899e-03 > 9686 KSP Residual norm 1.080641709207e+00 % max 1.164678839370e+02 min 7.460755571674e-02 max/min 
1.561073577845e+03 > 9687 KSP preconditioned resid norm 1.080641709174e+00 true resid norm 1.594037147171e-01 ||r(i)||/||b|| 8.917104536162e-03 > 9687 KSP Residual norm 1.080641709174e+00 % max 1.166488052404e+02 min 7.459794478802e-02 max/min 1.563699986264e+03 > 9688 KSP preconditioned resid norm 1.080641707398e+00 true resid norm 1.594047860513e-01 ||r(i)||/||b|| 8.917164467006e-03 > 9688 KSP Residual norm 1.080641707398e+00 % max 1.166807852257e+02 min 6.210503072987e-02 max/min 1.878765437428e+03 > 9689 KSP preconditioned resid norm 1.080641701010e+00 true resid norm 1.594054414016e-01 ||r(i)||/||b|| 8.917201127549e-03 > 9689 KSP Residual norm 1.080641701010e+00 % max 1.166815140099e+02 min 4.513527468187e-02 max/min 2.585151299782e+03 > 9690 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078534e-01 ||r(i)||/||b|| 8.917260785273e-03 > 9690 KSP Residual norm 1.080641700548e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9691 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078481e-01 ||r(i)||/||b|| 8.917260784977e-03 > 9691 KSP Residual norm 1.080641700548e+00 % max 3.063462923862e+01 min 3.063462923862e+01 max/min 1.000000000000e+00 > 9692 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065078391e-01 ||r(i)||/||b|| 8.917260784472e-03 > 9692 KSP Residual norm 1.080641700548e+00 % max 3.066527639129e+01 min 3.844401168431e+00 max/min 7.976606771193e+00 > 9693 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077314e-01 ||r(i)||/||b|| 8.917260778448e-03 > 9693 KSP Residual norm 1.080641700548e+00 % max 3.714560719346e+01 min 1.336559823443e+00 max/min 2.779195255007e+01 > 9694 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077277e-01 ||r(i)||/||b|| 8.917260778237e-03 > 9694 KSP Residual norm 1.080641700548e+00 % max 4.500980776068e+01 min 1.226798344943e+00 max/min 3.668883965014e+01 > 9695 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065077215e-01 ||r(i)||/||b|| 8.917260777894e-03 > 9695 KSP Residual norm 1.080641700548e+00 % max 8.688252795875e+01 min 1.183121330492e+00 max/min 7.343501103356e+01 > 9696 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065074170e-01 ||r(i)||/||b|| 8.917260760857e-03 > 9696 KSP Residual norm 1.080641700548e+00 % max 9.802496799855e+01 min 1.168942571066e+00 max/min 8.385781339895e+01 > 9697 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065072444e-01 ||r(i)||/||b|| 8.917260751202e-03 > 9697 KSP Residual norm 1.080641700548e+00 % max 1.122410641497e+02 min 1.141677885656e+00 max/min 9.831237475989e+01 > 9698 KSP preconditioned resid norm 1.080641700548e+00 true resid norm 1.594065071406e-01 ||r(i)||/||b|| 8.917260745398e-03 > 9698 KSP Residual norm 1.080641700548e+00 % max 1.134941426413e+02 min 1.090790153651e+00 max/min 1.040476413006e+02 > 9699 KSP preconditioned resid norm 1.080641700547e+00 true resid norm 1.594064538413e-01 ||r(i)||/||b|| 8.917257763815e-03 > 9699 KSP Residual norm 1.080641700547e+00 % max 1.139914632588e+02 min 4.119681054781e-01 max/min 2.766997292824e+02 > 9700 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063917269e-01 ||r(i)||/||b|| 8.917254289111e-03 > 9700 KSP Residual norm 1.080641700546e+00 % max 1.140011377578e+02 min 2.894500209686e-01 max/min 3.938543081679e+02 > 9701 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594063946692e-01 ||r(i)||/||b|| 
8.917254453705e-03 > 9701 KSP Residual norm 1.080641700546e+00 % max 1.140392249545e+02 min 2.880548133071e-01 max/min 3.958941829341e+02 > 9702 KSP preconditioned resid norm 1.080641700546e+00 true resid norm 1.594064194159e-01 ||r(i)||/||b|| 8.917255838042e-03 > 9702 KSP Residual norm 1.080641700546e+00 % max 1.140392678944e+02 min 2.501716607446e-01 max/min 4.558440694479e+02 > 9703 KSP preconditioned resid norm 1.080641700543e+00 true resid norm 1.594064201661e-01 ||r(i)||/||b|| 8.917255880013e-03 > 9703 KSP Residual norm 1.080641700543e+00 % max 1.141360426034e+02 min 2.500714053220e-01 max/min 4.564138089137e+02 > 9704 KSP preconditioned resid norm 1.080641700542e+00 true resid norm 1.594064270279e-01 ||r(i)||/||b|| 8.917256263859e-03 > 9704 KSP Residual norm 1.080641700542e+00 % max 1.141719170495e+02 min 2.470524869868e-01 max/min 4.621362789824e+02 > 9705 KSP preconditioned resid norm 1.080641700515e+00 true resid norm 1.594064921269e-01 ||r(i)||/||b|| 8.917259905523e-03 > 9705 KSP Residual norm 1.080641700515e+00 % max 1.141770017414e+02 min 2.461727640845e-01 max/min 4.638084239987e+02 > 9706 KSP preconditioned resid norm 1.080641700511e+00 true resid norm 1.594066225181e-01 ||r(i)||/||b|| 8.917267199657e-03 > 9706 KSP Residual norm 1.080641700511e+00 % max 1.150251667711e+02 min 1.817294170466e-01 max/min 6.329474261263e+02 > 9707 KSP preconditioned resid norm 1.080641700391e+00 true resid norm 1.594070728983e-01 ||r(i)||/||b|| 8.917292394097e-03 > 9707 KSP Residual norm 1.080641700391e+00 % max 1.153670687494e+02 min 1.757829178698e-01 max/min 6.563042083238e+02 > 9708 KSP preconditioned resid norm 1.080641700372e+00 true resid norm 1.594072248425e-01 ||r(i)||/||b|| 8.917300893916e-03 > 9708 KSP Residual norm 1.080641700372e+00 % max 1.154419380718e+02 min 1.682005223292e-01 max/min 6.863351936912e+02 > 9709 KSP preconditioned resid norm 1.080641699955e+00 true resid norm 1.594088278507e-01 ||r(i)||/||b|| 8.917390566803e-03 > 9709 KSP Residual norm 1.080641699955e+00 % max 1.154420821193e+02 min 1.254367014624e-01 max/min 9.203214113045e+02 > 9710 KSP preconditioned resid norm 1.080641699758e+00 true resid norm 1.594089066545e-01 ||r(i)||/||b|| 8.917394975116e-03 > 9710 KSP Residual norm 1.080641699758e+00 % max 1.155791306030e+02 min 1.115278858784e-01 max/min 1.036324948623e+03 > 9711 KSP preconditioned resid norm 1.080641699664e+00 true resid norm 1.594098725388e-01 ||r(i)||/||b|| 8.917449007053e-03 > 9711 KSP Residual norm 1.080641699664e+00 % max 1.156952614563e+02 min 9.753133694719e-02 max/min 1.186236804269e+03 > 9712 KSP preconditioned resid norm 1.080641699560e+00 true resid norm 1.594103943811e-01 ||r(i)||/||b|| 8.917478199111e-03 > 9712 KSP Residual norm 1.080641699560e+00 % max 1.157175157006e+02 min 8.164872572382e-02 max/min 1.417260522743e+03 > 9713 KSP preconditioned resid norm 1.080641699546e+00 true resid norm 1.594101771105e-01 ||r(i)||/||b|| 8.917466044914e-03 > 9713 KSP Residual norm 1.080641699546e+00 % max 1.157285025175e+02 min 8.043358651149e-02 max/min 1.438808183705e+03 > 9714 KSP preconditioned resid norm 1.080641699505e+00 true resid norm 1.594103059104e-01 ||r(i)||/||b|| 8.917473250025e-03 > 9714 KSP Residual norm 1.080641699505e+00 % max 1.158251990486e+02 min 8.042956633721e-02 max/min 1.440082351843e+03 > 9715 KSP preconditioned resid norm 1.080641699503e+00 true resid norm 1.594103148215e-01 ||r(i)||/||b|| 8.917473748516e-03 > 9715 KSP Residual norm 1.080641699503e+00 % max 1.158319218998e+02 min 7.912575668150e-02 max/min 1.463896545925e+03 
> 9716 KSP preconditioned resid norm 1.080641699309e+00 true resid norm 1.594102873744e-01 ||r(i)||/||b|| 8.917472213115e-03 > 9716 KSP Residual norm 1.080641699309e+00 % max 1.164751648212e+02 min 7.459498920670e-02 max/min 1.561434166824e+03 > 9717 KSP preconditioned resid norm 1.080641699277e+00 true resid norm 1.594101525703e-01 ||r(i)||/||b|| 8.917464672122e-03 > 9717 KSP Residual norm 1.080641699277e+00 % max 1.166578264103e+02 min 7.458604738274e-02 max/min 1.564070365758e+03 > 9718 KSP preconditioned resid norm 1.080641697541e+00 true resid norm 1.594090891807e-01 ||r(i)||/||b|| 8.917405185702e-03 > 9718 KSP Residual norm 1.080641697541e+00 % max 1.166902828397e+02 min 6.207863457218e-02 max/min 1.879717291527e+03 > 9719 KSP preconditioned resid norm 1.080641691292e+00 true resid norm 1.594084448306e-01 ||r(i)||/||b|| 8.917369140517e-03 > 9719 KSP Residual norm 1.080641691292e+00 % max 1.166910940162e+02 min 4.511779816158e-02 max/min 2.586364999424e+03 > 9720 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798840e-01 ||r(i)||/||b|| 8.917309566993e-03 > 9720 KSP Residual norm 1.080641690833e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9721 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798892e-01 ||r(i)||/||b|| 8.917309567288e-03 > 9721 KSP Residual norm 1.080641690833e+00 % max 3.063497720423e+01 min 3.063497720423e+01 max/min 1.000000000000e+00 > 9722 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073798982e-01 ||r(i)||/||b|| 8.917309567787e-03 > 9722 KSP Residual norm 1.080641690833e+00 % max 3.066566915002e+01 min 3.845904218039e+00 max/min 7.973591491484e+00 > 9723 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800048e-01 ||r(i)||/||b|| 8.917309573750e-03 > 9723 KSP Residual norm 1.080641690833e+00 % max 3.713330064708e+01 min 1.336316867286e+00 max/min 2.778779611043e+01 > 9724 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800084e-01 ||r(i)||/||b|| 8.917309573955e-03 > 9724 KSP Residual norm 1.080641690833e+00 % max 4.496345800250e+01 min 1.226793989785e+00 max/min 3.665118868930e+01 > 9725 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073800144e-01 ||r(i)||/||b|| 8.917309574290e-03 > 9725 KSP Residual norm 1.080641690833e+00 % max 8.684799302076e+01 min 1.183106260116e+00 max/min 7.340675639080e+01 > 9726 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073803210e-01 ||r(i)||/||b|| 8.917309591441e-03 > 9726 KSP Residual norm 1.080641690833e+00 % max 9.802654643741e+01 min 1.168688159857e+00 max/min 8.387741897664e+01 > 9727 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073804933e-01 ||r(i)||/||b|| 8.917309601077e-03 > 9727 KSP Residual norm 1.080641690833e+00 % max 1.123333202151e+02 min 1.141267080659e+00 max/min 9.842859933384e+01 > 9728 KSP preconditioned resid norm 1.080641690833e+00 true resid norm 1.594073805957e-01 ||r(i)||/||b|| 8.917309606809e-03 > 9728 KSP Residual norm 1.080641690833e+00 % max 1.134978238790e+02 min 1.090790456842e+00 max/min 1.040509872149e+02 > 9729 KSP preconditioned resid norm 1.080641690832e+00 true resid norm 1.594074332665e-01 ||r(i)||/||b|| 8.917312553232e-03 > 9729 KSP Residual norm 1.080641690832e+00 % max 1.139911500486e+02 min 4.122400558994e-01 max/min 2.765164336102e+02 > 9730 KSP preconditioned resid norm 1.080641690831e+00 true resid norm 1.594074947247e-01 ||r(i)||/||b|| 8.917315991226e-03 > 9730 KSP 
Residual norm 1.080641690831e+00 % max 1.140007051733e+02 min 2.895754967285e-01 max/min 3.936821535706e+02 > 9731 KSP preconditioned resid norm 1.080641690831e+00 true resid norm 1.594074918191e-01 ||r(i)||/||b|| 8.917315828690e-03 > 9731 KSP Residual norm 1.080641690831e+00 % max 1.140387143094e+02 min 2.881941911453e-01 max/min 3.957009468380e+02 > 9732 KSP preconditioned resid norm 1.080641690830e+00 true resid norm 1.594074673079e-01 ||r(i)||/||b|| 8.917314457521e-03 > 9732 KSP Residual norm 1.080641690830e+00 % max 1.140387637599e+02 min 2.501645326450e-01 max/min 4.558550428958e+02 > 9733 KSP preconditioned resid norm 1.080641690828e+00 true resid norm 1.594074665892e-01 ||r(i)||/||b|| 8.917314417314e-03 > 9733 KSP Residual norm 1.080641690828e+00 % max 1.141359902062e+02 min 2.500652873669e-01 max/min 4.564247657404e+02 > 9734 KSP preconditioned resid norm 1.080641690827e+00 true resid norm 1.594074597377e-01 ||r(i)||/||b|| 8.917314034043e-03 > 9734 KSP Residual norm 1.080641690827e+00 % max 1.141719421992e+02 min 2.470354276866e-01 max/min 4.621682941123e+02 > 9735 KSP preconditioned resid norm 1.080641690800e+00 true resid norm 1.594073953224e-01 ||r(i)||/||b|| 8.917310430625e-03 > 9735 KSP Residual norm 1.080641690800e+00 % max 1.141769958343e+02 min 2.461584786890e-01 max/min 4.638353163473e+02 > 9736 KSP preconditioned resid norm 1.080641690797e+00 true resid norm 1.594072661531e-01 ||r(i)||/||b|| 8.917303204847e-03 > 9736 KSP Residual norm 1.080641690797e+00 % max 1.150247889475e+02 min 1.817430598855e-01 max/min 6.328978340080e+02 > 9737 KSP preconditioned resid norm 1.080641690679e+00 true resid norm 1.594068211899e-01 ||r(i)||/||b|| 8.917278313432e-03 > 9737 KSP Residual norm 1.080641690679e+00 % max 1.153658852526e+02 min 1.758225138542e-01 max/min 6.561496745986e+02 > 9738 KSP preconditioned resid norm 1.080641690661e+00 true resid norm 1.594066706603e-01 ||r(i)||/||b|| 8.917269892752e-03 > 9738 KSP Residual norm 1.080641690661e+00 % max 1.154409350014e+02 min 1.682258367468e-01 max/min 6.862259521714e+02 > 9739 KSP preconditioned resid norm 1.080641690251e+00 true resid norm 1.594050830368e-01 ||r(i)||/||b|| 8.917181080485e-03 > 9739 KSP Residual norm 1.080641690251e+00 % max 1.154410703479e+02 min 1.254612102469e-01 max/min 9.201335625627e+02 > 9740 KSP preconditioned resid norm 1.080641690057e+00 true resid norm 1.594050040532e-01 ||r(i)||/||b|| 8.917176662114e-03 > 9740 KSP Residual norm 1.080641690057e+00 % max 1.155780358111e+02 min 1.115226752201e-01 max/min 1.036363551923e+03 > 9741 KSP preconditioned resid norm 1.080641689964e+00 true resid norm 1.594040409622e-01 ||r(i)||/||b|| 8.917122786435e-03 > 9741 KSP Residual norm 1.080641689964e+00 % max 1.156949319460e+02 min 9.748084084270e-02 max/min 1.186847907198e+03 > 9742 KSP preconditioned resid norm 1.080641689862e+00 true resid norm 1.594035276798e-01 ||r(i)||/||b|| 8.917094073223e-03 > 9742 KSP Residual norm 1.080641689862e+00 % max 1.157177974822e+02 min 8.161690928512e-02 max/min 1.417816461022e+03 > 9743 KSP preconditioned resid norm 1.080641689848e+00 true resid norm 1.594037462881e-01 ||r(i)||/||b|| 8.917106302257e-03 > 9743 KSP Residual norm 1.080641689848e+00 % max 1.157298152734e+02 min 8.041673363017e-02 max/min 1.439126038190e+03 > 9744 KSP preconditioned resid norm 1.080641689809e+00 true resid norm 1.594036242374e-01 ||r(i)||/||b|| 8.917099474695e-03 > 9744 KSP Residual norm 1.080641689809e+00 % max 1.158244737331e+02 min 8.041224000025e-02 max/min 1.440383624841e+03 > 9745 KSP preconditioned 
resid norm 1.080641689808e+00 true resid norm 1.594036161930e-01 ||r(i)||/||b|| 8.917099024685e-03 > 9745 KSP Residual norm 1.080641689808e+00 % max 1.158301611517e+02 min 7.912028815054e-02 max/min 1.463975471516e+03 > 9746 KSP preconditioned resid norm 1.080641689617e+00 true resid norm 1.594036440841e-01 ||r(i)||/||b|| 8.917100584928e-03 > 9746 KSP Residual norm 1.080641689617e+00 % max 1.164679674294e+02 min 7.460741052548e-02 max/min 1.561077734894e+03 > 9747 KSP preconditioned resid norm 1.080641689585e+00 true resid norm 1.594037786262e-01 ||r(i)||/||b|| 8.917108111263e-03 > 9747 KSP Residual norm 1.080641689585e+00 % max 1.166489093034e+02 min 7.459780450955e-02 max/min 1.563704321734e+03 > 9748 KSP preconditioned resid norm 1.080641687880e+00 true resid norm 1.594048285858e-01 ||r(i)||/||b|| 8.917166846404e-03 > 9748 KSP Residual norm 1.080641687880e+00 % max 1.166808944816e+02 min 6.210471238399e-02 max/min 1.878776827114e+03 > 9749 KSP preconditioned resid norm 1.080641681745e+00 true resid norm 1.594054707973e-01 ||r(i)||/||b|| 8.917202771956e-03 > 9749 KSP Residual norm 1.080641681745e+00 % max 1.166816242497e+02 min 4.513512286802e-02 max/min 2.585162437485e+03 > 9750 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161384e-01 ||r(i)||/||b|| 8.917261248737e-03 > 9750 KSP Residual norm 1.080641681302e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9751 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161332e-01 ||r(i)||/||b|| 8.917261248446e-03 > 9751 KSP Residual norm 1.080641681302e+00 % max 3.063463477969e+01 min 3.063463477969e+01 max/min 1.000000000000e+00 > 9752 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065161243e-01 ||r(i)||/||b|| 8.917261247951e-03 > 9752 KSP Residual norm 1.080641681302e+00 % max 3.066528223339e+01 min 3.844415595957e+00 max/min 7.976578355796e+00 > 9753 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160188e-01 ||r(i)||/||b|| 8.917261242049e-03 > 9753 KSP Residual norm 1.080641681302e+00 % max 3.714551741771e+01 min 1.336558367241e+00 max/min 2.779191566050e+01 > 9754 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160151e-01 ||r(i)||/||b|| 8.917261241840e-03 > 9754 KSP Residual norm 1.080641681302e+00 % max 4.500946346995e+01 min 1.226798482639e+00 max/min 3.668855489054e+01 > 9755 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065160091e-01 ||r(i)||/||b|| 8.917261241506e-03 > 9755 KSP Residual norm 1.080641681302e+00 % max 8.688228012809e+01 min 1.183121177118e+00 max/min 7.343481108144e+01 > 9756 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065157105e-01 ||r(i)||/||b|| 8.917261224803e-03 > 9756 KSP Residual norm 1.080641681302e+00 % max 9.802497401249e+01 min 1.168940055038e+00 max/min 8.385799903943e+01 > 9757 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065155414e-01 ||r(i)||/||b|| 8.917261215340e-03 > 9757 KSP Residual norm 1.080641681302e+00 % max 1.122419823631e+02 min 1.141674012007e+00 max/min 9.831351259875e+01 > 9758 KSP preconditioned resid norm 1.080641681302e+00 true resid norm 1.594065154397e-01 ||r(i)||/||b|| 8.917261209650e-03 > 9758 KSP Residual norm 1.080641681302e+00 % max 1.134941779165e+02 min 1.090790164362e+00 max/min 1.040476726180e+02 > 9759 KSP preconditioned resid norm 1.080641681301e+00 true resid norm 1.594064632071e-01 ||r(i)||/||b|| 8.917258287741e-03 > 9759 KSP Residual norm 
1.080641681301e+00 % max 1.139914602803e+02 min 4.119712251563e-01 max/min 2.766976267263e+02 > 9760 KSP preconditioned resid norm 1.080641681300e+00 true resid norm 1.594064023343e-01 ||r(i)||/||b|| 8.917254882493e-03 > 9760 KSP Residual norm 1.080641681300e+00 % max 1.140011334474e+02 min 2.894513550128e-01 max/min 3.938524780524e+02 > 9761 KSP preconditioned resid norm 1.080641681299e+00 true resid norm 1.594064052181e-01 ||r(i)||/||b|| 8.917255043816e-03 > 9761 KSP Residual norm 1.080641681299e+00 % max 1.140392200085e+02 min 2.880562980628e-01 max/min 3.958921251693e+02 > 9762 KSP preconditioned resid norm 1.080641681299e+00 true resid norm 1.594064294735e-01 ||r(i)||/||b|| 8.917256400669e-03 > 9762 KSP Residual norm 1.080641681299e+00 % max 1.140392630219e+02 min 2.501715931763e-01 max/min 4.558441730892e+02 > 9763 KSP preconditioned resid norm 1.080641681297e+00 true resid norm 1.594064302085e-01 ||r(i)||/||b|| 8.917256441789e-03 > 9763 KSP Residual norm 1.080641681297e+00 % max 1.141360422663e+02 min 2.500713478839e-01 max/min 4.564139123977e+02 > 9764 KSP preconditioned resid norm 1.080641681296e+00 true resid norm 1.594064369343e-01 ||r(i)||/||b|| 8.917256818032e-03 > 9764 KSP Residual norm 1.080641681296e+00 % max 1.141719172739e+02 min 2.470523296471e-01 max/min 4.621365742109e+02 > 9765 KSP preconditioned resid norm 1.080641681269e+00 true resid norm 1.594065007316e-01 ||r(i)||/||b|| 8.917260386877e-03 > 9765 KSP Residual norm 1.080641681269e+00 % max 1.141770017074e+02 min 2.461726211496e-01 max/min 4.638086931610e+02 > 9766 KSP preconditioned resid norm 1.080641681266e+00 true resid norm 1.594066285385e-01 ||r(i)||/||b|| 8.917267536440e-03 > 9766 KSP Residual norm 1.080641681266e+00 % max 1.150251639969e+02 min 1.817295045513e-01 max/min 6.329471060897e+02 > 9767 KSP preconditioned resid norm 1.080641681150e+00 true resid norm 1.594070699054e-01 ||r(i)||/||b|| 8.917292226672e-03 > 9767 KSP Residual norm 1.080641681150e+00 % max 1.153670601170e+02 min 1.757832464778e-01 max/min 6.563029323251e+02 > 9768 KSP preconditioned resid norm 1.080641681133e+00 true resid norm 1.594072188128e-01 ||r(i)||/||b|| 8.917300556609e-03 > 9768 KSP Residual norm 1.080641681133e+00 % max 1.154419312149e+02 min 1.682007202940e-01 max/min 6.863343451393e+02 > 9769 KSP preconditioned resid norm 1.080641680732e+00 true resid norm 1.594087897943e-01 ||r(i)||/||b|| 8.917388437917e-03 > 9769 KSP Residual norm 1.080641680732e+00 % max 1.154420752094e+02 min 1.254369186538e-01 max/min 9.203197627011e+02 > 9770 KSP preconditioned resid norm 1.080641680543e+00 true resid norm 1.594088670353e-01 ||r(i)||/||b|| 8.917392758804e-03 > 9770 KSP Residual norm 1.080641680543e+00 % max 1.155791224140e+02 min 1.115277033208e-01 max/min 1.036326571538e+03 > 9771 KSP preconditioned resid norm 1.080641680452e+00 true resid norm 1.594098136932e-01 ||r(i)||/||b|| 8.917445715209e-03 > 9771 KSP Residual norm 1.080641680452e+00 % max 1.156952573953e+02 min 9.753101753312e-02 max/min 1.186240647556e+03 > 9772 KSP preconditioned resid norm 1.080641680352e+00 true resid norm 1.594103250846e-01 ||r(i)||/||b|| 8.917474322641e-03 > 9772 KSP Residual norm 1.080641680352e+00 % max 1.157175142052e+02 min 8.164839358709e-02 max/min 1.417266269688e+03 > 9773 KSP preconditioned resid norm 1.080641680339e+00 true resid norm 1.594101121664e-01 ||r(i)||/||b|| 8.917462411912e-03 > 9773 KSP Residual norm 1.080641680339e+00 % max 1.157285073186e+02 min 8.043338619163e-02 max/min 1.438811826757e+03 > 9774 KSP preconditioned resid norm 
1.080641680299e+00 true resid norm 1.594102383476e-01 ||r(i)||/||b|| 8.917469470538e-03 > 9774 KSP Residual norm 1.080641680299e+00 % max 1.158251880541e+02 min 8.042936196082e-02 max/min 1.440085874492e+03 > 9775 KSP preconditioned resid norm 1.080641680298e+00 true resid norm 1.594102470687e-01 ||r(i)||/||b|| 8.917469958400e-03 > 9775 KSP Residual norm 1.080641680298e+00 % max 1.158319028736e+02 min 7.912566724984e-02 max/min 1.463897960037e+03 > 9776 KSP preconditioned resid norm 1.080641680112e+00 true resid norm 1.594102201623e-01 ||r(i)||/||b|| 8.917468453243e-03 > 9776 KSP Residual norm 1.080641680112e+00 % max 1.164751028861e+02 min 7.459509526345e-02 max/min 1.561431116546e+03 > 9777 KSP preconditioned resid norm 1.080641680081e+00 true resid norm 1.594100880055e-01 ||r(i)||/||b|| 8.917461060344e-03 > 9777 KSP Residual norm 1.080641680081e+00 % max 1.166577501679e+02 min 7.458614509876e-02 max/min 1.564067294448e+03 > 9778 KSP preconditioned resid norm 1.080641678414e+00 true resid norm 1.594090458938e-01 ||r(i)||/||b|| 8.917402764217e-03 > 9778 KSP Residual norm 1.080641678414e+00 % max 1.166902022924e+02 min 6.207884124039e-02 max/min 1.879709736213e+03 > 9779 KSP preconditioned resid norm 1.080641672412e+00 true resid norm 1.594084143785e-01 ||r(i)||/||b|| 8.917367437011e-03 > 9779 KSP Residual norm 1.080641672412e+00 % max 1.166910128239e+02 min 4.511799534103e-02 max/min 2.586351896663e+03 > 9780 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707031e-01 ||r(i)||/||b|| 8.917309053412e-03 > 9780 KSP Residual norm 1.080641671971e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > 9781 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707083e-01 ||r(i)||/||b|| 8.917309053702e-03 > 9781 KSP Residual norm 1.080641671971e+00 % max 3.063497578379e+01 min 3.063497578379e+01 max/min 1.000000000000e+00 > 9782 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073707170e-01 ||r(i)||/||b|| 8.917309054190e-03 > 9782 KSP Residual norm 1.080641671971e+00 % max 3.066566713389e+01 min 3.845888622043e+00 max/min 7.973623302070e+00 > 9783 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708215e-01 ||r(i)||/||b|| 8.917309060034e-03 > 9783 KSP Residual norm 1.080641671971e+00 % max 3.713345706068e+01 min 1.336320269525e+00 max/min 2.778784241137e+01 > 9784 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708251e-01 ||r(i)||/||b|| 8.917309060235e-03 > 9784 KSP Residual norm 1.080641671971e+00 % max 4.496404075363e+01 min 1.226794215442e+00 max/min 3.665165696713e+01 > 9785 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073708309e-01 ||r(i)||/||b|| 8.917309060564e-03 > 9785 KSP Residual norm 1.080641671971e+00 % max 8.684843710054e+01 min 1.183106407741e+00 max/min 7.340712258198e+01 > 9786 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073711314e-01 ||r(i)||/||b|| 8.917309077372e-03 > 9786 KSP Residual norm 1.080641671971e+00 % max 9.802652080694e+01 min 1.168690722569e+00 max/min 8.387721311884e+01 > 9787 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073713003e-01 ||r(i)||/||b|| 8.917309086817e-03 > 9787 KSP Residual norm 1.080641671971e+00 % max 1.123323967807e+02 min 1.141271419633e+00 max/min 9.842741599265e+01 > 9788 KSP preconditioned resid norm 1.080641671971e+00 true resid norm 1.594073714007e-01 ||r(i)||/||b|| 8.917309092435e-03 > 9788 KSP Residual norm 1.080641671971e+00 % max 
1.134977855491e+02 min 1.090790461557e+00 max/min 1.040509516255e+02 > 9789 KSP preconditioned resid norm 1.080641671970e+00 true resid norm 1.594074230188e-01 ||r(i)||/||b|| 8.917311979972e-03 > 9789 KSP Residual norm 1.080641671970e+00 % max 1.139911533240e+02 min 4.122377311421e-01 max/min 2.765180009315e+02 > 9790 KSP preconditioned resid norm 1.080641671969e+00 true resid norm 1.594074832488e-01 ||r(i)||/||b|| 8.917315349259e-03 > 9790 KSP Residual norm 1.080641671969e+00 % max 1.140007095063e+02 min 2.895743194700e-01 max/min 3.936837690403e+02 > 9791 KSP preconditioned resid norm 1.080641671969e+00 true resid norm 1.594074804009e-01 ||r(i)||/||b|| 8.917315189946e-03 > 9791 KSP Residual norm 1.080641671969e+00 % max 1.140387195674e+02 min 2.881928861513e-01 max/min 3.957027568944e+02 > 9792 KSP preconditioned resid norm 1.080641671968e+00 true resid norm 1.594074563767e-01 ||r(i)||/||b|| 8.917313846024e-03 > 9792 KSP Residual norm 1.080641671968e+00 % max 1.140387689617e+02 min 2.501646075030e-01 max/min 4.558549272813e+02 > 9793 KSP preconditioned resid norm 1.080641671966e+00 true resid norm 1.594074556720e-01 ||r(i)||/||b|| 8.917313806607e-03 > 9793 KSP Residual norm 1.080641671966e+00 % max 1.141359909124e+02 min 2.500653522913e-01 max/min 4.564246500628e+02 > 9794 KSP preconditioned resid norm 1.080641671965e+00 true resid norm 1.594074489575e-01 ||r(i)||/||b|| 8.917313430994e-03 > 9794 KSP Residual norm 1.080641671965e+00 % max 1.141719419220e+02 min 2.470356110582e-01 max/min 4.621679499282e+02 > 9795 KSP preconditioned resid norm 1.080641671939e+00 true resid norm 1.594073858301e-01 ||r(i)||/||b|| 8.917309899620e-03 > 9795 KSP Residual norm 1.080641671939e+00 % max 1.141769959186e+02 min 2.461586211459e-01 max/min 4.638350482589e+02 > 9796 KSP preconditioned resid norm 1.080641671936e+00 true resid norm 1.594072592237e-01 ||r(i)||/||b|| 8.917302817214e-03 > 9796 KSP Residual norm 1.080641671936e+00 % max 1.150247937261e+02 min 1.817428765765e-01 max/min 6.328984986528e+02 > 9797 KSP preconditioned resid norm 1.080641671823e+00 true resid norm 1.594068231505e-01 ||r(i)||/||b|| 8.917278423111e-03 > 9797 KSP Residual norm 1.080641671823e+00 % max 1.153659002697e+02 min 1.758220533048e-01 max/min 6.561514787326e+02 > 9798 KSP preconditioned resid norm 1.080641671805e+00 true resid norm 1.594066756326e-01 ||r(i)||/||b|| 8.917270170903e-03 > 9798 KSP Residual norm 1.080641671805e+00 % max 1.154409481864e+02 min 1.682255312531e-01 max/min 6.862272767188e+02 > 9799 KSP preconditioned resid norm 1.080641671412e+00 true resid norm 1.594051197519e-01 ||r(i)||/||b|| 8.917183134347e-03 > 9799 KSP Residual norm 1.080641671412e+00 % max 1.154410836529e+02 min 1.254609413083e-01 max/min 9.201356410140e+02 > 9800 KSP preconditioned resid norm 1.080641671225e+00 true resid norm 1.594050423544e-01 ||r(i)||/||b|| 8.917178804700e-03 > 9800 KSP Residual norm 1.080641671225e+00 % max 1.155780495006e+02 min 1.115226000164e-01 max/min 1.036364373532e+03 > 9801 KSP preconditioned resid norm 1.080641671136e+00 true resid norm 1.594040985764e-01 ||r(i)||/||b|| 8.917126009400e-03 > 9801 KSP Residual norm 1.080641671136e+00 % max 1.156949344498e+02 min 9.748153395641e-02 max/min 1.186839494150e+03 > 9802 KSP preconditioned resid norm 1.080641671038e+00 true resid norm 1.594035955111e-01 ||r(i)||/||b|| 8.917097867731e-03 > 9802 KSP Residual norm 1.080641671038e+00 % max 1.157177902650e+02 min 8.161721244731e-02 max/min 1.417811106202e+03 > 9803 KSP preconditioned resid norm 1.080641671024e+00 true resid 
norm 1.594038096704e-01 ||r(i)||/||b|| 8.917109847884e-03
> 9803 KSP Residual norm 1.080641671024e+00 % max 1.157297935320e+02 min 8.041686995864e-02 max/min 1.439123328121e+03
>
> [... KSP monitor output for iterations 9804 through 9999 omitted: the preconditioned residual norm stays at about 1.08064e+00, the true residual norm stays near 1.594e-01 (||r(i)||/||b|| about 8.92e-03), and within each restart cycle the max/min singular-value estimate grows from 1 to roughly 2.6e+03 before resetting ...]
>
> 10000 KSP preconditioned resid norm 1.080641611643e+00 true resid norm 1.594064428168e-01 ||r(i)||/||b|| 8.917257147097e-03
> 10000 KSP Residual norm 1.080641611643e+00 % max 1.140011170215e+02 min 2.894564222709e-01 max/min 3.938455264772e+02
>
> Then the number of iterations exceeds 10000.
> The program stops with convergenceReason=-3.
>
> Best,
>
> Meng
>
> ------------------ Original ------------------
> From: "Barry Smith"
> Send time: Friday, Apr 25, 2014 6:27 AM
> To: "Oo"
> Cc: "Dave May"; "petsc-users"
> Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
>
> On Apr 24, 2014, at 5:24 PM, Oo wrote:
> >
> > Configure PETSC again?
>
>    No, the command line when you run the program.
>
>    Barry
>
> > ------------------ Original ------------------
> > From: "Dave May"
> > Send time: Friday, Apr 25, 2014 6:20 AM
> > To: "Oo"
> > Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"
> > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
> >
> > On the command line
> >
> > On 25 April 2014 00:11, Oo wrote:
> >
> > Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value" ?
> >
> > Thanks,
> >
> > Meng
> >
> > ------------------ Original ------------------
> > From: "Barry Smith"
> > Date: Apr 25, 2014
> > To: "Matthew Knepley"
> > Cc: "Oo"; "petsc-users"
> > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
> >
> >    There are also a great deal of "bogus" numbers that have no meaning and many zeros. Most of these are not the eigenvalues of anything.
> >
> >    Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output
> >
> >    Barry
> >
> > On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote:
> > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote:
> > >
> > > Hi,
> > >
> > > For the analysis of the convergence of the linear solver, I have run into a problem.
> > >
> > > One is the list of eigenvalues for a linear system whose solution converges.
> > > The other is the list of eigenvalues for a linear system whose solution does not converge (convergenceReason=-3).
> > >
> > > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having?
> > >
> > >    Matt
> > >
> > > Do you know what kind of method can be used to obtain a converged solution for our non-converging case?
> > >
> > > Thanks,
> > >
> > > Meng
> > >
> > > --
> > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > > -- Norbert Wiener
>
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
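A note on reading the log above: the trailing "% max ... min ... max/min ..." numbers printed by -ksp_monitor_singular_value are running estimates of the largest and smallest singular values of the preconditioned operator, so max/min is a rough condition-number estimate. A convergence reason of -3, assuming it is the KSPConvergedReason value, is KSP_DIVERGED_ITS, i.e. the iteration limit was hit before the tolerance was met. A minimal sketch of a run that reports this in words (path shortened from the command line quoted later in the thread; only standard PETSc options are used):

    ./msplinePDE_PFEM_2 -ksp_gmres_restart 200 -ksp_max_it 200 \
        -ksp_monitor_true_residual -ksp_monitor_singular_value \
        -ksp_converged_reason -ksp_view

With -ksp_converged_reason PETSc prints, for example, "Linear solve did not converge due to DIVERGED_ITS", which removes any guessing about what the numeric code means.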
From bsmith at mcs.anl.gov Fri Apr 25 07:49:53 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 25 Apr 2014 07:49:53 -0500
Subject: [petsc-users] infinite loop with NEWTONTR?
In-Reply-To: 
References: 
Message-ID: <1E380BB0-E644-4A33-97FC-7EEB69825F07@mcs.anl.gov>

On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe wrote:

> Hi,
>
> In my simulation, the nonlinear solve with the trust region method got stagnant after the linear solve (see output below).

   What do you mean, got stagnant? Does the code just hang, that is, keep running but not print anything?

> Is it possible that the method goes into an infinite loop?

   This is not supposed to be possible.

> Is there any parameter to avoid this situation?

   You need to determine what it is "hanging" on. Try running with -start_in_debugger and when it "hangs" hit Control-C and type "where" to determine where it is.

   Barry

>
> 0 SNES Function norm 1.828728087153e+03
>     0 KSP Residual norm 91.2735
>   Linear solve converged due to CONVERGED_ITS iterations 1
>   Linear solve converged due to CONVERGED_RTOL iterations 3
>     1 KSP Residual norm 3.42223
>   Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1
>
>
> Thank you in advance,
> Nori
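For concreteness, a minimal sketch of the workflow Barry describes; the executable name "./app" is a placeholder and gdb is assumed as the debugger (-start_in_debugger, -snes_type newtontr and -snes_monitor are standard PETSc options):

    # start the run under a debugger (PETSc opens one debugger per MPI process)
    mpiexec -n 1 ./app -snes_type newtontr -snes_monitor -start_in_debugger
    # when the run appears to hang, hit Control-C in the debugger window, then
    (gdb) where        # print the stack trace showing where the code is stuck
    (gdb) continue     # optionally let it keep running afterwards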
From bsmith at mcs.anl.gov Fri Apr 25 07:57:37 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 25 Apr 2014 07:57:37 -0500
Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
In-Reply-To: 
References: <65E3810E-04F5-4B83-B3AF-EDE2D3AEE791@mcs.anl.gov> <4EC22F71-3552-402B-8886-9180229E1BC2@mcs.anl.gov>
Message-ID: 

   Tiny problem: rows=384, cols=384. Huge condition number > 1.445175027567e+06 with the ICC(0) preconditioner. Normally one does not expect to see this on such a tiny problem.

   Where is the matrix coming from? A discretization of some PDE? What kind of discretization? What PDE? Is it possible the matrix is singular?

   How big are the systems you want to solve? If smallish, say less than 1,000,000 unknowns, you may just want to use a direct solver (assuming that works).

   One cannot just black-box precondition any crazy matrix that comes along and expect it to converge quickly. One has to know where the matrix comes from to determine an appropriate preconditioner, if possible.

   Barry
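A minimal sketch of what "just use a direct solver" looks like from the command line, assuming the matrix is an ordinary assembled AIJ matrix as in the -ksp_view output quoted below. The sequential LU case needs no external package; the parallel variant assumes PETSc was configured with a parallel factorization package such as SuperLU_DIST, and the option for selecting the package has been spelled differently across PETSc releases (-pc_factor_mat_solver_package in the 3.4 era, -pc_factor_mat_solver_type in later ones):

    # sequential: replace GMRES/ICC with a direct LU solve
    ./msplinePDE_PFEM_2 -ksp_type preonly -pc_type lu -ksp_converged_reason

    # parallel, only if an external factorization package was installed, e.g.
    mpiexec -n 4 ./msplinePDE_PFEM_2 -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package superlu_dist

For a 384-unknown system like the one in this thread, a sequential LU factorization is essentially free, which is the point of Barry's suggestion.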
On Apr 25, 2014, at 7:33 AM, Oo wrote:

>
> Hi,
>
> Here is the output when I used -ksp_view.
>
> KSP Object: 1 MPI processes
>   type: gmres
>     GMRES: restart=200, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=200, initial guess is zero
>   tolerances:  relative=0.0001, absolute=0.0001, divergence=10000
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: icc
>     0 levels of fill
>     tolerance for zero pivot 2.22045e-14
>     using Manteuffel shift [POSITIVE_DEFINITE]
>     matrix ordering: natural
>     factor fill ratio given 1, needed 1
>       Factored matrix follows:
>         Matrix Object: 1 MPI processes
>           type: seqsbaij
>           rows=384, cols=384
>           package used to perform factorization: petsc
>           total: nonzeros=2560, allocated nonzeros=2560
>           total number of mallocs used during MatSetValues calls =0
>               block size is 1
>   linear system matrix = precond matrix:
>   Matrix Object: 1 MPI processes
>     type: seqaij
>     rows=384, cols=384
>     total: nonzeros=4736, allocated nonzeros=147456
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 96 nodes, limit used is 5
>
> Thanks,
> Meng
>
> ------------------ Original ------------------
> From: "Matthew Knepley"
> Send time: Friday, Apr 25, 2014 6:22 PM
> To: "Oo"
> Cc: "Barry Smith"; "petsc-users"
> Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
>
> On Fri, Apr 25, 2014 at 4:13 AM, Oo wrote:
> Hi,
>
> What can I do to solve this linear system?
>
> I am guessing that you are using the default GMRES/ILU. Run with -ksp_view to confirm.
>
> We would have to know something about the linear system to suggest improvements.
>
>    Matt
>
> At this moment,
> I use the command line "/Users/wumeng/MyWork/BuildMSplineTools/bin/msplinePDE_PFEM_2 -ksp_gmres_restart 200 -ksp_max_it 200 -ksp_monitor_true_residual -ksp_monitor_singular_value"
>
> The following are the output:
> ============================================
> 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03
> 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00
> 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06
> 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00
> 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02
> 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00
> 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03
> 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00
> 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05
> 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00
>
> =============================================
> 0 KSP preconditioned resid norm 1.414935390756e+03 true resid norm 1.787617427503e+01 ||r(i)||/||b|| 1.000000000000e+00
> 0 KSP Residual norm 1.414935390756e+03 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
> 1 KSP preconditioned resid norm 1.384179962960e+03 true resid norm 1.802958083039e+01 ||r(i)||/||b|| 1.008581621157e+00
> 1 KSP Residual norm 1.384179962960e+03 % max 1.895999321723e+01 min 1.895999321723e+01 max/min 1.000000000000e+00
>
> [... iterations 2 through 71 omitted: the preconditioned residual norm falls slowly from about 1.38e+03 to about 1.45e+00 while the max/min singular-value estimate grows steadily past 9.3e+03 ...]
>
> 72 KSP preconditioned resid norm 1.450214827437e+00 true resid norm 2.879267785157e-01 ||r(i)||/||b|| 1.610673369402e-02
> 72 KSP Residual norm 1.450214827437e+00 % max 1.549625503409e+02 min 1.648921859604e-02 max/min 9.397810420079e+03
> 73 KSP preconditioned
resid norm 1.110389920050e+00 true resid norm 2.664130281817e-01 ||r(i)||/||b|| 1.490324630331e-02 > 73 KSP Residual norm 1.110389920050e+00 % max 1.553700778851e+02 min 1.637230817799e-02 max/min 9.489809023627e+03 > 74 KSP preconditioned resid norm 8.588797630056e-01 true resid norm 2.680334659775e-01 ||r(i)||/||b|| 1.499389421102e-02 > 74 KSP Residual norm 8.588797630056e-01 % max 1.560105458550e+02 min 1.627972447765e-02 max/min 9.583119546597e+03 > 75 KSP preconditioned resid norm 6.037657330005e-01 true resid norm 2.938298885198e-01 ||r(i)||/||b|| 1.643695591681e-02 > 75 KSP Residual norm 6.037657330005e-01 % max 1.568417947105e+02 min 1.617453208191e-02 max/min 9.696836601902e+03 > 76 KSP preconditioned resid norm 4.778003132962e-01 true resid norm 3.104440343355e-01 ||r(i)||/||b|| 1.736635756394e-02 > 76 KSP Residual norm 4.778003132962e-01 % max 1.582399629678e+02 min 1.614387326366e-02 max/min 9.801858598824e+03 > 77 KSP preconditioned resid norm 4.523141317161e-01 true resid norm 3.278599563956e-01 ||r(i)||/||b|| 1.834061087967e-02 > 77 KSP Residual norm 4.523141317161e-01 % max 1.610754319747e+02 min 1.614204788287e-02 max/min 9.978624344533e+03 > 78 KSP preconditioned resid norm 4.522674526201e-01 true resid norm 3.287567680693e-01 ||r(i)||/||b|| 1.839077886640e-02 > 78 KSP Residual norm 4.522674526201e-01 % max 1.611450941232e+02 min 1.596556934579e-02 max/min 1.009328829014e+04 > 79 KSP preconditioned resid norm 4.452139421181e-01 true resid norm 3.198810083892e-01 ||r(i)||/||b|| 1.789426548812e-02 > 79 KSP Residual norm 4.452139421181e-01 % max 1.619065061486e+02 min 1.456184518464e-02 max/min 1.111854329555e+04 > 80 KSP preconditioned resid norm 4.217533807980e-01 true resid norm 2.940346905843e-01 ||r(i)||/||b|| 1.644841262233e-02 > 80 KSP Residual norm 4.217533807980e-01 % max 1.620409915505e+02 min 1.154094228288e-02 max/min 1.404053391644e+04 > 81 KSP preconditioned resid norm 3.815105809981e-01 true resid norm 2.630686686818e-01 ||r(i)||/||b|| 1.471616155865e-02 > 81 KSP Residual norm 3.815105809981e-01 % max 1.643140061303e+02 min 9.352799654900e-03 max/min 1.756843000953e+04 > 82 KSP preconditioned resid norm 3.257333222641e-01 true resid norm 2.350339902943e-01 ||r(i)||/||b|| 1.314789096807e-02 > 82 KSP Residual norm 3.257333222641e-01 % max 1.661730623934e+02 min 7.309343891643e-03 max/min 2.273433359502e+04 > 83 KSP preconditioned resid norm 2.653029571859e-01 true resid norm 1.993496448136e-01 ||r(i)||/||b|| 1.115169508568e-02 > 83 KSP Residual norm 2.653029571859e-01 % max 1.672909675648e+02 min 5.727931297058e-03 max/min 2.920617564857e+04 > 84 KSP preconditioned resid norm 2.125985080811e-01 true resid norm 1.651488251908e-01 ||r(i)||/||b|| 9.238488205020e-03 > 84 KSP Residual norm 2.125985080811e-01 % max 1.677205536916e+02 min 4.904920732181e-03 max/min 3.419434540321e+04 > 85 KSP preconditioned resid norm 1.823170233996e-01 true resid norm 1.538640960567e-01 ||r(i)||/||b|| 8.607216157635e-03 > 85 KSP Residual norm 1.823170233996e-01 % max 1.684045979154e+02 min 4.411887233900e-03 max/min 3.817064874674e+04 > 86 KSP preconditioned resid norm 1.568093845918e-01 true resid norm 1.468517035130e-01 ||r(i)||/||b|| 8.214940246925e-03 > 86 KSP Residual norm 1.568093845918e-01 % max 1.707938893954e+02 min 3.835536173319e-03 max/min 4.452933870980e+04 > 87 KSP preconditioned resid norm 1.416153038224e-01 true resid norm 1.326462010194e-01 ||r(i)||/||b|| 7.420279024951e-03 > 87 KSP Residual norm 1.416153038224e-01 % max 1.708004671154e+02 min 3.204409749322e-03 
max/min 5.330169375233e+04 > 88 KSP preconditioned resid norm 1.298764107063e-01 true resid norm 1.273603812747e-01 ||r(i)||/||b|| 7.124588254468e-03 > 88 KSP Residual norm 1.298764107063e-01 % max 1.708079667291e+02 min 2.881989285724e-03 max/min 5.926738436372e+04 > 89 KSP preconditioned resid norm 1.226436579597e-01 true resid norm 1.269051556519e-01 ||r(i)||/||b|| 7.099122759682e-03 > 89 KSP Residual norm 1.226436579597e-01 % max 1.708200970069e+02 min 2.600538747361e-03 max/min 6.568642639154e+04 > 90 KSP preconditioned resid norm 1.131214284449e-01 true resid norm 1.236585929815e-01 ||r(i)||/||b|| 6.917508806917e-03 > 90 KSP Residual norm 1.131214284449e-01 % max 1.708228433345e+02 min 2.097699611380e-03 max/min 8.143341516003e+04 > 91 KSP preconditioned resid norm 1.097877127886e-01 true resid norm 1.208660430826e-01 ||r(i)||/||b|| 6.761292501577e-03 > 91 KSP Residual norm 1.097877127886e-01 % max 1.734053778533e+02 min 1.669096618002e-03 max/min 1.038917555659e+05 > 92 KSP preconditioned resid norm 1.087819681706e-01 true resid norm 1.220967287438e-01 ||r(i)||/||b|| 6.830137526368e-03 > 92 KSP Residual norm 1.087819681706e-01 % max 1.735898650322e+02 min 1.410331102754e-03 max/min 1.230844761867e+05 > 93 KSP preconditioned resid norm 1.082792743495e-01 true resid norm 1.231464976286e-01 ||r(i)||/||b|| 6.888861997758e-03 > 93 KSP Residual norm 1.082792743495e-01 % max 1.744253420547e+02 min 1.160377474118e-03 max/min 1.503177594750e+05 > 94 KSP preconditioned resid norm 1.082786837568e-01 true resid norm 1.231972007582e-01 ||r(i)||/||b|| 6.891698350150e-03 > 94 KSP Residual norm 1.082786837568e-01 % max 1.744405973598e+02 min 9.020124135986e-04 max/min 1.933904619603e+05 > 95 KSP preconditioned resid norm 1.076120024876e-01 true resid norm 1.193390416348e-01 ||r(i)||/||b|| 6.675871458778e-03 > 95 KSP Residual norm 1.076120024876e-01 % max 1.744577996139e+02 min 6.759449987513e-04 max/min 2.580946673712e+05 > 96 KSP preconditioned resid norm 1.051338654734e-01 true resid norm 1.113305708771e-01 ||r(i)||/||b|| 6.227874553259e-03 > 96 KSP Residual norm 1.051338654734e-01 % max 1.754402425643e+02 min 5.544787352320e-04 max/min 3.164057184102e+05 > 97 KSP preconditioned resid norm 9.747169190114e-02 true resid norm 9.349805869502e-02 ||r(i)||/||b|| 5.230317027375e-03 > 97 KSP Residual norm 9.747169190114e-02 % max 1.756210909470e+02 min 4.482270392115e-04 max/min 3.918127992813e+05 > 98 KSP preconditioned resid norm 8.842962288098e-02 true resid norm 7.385511932515e-02 ||r(i)||/||b|| 4.131483514810e-03 > 98 KSP Residual norm 8.842962288098e-02 % max 1.757463096622e+02 min 3.987817611456e-04 max/min 4.407079931572e+05 > 99 KSP preconditioned resid norm 7.588962343124e-02 true resid norm 4.759948399141e-02 ||r(i)||/||b|| 2.662733270502e-03 > 99 KSP Residual norm 7.588962343124e-02 % max 1.758800802719e+02 min 3.549508940132e-04 max/min 4.955053874730e+05 > 100 KSP preconditioned resid norm 5.762080294705e-02 true resid norm 2.979101328333e-02 ||r(i)||/||b|| 1.666520633833e-03 > 100 KSP Residual norm 5.762080294705e-02 % max 1.767054165485e+02 min 3.399974141580e-04 max/min 5.197257661094e+05 > 101 KSP preconditioned resid norm 4.010101211334e-02 true resid norm 1.618156300742e-02 ||r(i)||/||b|| 9.052028000211e-04 > 101 KSP Residual norm 4.010101211334e-02 % max 1.780221758963e+02 min 3.330380731227e-04 max/min 5.345400128792e+05 > 102 KSP preconditioned resid norm 2.613215615582e-02 true resid norm 8.507285387866e-03 ||r(i)||/||b|| 4.759007859835e-04 > 102 KSP Residual norm 
2.613215615582e-02 % max 1.780247903707e+02 min 3.267877108360e-04 max/min 5.447719864228e+05 > 103 KSP preconditioned resid norm 1.756896108812e-02 true resid norm 4.899136161981e-03 ||r(i)||/||b|| 2.740595435358e-04 > 103 KSP Residual norm 1.756896108812e-02 % max 1.780337300599e+02 min 3.237486252629e-04 max/min 5.499134704135e+05 > 104 KSP preconditioned resid norm 1.142112016905e-02 true resid norm 3.312246267926e-03 ||r(i)||/||b|| 1.852883182367e-04 > 104 KSP Residual norm 1.142112016905e-02 % max 1.805569604942e+02 min 3.232308817378e-04 max/min 5.586005876774e+05 > 105 KSP preconditioned resid norm 7.345608733466e-03 true resid norm 1.880672268508e-03 ||r(i)||/||b|| 1.052055232609e-04 > 105 KSP Residual norm 7.345608733466e-03 % max 1.817568861663e+02 min 3.229958821968e-04 max/min 5.627219917794e+05 > 106 KSP preconditioned resid norm 4.810582452316e-03 true resid norm 1.418216565286e-03 ||r(i)||/||b|| 7.933557502107e-05 > 106 KSP Residual norm 4.810582452316e-03 % max 1.845797149153e+02 min 3.220343456392e-04 max/min 5.731677922395e+05 > 107 KSP preconditioned resid norm 3.114068270187e-03 true resid norm 1.177898700463e-03 ||r(i)||/||b|| 6.589210209864e-05 > 107 KSP Residual norm 3.114068270187e-03 % max 1.850561815508e+02 min 3.216059660171e-04 max/min 5.754127755856e+05 > 108 KSP preconditioned resid norm 2.293796939903e-03 true resid norm 1.090384984298e-03 ||r(i)||/||b|| 6.099655147246e-05 > 108 KSP Residual norm 2.293796939903e-03 % max 1.887275970943e+02 min 3.204019028106e-04 max/min 5.890339459248e+05 > 109 KSP preconditioned resid norm 1.897140553486e-03 true resid norm 1.085409353133e-03 ||r(i)||/||b|| 6.071821276936e-05 > 109 KSP Residual norm 1.897140553486e-03 % max 1.900700995337e+02 min 3.201092729524e-04 max/min 5.937663029274e+05 > 110 KSP preconditioned resid norm 1.638607873536e-03 true resid norm 1.080912735078e-03 ||r(i)||/||b|| 6.046667024205e-05 > 110 KSP Residual norm 1.638607873536e-03 % max 1.941847763082e+02 min 3.201003961856e-04 max/min 6.066371008040e+05 > 111 KSP preconditioned resid norm 1.445734513987e-03 true resid norm 1.072337220355e-03 ||r(i)||/||b|| 5.998695268109e-05 > 111 KSP Residual norm 1.445734513987e-03 % max 1.968706347099e+02 min 3.200841174449e-04 max/min 6.150590547303e+05 > 112 KSP preconditioned resid norm 1.309787940098e-03 true resid norm 1.071880522659e-03 ||r(i)||/||b|| 5.996140483796e-05 > 112 KSP Residual norm 1.309787940098e-03 % max 2.005995546657e+02 min 3.200727858161e-04 max/min 6.267310547950e+05 > 113 KSP preconditioned resid norm 1.203600185791e-03 true resid norm 1.068704252153e-03 ||r(i)||/||b|| 5.978372305568e-05 > 113 KSP Residual norm 1.203600185791e-03 % max 2.044005133646e+02 min 3.200669375682e-04 max/min 6.386180182109e+05 > 114 KSP preconditioned resid norm 1.120060134211e-03 true resid norm 1.067737533794e-03 ||r(i)||/||b|| 5.972964446230e-05 > 114 KSP Residual norm 1.120060134211e-03 % max 2.080860182134e+02 min 3.200663739818e-04 max/min 6.501339569812e+05 > 115 KSP preconditioned resid norm 1.051559355475e-03 true resid norm 1.066440021011e-03 ||r(i)||/||b|| 5.965706110284e-05 > 115 KSP Residual norm 1.051559355475e-03 % max 2.120208487134e+02 min 3.200660278464e-04 max/min 6.624284687130e+05 > 116 KSP preconditioned resid norm 9.943175666627e-04 true resid norm 1.065634036988e-03 ||r(i)||/||b|| 5.961197404955e-05 > 116 KSP Residual norm 9.943175666627e-04 % max 2.158491989548e+02 min 3.200660249513e-04 max/min 6.743896012946e+05 > 117 KSP preconditioned resid norm 9.454894909455e-04 true resid norm 
1.064909725825e-03 ||r(i)||/||b|| 5.957145580713e-05 > 117 KSP Residual norm 9.454894909455e-04 % max 2.197604049280e+02 min 3.200659829252e-04 max/min 6.866096887885e+05 > 118 KSP preconditioned resid norm 9.032188050175e-04 true resid norm 1.064337854187e-03 ||r(i)||/||b|| 5.953946508976e-05 > 118 KSP Residual norm 9.032188050175e-04 % max 2.236450290076e+02 min 3.200659636119e-04 max/min 6.987466786029e+05 > 119 KSP preconditioned resid norm 8.661523005930e-04 true resid norm 1.063851383334e-03 ||r(i)||/||b|| 5.951225172494e-05 > 119 KSP Residual norm 8.661523005930e-04 % max 2.275386631435e+02 min 3.200659439058e-04 max/min 7.109118213789e+05 > 120 KSP preconditioned resid norm 8.333044554060e-04 true resid norm 1.063440179004e-03 ||r(i)||/||b|| 5.948924879800e-05 > 120 KSP Residual norm 8.333044554060e-04 % max 2.314181044639e+02 min 3.200659279880e-04 max/min 7.230326136825e+05 > 121 KSP preconditioned resid norm 8.039309218182e-04 true resid norm 1.063085844171e-03 ||r(i)||/||b|| 5.946942717246e-05 > 121 KSP Residual norm 8.039309218182e-04 % max 2.352840091878e+02 min 3.200659140872e-04 max/min 7.351111094065e+05 > 122 KSP preconditioned resid norm 7.774595735272e-04 true resid norm 1.062777951335e-03 ||r(i)||/||b|| 5.945220352987e-05 > 122 KSP Residual norm 7.774595735272e-04 % max 2.391301402333e+02 min 3.200659020264e-04 max/min 7.471278218621e+05 > 123 KSP preconditioned resid norm 7.534419441080e-04 true resid norm 1.062507774643e-03 ||r(i)||/||b|| 5.943708974281e-05 > 123 KSP Residual norm 7.534419441080e-04 % max 2.429535997478e+02 min 3.200658914252e-04 max/min 7.590736978126e+05 > 124 KSP preconditioned resid norm 7.315209774625e-04 true resid norm 1.062268823392e-03 ||r(i)||/||b|| 5.942372271877e-05 > 124 KSP Residual norm 7.315209774625e-04 % max 2.467514538222e+02 min 3.200658820413e-04 max/min 7.709395710924e+05 > 125 KSP preconditioned resid norm 7.114083600074e-04 true resid norm 1.062055971251e-03 ||r(i)||/||b|| 5.941181568889e-05 > 125 KSP Residual norm 7.114083600074e-04 % max 2.505215705167e+02 min 3.200658736748e-04 max/min 7.827187811071e+05 > 126 KSP preconditioned resid norm 6.928683961266e-04 true resid norm 1.061865165985e-03 ||r(i)||/||b|| 5.940114196966e-05 > 126 KSP Residual norm 6.928683961266e-04 % max 2.542622648377e+02 min 3.200658661692e-04 max/min 7.944060636045e+05 > 127 KSP preconditioned resid norm 6.757062709768e-04 true resid norm 1.061693150669e-03 ||r(i)||/||b|| 5.939151936730e-05 > 127 KSP Residual norm 6.757062709768e-04 % max 2.579722822206e+02 min 3.200658593982e-04 max/min 8.059974990947e+05 > 128 KSP preconditioned resid norm 6.597593638309e-04 true resid norm 1.061537280201e-03 ||r(i)||/||b|| 5.938279991397e-05 > 128 KSP Residual norm 6.597593638309e-04 % max 2.616507118184e+02 min 3.200658532588e-04 max/min 8.174902419435e+05 > 129 KSP preconditioned resid norm 6.448907155810e-04 true resid norm 1.061395383620e-03 ||r(i)||/||b|| 5.937486216514e-05 > 129 KSP Residual norm 6.448907155810e-04 % max 2.652969302909e+02 min 3.200658476664e-04 max/min 8.288823447585e+05 > 130 KSP preconditioned resid norm 6.309840465804e-04 true resid norm 1.061265662452e-03 ||r(i)||/||b|| 5.936760551361e-05 > 130 KSP Residual norm 6.309840465804e-04 % max 2.689105505574e+02 min 3.200658425513e-04 max/min 8.401725982813e+05 > 131 KSP preconditioned resid norm 6.179399078869e-04 true resid norm 1.061146614178e-03 ||r(i)||/||b|| 5.936094590780e-05 > 131 KSP Residual norm 6.179399078869e-04 % max 2.724913796166e+02 min 3.200658378548e-04 max/min 
8.513603996069e+05 > 132 KSP preconditioned resid norm 6.056726732840e-04 true resid norm 1.061036973709e-03 ||r(i)||/||b|| 5.935481257818e-05 > 132 KSP Residual norm 6.056726732840e-04 % max 2.760393830832e+02 min 3.200658335276e-04 max/min 8.624456413884e+05 > 133 KSP preconditioned resid norm 5.941081632891e-04 true resid norm 1.060935668299e-03 ||r(i)||/||b|| 5.934914551493e-05 > 133 KSP Residual norm 5.941081632891e-04 % max 2.795546556158e+02 min 3.200658295276e-04 max/min 8.734286194450e+05 > 134 KSP preconditioned resid norm 5.831817499918e-04 true resid norm 1.060841782314e-03 ||r(i)||/||b|| 5.934389349718e-05 > 134 KSP Residual norm 5.831817499918e-04 % max 2.830373962684e+02 min 3.200658258193e-04 max/min 8.843099557533e+05 > 135 KSP preconditioned resid norm 5.728368317981e-04 true resid norm 1.060754529477e-03 ||r(i)||/||b|| 5.933901254021e-05 > 135 KSP Residual norm 5.728368317981e-04 % max 2.864878879810e+02 min 3.200658223718e-04 max/min 8.950905343719e+05 > 136 KSP preconditioned resid norm 5.630235956754e-04 true resid norm 1.060673230868e-03 ||r(i)||/||b|| 5.933446466506e-05 > 136 KSP Residual norm 5.630235956754e-04 % max 2.899064805338e+02 min 3.200658191584e-04 max/min 9.057714481857e+05 > 137 KSP preconditioned resid norm 5.536980049705e-04 true resid norm 1.060597297101e-03 ||r(i)||/||b|| 5.933021690118e-05 > 137 KSP Residual norm 5.536980049705e-04 % max 2.932935763968e+02 min 3.200658161562e-04 max/min 9.163539546933e+05 > 138 KSP preconditioned resid norm 5.448209657758e-04 true resid norm 1.060526214124e-03 ||r(i)||/||b|| 5.932624049239e-05 > 138 KSP Residual norm 5.448209657758e-04 % max 2.966496189942e+02 min 3.200658133448e-04 max/min 9.268394393456e+05 > 139 KSP preconditioned resid norm 5.363576357737e-04 true resid norm 1.060459531517e-03 ||r(i)||/||b|| 5.932251024194e-05 > 139 KSP Residual norm 5.363576357737e-04 % max 2.999750829834e+02 min 3.200658107070e-04 max/min 9.372293851716e+05 > 140 KSP preconditioned resid norm 5.282768476492e-04 true resid norm 1.060396852955e-03 ||r(i)||/||b|| 5.931900397932e-05 > 140 KSP Residual norm 5.282768476492e-04 % max 3.032704662136e+02 min 3.200658082268e-04 max/min 9.475253476584e+05 > 141 KSP preconditioned resid norm 5.205506252810e-04 true resid norm 1.060337828321e-03 ||r(i)||/||b|| 5.931570211878e-05 > 141 KSP Residual norm 5.205506252810e-04 % max 3.065362830844e+02 min 3.200658058907e-04 max/min 9.577289339963e+05 > 142 KSP preconditioned resid norm 5.131537755696e-04 true resid norm 1.060282147141e-03 ||r(i)||/||b|| 5.931258729233e-05 > 142 KSP Residual norm 5.131537755696e-04 % max 3.097730590695e+02 min 3.200658036863e-04 max/min 9.678417859756e+05 > 143 KSP preconditioned resid norm 5.060635423123e-04 true resid norm 1.060229533174e-03 ||r(i)||/||b|| 5.930964404698e-05 > 143 KSP Residual norm 5.060635423123e-04 % max 3.129813262144e+02 min 3.200658016030e-04 max/min 9.778655659143e+05 > 144 KSP preconditioned resid norm 4.992593112791e-04 true resid norm 1.060179739774e-03 ||r(i)||/||b|| 5.930685858524e-05 > 144 KSP Residual norm 4.992593112791e-04 % max 3.161616194433e+02 min 3.200657996311e-04 max/min 9.878019451242e+05 > 145 KSP preconditioned resid norm 4.927223577718e-04 true resid norm 1.060132546053e-03 ||r(i)||/||b|| 5.930421855048e-05 > 145 KSP Residual norm 4.927223577718e-04 % max 3.193144735429e+02 min 3.200657977617e-04 max/min 9.976525944849e+05 > 146 KSP preconditioned resid norm 4.864356296191e-04 true resid norm 1.060087753599e-03 ||r(i)||/||b|| 5.930171284355e-05 > 146 KSP Residual norm 
4.864356296191e-04 % max 3.224404207086e+02 min 3.200657959872e-04 max/min 1.007419176779e+06 > 147 KSP preconditioned resid norm 4.803835598739e-04 true resid norm 1.060045183705e-03 ||r(i)||/||b|| 5.929933146747e-05 > 147 KSP Residual norm 4.803835598739e-04 % max 3.255399885620e+02 min 3.200657943002e-04 max/min 1.017103340498e+06 > 148 KSP preconditioned resid norm 4.745519045267e-04 true resid norm 1.060004674955e-03 ||r(i)||/||b|| 5.929706539253e-05 > 148 KSP Residual norm 4.745519045267e-04 % max 3.286136985587e+02 min 3.200657926948e-04 max/min 1.026706714866e+06 > 149 KSP preconditioned resid norm 4.689276013781e-04 true resid norm 1.059966081181e-03 ||r(i)||/||b|| 5.929490644211e-05 > 149 KSP Residual norm 4.689276013781e-04 % max 3.316620647237e+02 min 3.200657911651e-04 max/min 1.036230905891e+06 > 150 KSP preconditioned resid norm 4.634986468865e-04 true resid norm 1.059929269736e-03 ||r(i)||/||b|| 5.929284719586e-05 > 150 KSP Residual norm 4.634986468865e-04 % max 3.346855926588e+02 min 3.200657897058e-04 max/min 1.045677493263e+06 > 151 KSP preconditioned resid norm 4.582539883466e-04 true resid norm 1.059894119959e-03 ||r(i)||/||b|| 5.929088090393e-05 > 151 KSP Residual norm 4.582539883466e-04 % max 3.376847787762e+02 min 3.200657883122e-04 max/min 1.055048027960e+06 > 152 KSP preconditioned resid norm 4.531834291922e-04 true resid norm 1.059860521790e-03 ||r(i)||/||b|| 5.928900140955e-05 > 152 KSP Residual norm 4.531834291922e-04 % max 3.406601097208e+02 min 3.200657869798e-04 max/min 1.064344030443e+06 > 153 KSP preconditioned resid norm 4.482775455769e-04 true resid norm 1.059828374748e-03 ||r(i)||/||b|| 5.928720309181e-05 > 153 KSP Residual norm 4.482775455769e-04 % max 3.436120619489e+02 min 3.200657857049e-04 max/min 1.073566989337e+06 > 154 KSP preconditioned resid norm 4.435276126791e-04 true resid norm 1.059797586777e-03 ||r(i)||/||b|| 5.928548080094e-05 > 154 KSP Residual norm 4.435276126791e-04 % max 3.465411014367e+02 min 3.200657844837e-04 max/min 1.082718360526e+06 > 155 KSP preconditioned resid norm 4.389255394186e-04 true resid norm 1.059768073502e-03 ||r(i)||/||b|| 5.928382981713e-05 > 155 KSP Residual norm 4.389255394186e-04 % max 3.494476834965e+02 min 3.200657833130e-04 max/min 1.091799566575e+06 > 156 KSP preconditioned resid norm 4.344638104739e-04 true resid norm 1.059739757336e-03 ||r(i)||/||b|| 5.928224580001e-05 > 156 KSP Residual norm 4.344638104739e-04 % max 3.523322526810e+02 min 3.200657821896e-04 max/min 1.100811996430e+06 > 157 KSP preconditioned resid norm 4.301354346542e-04 true resid norm 1.059712566921e-03 ||r(i)||/||b|| 5.928072475785e-05 > 157 KSP Residual norm 4.301354346542e-04 % max 3.551952427605e+02 min 3.200657811108e-04 max/min 1.109757005350e+06 > 158 KSP preconditioned resid norm 4.259338988194e-04 true resid norm 1.059686436414e-03 ||r(i)||/||b|| 5.927926300733e-05 > 158 KSP Residual norm 4.259338988194e-04 % max 3.580370767604e+02 min 3.200657800738e-04 max/min 1.118635915023e+06 > 159 KSP preconditioned resid norm 4.218531266563e-04 true resid norm 1.059661305045e-03 ||r(i)||/||b|| 5.927785714894e-05 > 159 KSP Residual norm 4.218531266563e-04 % max 3.608581670458e+02 min 3.200657790764e-04 max/min 1.127450013829e+06 > 160 KSP preconditioned resid norm 4.178874417186e-04 true resid norm 1.059637116561e-03 ||r(i)||/||b|| 5.927650403596e-05 > 160 KSP Residual norm 4.178874417186e-04 % max 3.636589154466e+02 min 3.200657781164e-04 max/min 1.136200557232e+06 > 161 KSP preconditioned resid norm 4.140315342180e-04 true resid norm 
1.059613818894e-03 ||r(i)||/||b|| 5.927520075559e-05 > 161 KSP Residual norm 4.140315342180e-04 % max 3.664397134134e+02 min 3.200657771916e-04 max/min 1.144888768267e+06 > 162 KSP preconditioned resid norm 4.102804311252e-04 true resid norm 1.059591363713e-03 ||r(i)||/||b|| 5.927394460419e-05 > 162 KSP Residual norm 4.102804311252e-04 % max 3.692009421979e+02 min 3.200657763002e-04 max/min 1.153515838106e+06 > 163 KSP preconditioned resid norm 4.066294691985e-04 true resid norm 1.059569706138e-03 ||r(i)||/||b|| 5.927273307117e-05 > 163 KSP Residual norm 4.066294691985e-04 % max 3.719429730532e+02 min 3.200657754405e-04 max/min 1.162082926678e+06 > 164 KSP preconditioned resid norm 4.030742706055e-04 true resid norm 1.059548804412e-03 ||r(i)||/||b|| 5.927156382068e-05 > 164 KSP Residual norm 4.030742706055e-04 % max 3.746661674483e+02 min 3.200657746106e-04 max/min 1.170591163345e+06 > 165 KSP preconditioned resid norm 3.996107208498e-04 true resid norm 1.059528619651e-03 ||r(i)||/||b|| 5.927043467745e-05 > 165 KSP Residual norm 3.996107208498e-04 % max 3.773708772938e+02 min 3.200657738091e-04 max/min 1.179041647605e+06 > 166 KSP preconditioned resid norm 3.962349487489e-04 true resid norm 1.059509115572e-03 ||r(i)||/||b|| 5.926934361186e-05 > 166 KSP Residual norm 3.962349487489e-04 % max 3.800574451754e+02 min 3.200657730346e-04 max/min 1.187435449820e+06 > 167 KSP preconditioned resid norm 3.929433082418e-04 true resid norm 1.059490258328e-03 ||r(i)||/||b|| 5.926828873043e-05 > 167 KSP Residual norm 3.929433082418e-04 % max 3.827262045917e+02 min 3.200657722858e-04 max/min 1.195773611962e+06 > 168 KSP preconditioned resid norm 3.897323618319e-04 true resid norm 1.059472016256e-03 ||r(i)||/||b|| 5.926726826196e-05 > 168 KSP Residual norm 3.897323618319e-04 % max 3.853774801959e+02 min 3.200657715611e-04 max/min 1.204057148368e+06 > 169 KSP preconditioned resid norm 3.865988654946e-04 true resid norm 1.059454359751e-03 ||r(i)||/||b|| 5.926628055033e-05 > 169 KSP Residual norm 3.865988654946e-04 % max 3.880115880375e+02 min 3.200657708600e-04 max/min 1.212287046487e+06 > 170 KSP preconditioned resid norm 3.835397548978e-04 true resid norm 1.059437261066e-03 ||r(i)||/||b|| 5.926532404338e-05 > 170 KSP Residual norm 3.835397548978e-04 % max 3.906288358040e+02 min 3.200657701808e-04 max/min 1.220464267652e+06 > 171 KSP preconditioned resid norm 3.805521328045e-04 true resid norm 1.059420694153e-03 ||r(i)||/||b|| 5.926439728400e-05 > 171 KSP Residual norm 3.805521328045e-04 % max 3.932295230610e+02 min 3.200657695227e-04 max/min 1.228589747812e+06 > 172 KSP preconditioned resid norm 3.776332575372e-04 true resid norm 1.059404634604e-03 ||r(i)||/||b|| 5.926349890668e-05 > 172 KSP Residual norm 3.776332575372e-04 % max 3.958139414889e+02 min 3.200657688847e-04 max/min 1.236664398283e+06 > 173 KSP preconditioned resid norm 3.747805324013e-04 true resid norm 1.059389059461e-03 ||r(i)||/||b|| 5.926262762725e-05 > 173 KSP Residual norm 3.747805324013e-04 % max 3.983823751168e+02 min 3.200657682658e-04 max/min 1.244689106477e+06 > 174 KSP preconditioned resid norm 3.719914959745e-04 true resid norm 1.059373947132e-03 ||r(i)||/||b|| 5.926178223783e-05 > 174 KSP Residual norm 3.719914959745e-04 % max 4.009351005517e+02 min 3.200657676654e-04 max/min 1.252664736614e+06 > 175 KSP preconditioned resid norm 3.692638131792e-04 true resid norm 1.059359277291e-03 ||r(i)||/||b|| 5.926096160129e-05 > 175 KSP Residual norm 3.692638131792e-04 % max 4.034723872035e+02 min 3.200657670824e-04 max/min 
1.260592130428e+06 > 176 KSP preconditioned resid norm 3.665952670646e-04 true resid norm 1.059345030772e-03 ||r(i)||/||b|| 5.926016464562e-05 > 176 KSP Residual norm 3.665952670646e-04 % max 4.059944975044e+02 min 3.200657665165e-04 max/min 1.268472107852e+06 > 177 KSP preconditioned resid norm 3.639837512333e-04 true resid norm 1.059331189535e-03 ||r(i)||/||b|| 5.925939036157e-05 > 177 KSP Residual norm 3.639837512333e-04 % max 4.085016871234e+02 min 3.200657659664e-04 max/min 1.276305467690e+06 > 178 KSP preconditioned resid norm 3.614272628526e-04 true resid norm 1.059317736504e-03 ||r(i)||/||b|| 5.925863779385e-05 > 178 KSP Residual norm 3.614272628526e-04 % max 4.109942051749e+02 min 3.200657654318e-04 max/min 1.284092988266e+06 > 179 KSP preconditioned resid norm 3.589238961979e-04 true resid norm 1.059304655605e-03 ||r(i)||/||b|| 5.925790604337e-05 > 179 KSP Residual norm 3.589238961979e-04 % max 4.134722944217e+02 min 3.200657649118e-04 max/min 1.291835428058e+06 > 180 KSP preconditioned resid norm 3.564718366825e-04 true resid norm 1.059291931579e-03 ||r(i)||/||b|| 5.925719425653e-05 > 180 KSP Residual norm 3.564718366825e-04 % max 4.159361914728e+02 min 3.200657644063e-04 max/min 1.299533526319e+06 > 181 KSP preconditioned resid norm 3.540693553287e-04 true resid norm 1.059279550018e-03 ||r(i)||/||b|| 5.925650162730e-05 > 181 KSP Residual norm 3.540693553287e-04 % max 4.183861269743e+02 min 3.200657639142e-04 max/min 1.307188003670e+06 > 182 KSP preconditioned resid norm 3.517148036444e-04 true resid norm 1.059267497309e-03 ||r(i)||/||b|| 5.925582739414e-05 > 182 KSP Residual norm 3.517148036444e-04 % max 4.208223257952e+02 min 3.200657634351e-04 max/min 1.314799562686e+06 > 183 KSP preconditioned resid norm 3.494066088688e-04 true resid norm 1.059255760488e-03 ||r(i)||/||b|| 5.925517083190e-05 > 183 KSP Residual norm 3.494066088688e-04 % max 4.232450072075e+02 min 3.200657629684e-04 max/min 1.322368888450e+06 > 184 KSP preconditioned resid norm 3.471432695577e-04 true resid norm 1.059244327320e-03 ||r(i)||/||b|| 5.925453125615e-05 > 184 KSP Residual norm 3.471432695577e-04 % max 4.256543850606e+02 min 3.200657625139e-04 max/min 1.329896649105e+06 > 185 KSP preconditioned resid norm 3.449233514781e-04 true resid norm 1.059233186159e-03 ||r(i)||/||b|| 5.925390801536e-05 > 185 KSP Residual norm 3.449233514781e-04 % max 4.280506679492e+02 min 3.200657620711e-04 max/min 1.337383496377e+06 > 186 KSP preconditioned resid norm 3.427454837895e-04 true resid norm 1.059222325963e-03 ||r(i)||/||b|| 5.925330049183e-05 > 186 KSP Residual norm 3.427454837895e-04 % max 4.304340593770e+02 min 3.200657616393e-04 max/min 1.344830066085e+06 > 187 KSP preconditioned resid norm 3.406083554860e-04 true resid norm 1.059211736238e-03 ||r(i)||/||b|| 5.925270809861e-05 > 187 KSP Residual norm 3.406083554860e-04 % max 4.328047579144e+02 min 3.200657612184e-04 max/min 1.352236978635e+06 > 188 KSP preconditioned resid norm 3.385107120797e-04 true resid norm 1.059201407008e-03 ||r(i)||/||b|| 5.925213027752e-05 > 188 KSP Residual norm 3.385107120797e-04 % max 4.351629573503e+02 min 3.200657608077e-04 max/min 1.359604839494e+06 > 189 KSP preconditioned resid norm 3.364513525062e-04 true resid norm 1.059191328770e-03 ||r(i)||/||b|| 5.925156649705e-05 > 189 KSP Residual norm 3.364513525062e-04 % max 4.375088468402e+02 min 3.200657604069e-04 max/min 1.366934239651e+06 > 190 KSP preconditioned resid norm 3.344291262348e-04 true resid norm 1.059181492481e-03 ||r(i)||/||b|| 5.925101625132e-05 > 190 KSP Residual norm 
3.344291262348e-04 % max 4.398426110478e+02 min 3.200657600159e-04 max/min 1.374225756063e+06 > 191 KSP preconditioned resid norm 3.324429305670e-04 true resid norm 1.059171889545e-03 ||r(i)||/||b|| 5.925047905943e-05 > 191 KSP Residual norm 3.324429305670e-04 % max 4.421644302826e+02 min 3.200657596338e-04 max/min 1.381479952084e+06 > 192 KSP preconditioned resid norm 3.304917081101e-04 true resid norm 1.059162511741e-03 ||r(i)||/||b|| 5.924995446149e-05 > 192 KSP Residual norm 3.304917081101e-04 % max 4.444744806328e+02 min 3.200657592613e-04 max/min 1.388697377872e+06 > 193 KSP preconditioned resid norm 3.285744444114e-04 true resid norm 1.059153351271e-03 ||r(i)||/||b|| 5.924944202127e-05 > 193 KSP Residual norm 3.285744444114e-04 % max 4.467729340931e+02 min 3.200657588967e-04 max/min 1.395878570808e+06 > 194 KSP preconditioned resid norm 3.266901657422e-04 true resid norm 1.059144400627e-03 ||r(i)||/||b|| 5.924894131886e-05 > 194 KSP Residual norm 3.266901657422e-04 % max 4.490599586887e+02 min 3.200657585410e-04 max/min 1.403024055856e+06 > 195 KSP preconditioned resid norm 3.248379370193e-04 true resid norm 1.059135652724e-03 ||r(i)||/||b|| 5.924845195782e-05 > 195 KSP Residual norm 3.248379370193e-04 % max 4.513357185943e+02 min 3.200657581935e-04 max/min 1.410134345960e+06 > 196 KSP preconditioned resid norm 3.230168598555e-04 true resid norm 1.059127100716e-03 ||r(i)||/||b|| 5.924797355524e-05 > 196 KSP Residual norm 3.230168598555e-04 % max 4.536003742498e+02 min 3.200657578528e-04 max/min 1.417209942397e+06 > 197 KSP preconditioned resid norm 3.212260707284e-04 true resid norm 1.059118738110e-03 ||r(i)||/||b|| 5.924750574787e-05 > 197 KSP Residual norm 3.212260707284e-04 % max 4.558540824713e+02 min 3.200657575203e-04 max/min 1.424251335110e+06 > 198 KSP preconditioned resid norm 3.194647392600e-04 true resid norm 1.059110558683e-03 ||r(i)||/||b|| 5.924704818760e-05 > 198 KSP Residual norm 3.194647392600e-04 % max 4.580969965587e+02 min 3.200657571950e-04 max/min 1.431259003067e+06 > 199 KSP preconditioned resid norm 3.177320665985e-04 true resid norm 1.059102556481e-03 ||r(i)||/||b|| 5.924660054140e-05 > 199 KSP Residual norm 3.177320665985e-04 % max 4.603292663993e+02 min 3.200657568766e-04 max/min 1.438233414569e+06 > 200 KSP preconditioned resid norm 3.160272838961e-04 true resid norm 1.059094725800e-03 ||r(i)||/||b|| 5.924616249011e-05 > 200 KSP Residual norm 3.160272838961e-04 % max 4.625510385677e+02 min 3.200657565653e-04 max/min 1.445175027567e+06
>
> Thanks,
> Meng
>
> ------------------ Original ------------------
> From: "Barry Smith"
> Send time: Friday, Apr 25, 2014 6:54 AM
> To: "Oo"
> Cc: "Dave May"; "petsc-users"
> Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3
>
> Run the bad case again with -ksp_gmres_restart 200 -ksp_max_it 200 and send the output
>
> On Apr 24, 2014, at 5:46 PM, Oo wrote:
> >
> > Thanks,
> > The following is the output at the beginning.
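The options Barry suggests above (-ksp_gmres_restart 200 -ksp_max_it 200) are ordinary PETSc runtime options, and the extra columns in the quoted monitor lines come from two more of them: -ksp_monitor_true_residual prints the "true resid norm ... ||r(i)||/||b|| ..." values, while -ksp_monitor_singular_value prints the "% max ... min ... max/min" estimates of the extreme singular values of the preconditioned operator. A minimal sketch, not code from this thread, of a GMRES solve that picks these options up at run time follows; the function name, the executable name in the comment, and the objects A, b and x are placeholders assumed to be assembled elsewhere.

/*
 * Minimal sketch: a KSP solve that honours command-line options such as
 *   mpiexec -n 4 ./app -ksp_gmres_restart 200 -ksp_max_it 200 \
 *       -ksp_monitor_true_residual -ksp_monitor_singular_value
 * (executable name and process count are placeholders).
 */
#include <petscksp.h>

PetscErrorCode solve_with_runtime_options(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr); /* PETSc >= 3.5; older versions take a MatStructure flag as a fourth argument */
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  /* Programmatic equivalents of -ksp_gmres_restart 200 and -ksp_max_it 200 */
  ierr = KSPGMRESSetRestart(ksp, 200);CHKERRQ(ierr);
  ierr = KSPSetTolerances(ksp, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, 200);CHKERRQ(ierr);
  /* Command-line options, including the monitors, override the settings above */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}

The max/min column reported by the singular-value monitor is an estimate of the condition number of the preconditioned operator, which is what the quoted runs are being used to examine.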
> > > > > > 0 KSP preconditioned resid norm 7.463734841673e+00 true resid norm 7.520241011357e-02 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 7.463734841673e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 3.449001344285e-03 true resid norm 7.834435231711e-05 ||r(i)||/||b|| 1.041779807307e-03 > > 1 KSP Residual norm 3.449001344285e-03 % max 9.999991695261e-01 min 9.999991695261e-01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 1.811463883605e-05 true resid norm 3.597611565181e-07 ||r(i)||/||b|| 4.783904611232e-06 > > 2 KSP Residual norm 1.811463883605e-05 % max 1.000686014764e+00 min 9.991339510077e-01 max/min 1.001553409084e+00 > > 0 KSP preconditioned resid norm 9.374463936067e+00 true resid norm 9.058107112571e-02 ||r(i)||/||b|| 1.000000000000e+00 > > 0 KSP Residual norm 9.374463936067e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > > 1 KSP preconditioned resid norm 2.595582184655e-01 true resid norm 6.637387889158e-03 ||r(i)||/||b|| 7.327566131280e-02 > > 1 KSP Residual norm 2.595582184655e-01 % max 9.933440157684e-01 min 9.933440157684e-01 max/min 1.000000000000e+00 > > 2 KSP preconditioned resid norm 6.351429855766e-03 true resid norm 1.844857600919e-04 ||r(i)||/||b|| 2.036692189651e-03 > > 2 KSP Residual norm 6.351429855766e-03 % max 1.000795215571e+00 min 8.099278726624e-01 max/min 1.235659679523e+00 > > 3 KSP preconditioned resid norm 1.883016084950e-04 true resid norm 3.876682412610e-06 ||r(i)||/||b|| 4.279793078656e-05 > > 3 KSP Residual norm 1.883016084950e-04 % max 1.184638644500e+00 min 8.086172954187e-01 max/min 1.465017692809e+00 > > > > > > When solving the linear system: > > Output: > > ......... > > ........ > > ........ 
> > +00 max/min 7.343521316293e+01 > > 9636 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064989678e-01 ||r(i)||/||b|| 8.917260288210e-03 > > 9636 KSP Residual norm 1.080641720588e+00 % max 9.802496207537e+01 min 1.168945135768e+00 max/min 8.385762434515e+01 > > 9637 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064987918e-01 ||r(i)||/||b|| 8.917260278361e-03 > > 9637 KSP Residual norm 1.080641720588e+00 % max 1.122401280488e+02 min 1.141681830513e+00 max/min 9.831121512938e+01 > > 9638 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064986859e-01 ||r(i)||/||b|| 8.917260272440e-03 > > 9638 KSP Residual norm 1.080641720588e+00 % max 1.134941067042e+02 min 1.090790142559e+00 max/min 1.040476094127e+02 > > 9639 KSP preconditioned resid norm 1.080641720588e+00 true resid norm 1.594064442988e-01 ||r(i)||/||b|| 8.917257230005e-03 > > 9639 KSP Residual norm 1.080641720588e+00 % max 1.139914662925e+02 min 4.119649156568e-01 max/min 2.767018791170e+02 > > 9640 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063809183e-01 ||r(i)||/||b|| 8.917253684473e-03 > > 9640 KSP Residual norm 1.080641720586e+00 % max 1.140011421526e+02 min 2.894486589274e-01 max/min 3.938561766878e+02 > > 9641 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594063839202e-01 ||r(i)||/||b|| 8.917253852403e-03 > > 9641 KSP Residual norm 1.080641720586e+00 % max 1.140392299942e+02 min 2.880532973299e-01 max/min 3.958962839563e+02 > > 9642 KSP preconditioned resid norm 1.080641720586e+00 true resid norm 1.594064091676e-01 ||r(i)||/||b|| 8.917255264750e-03 > > 9642 KSP Residual norm 1.080641720586e+00 % max 1.140392728591e+02 min 2.501717295613e-01 max/min 4.558439639005e+02 > > 9643 KSP preconditioned resid norm 1.080641720583e+00 true resid norm 1.594064099334e-01 ||r(i)||/||b|| 8.917255307591e-03 > > 9643 KSP Residual norm 1.080641720583e+00 % max 1.141360429432e+02 min 2.500714638111e-01 max/min 4.564137035220e+02 > > 9644 KSP preconditioned resid norm 1.080641720582e+00 true resid norm 1.594064169337e-01 ||r(i)||/||b|| 8.917255699186e-03 > > 9644 KSP Residual norm 1.080641720582e+00 % max 1.141719168213e+02 min 2.470526471293e-01 max/min 4.621359784969e+02 > > 9645 KSP preconditioned resid norm 1.080641720554e+00 true resid norm 1.594064833602e-01 ||r(i)||/||b|| 8.917259415111e-03 > > 9645 KSP Residual norm 1.080641720554e+00 % max 1.141770017757e+02 min 2.461729098264e-01 max/min 4.638081495493e+02 > > 9646 KSP preconditioned resid norm 1.080641720550e+00 true resid norm 1.594066163854e-01 ||r(i)||/||b|| 8.917266856592e-03 > > 9646 KSP Residual norm 1.080641720550e+00 % max 1.150251695783e+02 min 1.817293289064e-01 max/min 6.329477485583e+02 > > 9647 KSP preconditioned resid norm 1.080641720425e+00 true resid norm 1.594070759575e-01 ||r(i)||/||b|| 8.917292565231e-03 > > 9647 KSP Residual norm 1.080641720425e+00 % max 1.153670774825e+02 min 1.757825842976e-01 max/min 6.563055034347e+02 > > 9648 KSP preconditioned resid norm 1.080641720405e+00 true resid norm 1.594072309986e-01 ||r(i)||/||b|| 8.917301238287e-03 > > 9648 KSP Residual norm 1.080641720405e+00 % max 1.154419449950e+02 min 1.682003217110e-01 max/min 6.863360534671e+02 > > 9649 KSP preconditioned resid norm 1.080641719971e+00 true resid norm 1.594088666650e-01 ||r(i)||/||b|| 8.917392738093e-03 > > 9649 KSP Residual norm 1.080641719971e+00 % max 1.154420890958e+02 min 1.254364806923e-01 max/min 9.203230867027e+02 > > 9650 KSP preconditioned resid norm 
1.080641719766e+00 true resid norm 1.594089470619e-01 ||r(i)||/||b|| 8.917397235527e-03 > > 9650 KSP Residual norm 1.080641719766e+00 % max 1.155791388935e+02 min 1.115280748954e-01 max/min 1.036323266603e+03 > > 9651 KSP preconditioned resid norm 1.080641719668e+00 true resid norm 1.594099325489e-01 ||r(i)||/||b|| 8.917452364041e-03 > > 9651 KSP Residual norm 1.080641719668e+00 % max 1.156952656131e+02 min 9.753165869338e-02 max/min 1.186232933624e+03 > > 9652 KSP preconditioned resid norm 1.080641719560e+00 true resid norm 1.594104650490e-01 ||r(i)||/||b|| 8.917482152303e-03 > > 9652 KSP Residual norm 1.080641719560e+00 % max 1.157175173166e+02 min 8.164906465197e-02 max/min 1.417254659437e+03 > > 9653 KSP preconditioned resid norm 1.080641719545e+00 true resid norm 1.594102433389e-01 ||r(i)||/||b|| 8.917469749751e-03 > > 9653 KSP Residual norm 1.080641719545e+00 % max 1.157284977956e+02 min 8.043379142473e-02 max/min 1.438804459490e+03 > > 9654 KSP preconditioned resid norm 1.080641719502e+00 true resid norm 1.594103748106e-01 ||r(i)||/||b|| 8.917477104328e-03 > > 9654 KSP Residual norm 1.080641719502e+00 % max 1.158252103352e+02 min 8.042977537341e-02 max/min 1.440078749412e+03 > > 9655 KSP preconditioned resid norm 1.080641719500e+00 true resid norm 1.594103839160e-01 ||r(i)||/||b|| 8.917477613692e-03 > > 9655 KSP Residual norm 1.080641719500e+00 % max 1.158319413225e+02 min 7.912584859399e-02 max/min 1.463895090931e+03 > > 9656 KSP preconditioned resid norm 1.080641719298e+00 true resid norm 1.594103559180e-01 ||r(i)||/||b|| 8.917476047469e-03 > > 9656 KSP Residual norm 1.080641719298e+00 % max 1.164752277567e+02 min 7.459488142962e-02 max/min 1.561437266532e+03 > > 9657 KSP preconditioned resid norm 1.080641719265e+00 true resid norm 1.594102184171e-01 ||r(i)||/||b|| 8.917468355617e-03 > > 9657 KSP Residual norm 1.080641719265e+00 % max 1.166579038733e+02 min 7.458594814570e-02 max/min 1.564073485335e+03 > > 9658 KSP preconditioned resid norm 1.080641717458e+00 true resid norm 1.594091333285e-01 ||r(i)||/||b|| 8.917407655346e-03 > > 9658 KSP Residual norm 1.080641717458e+00 % max 1.166903646829e+02 min 6.207842503132e-02 max/min 1.879724954749e+03 > > 9659 KSP preconditioned resid norm 1.080641710951e+00 true resid norm 1.594084758922e-01 ||r(i)||/||b|| 8.917370878110e-03 > > 9659 KSP Residual norm 1.080641710951e+00 % max 1.166911765130e+02 min 4.511759655558e-02 max/min 2.586378384966e+03 > > 9660 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892570e-01 ||r(i)||/||b|| 8.917310091324e-03 > > 9660 KSP Residual norm 1.080641710473e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > > 9661 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892624e-01 ||r(i)||/||b|| 8.917310091626e-03 > > 9661 KSP Residual norm 1.080641710473e+00 % max 3.063497860835e+01 min 3.063497860835e+01 max/min 1.000000000000e+00 > > 9662 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073892715e-01 ||r(i)||/||b|| 8.917310092135e-03 > > 9662 KSP Residual norm 1.080641710473e+00 % max 3.066567116490e+01 min 3.845920135843e+00 max/min 7.973559013643e+00 > > 9663 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893803e-01 ||r(i)||/||b|| 8.917310098220e-03 > > 9663 KSP Residual norm 1.080641710473e+00 % max 3.713314039929e+01 min 1.336313376350e+00 max/min 2.778774878443e+01 > > 9664 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893840e-01 ||r(i)||/||b|| 
8.917310098430e-03 > > 9664 KSP Residual norm 1.080641710473e+00 % max 4.496286107838e+01 min 1.226793755688e+00 max/min 3.665070911057e+01 > > 9665 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073893901e-01 ||r(i)||/||b|| 8.917310098770e-03 > > 9665 KSP Residual norm 1.080641710473e+00 % max 8.684753794468e+01 min 1.183106109633e+00 max/min 7.340638108242e+01 > > 9666 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073897030e-01 ||r(i)||/||b|| 8.917310116272e-03 > > 9666 KSP Residual norm 1.080641710473e+00 % max 9.802657279239e+01 min 1.168685545918e+00 max/min 8.387762913199e+01 > > 9667 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073898787e-01 ||r(i)||/||b|| 8.917310126104e-03 > > 9667 KSP Residual norm 1.080641710473e+00 % max 1.123342619847e+02 min 1.141262650540e+00 max/min 9.842980661068e+01 > > 9668 KSP preconditioned resid norm 1.080641710473e+00 true resid norm 1.594073899832e-01 ||r(i)||/||b|| 8.917310131950e-03 > > 9668 KSP Residual norm 1.080641710473e+00 % max 1.134978630027e+02 min 1.090790451862e+00 max/min 1.040510235573e+02 > > 9669 KSP preconditioned resid norm 1.080641710472e+00 true resid norm 1.594074437274e-01 ||r(i)||/||b|| 8.917313138421e-03 > > 9669 KSP Residual norm 1.080641710472e+00 % max 1.139911467053e+02 min 4.122424185123e-01 max/min 2.765148407498e+02 > > 9670 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075064381e-01 ||r(i)||/||b|| 8.917316646478e-03 > > 9670 KSP Residual norm 1.080641710471e+00 % max 1.140007007543e+02 min 2.895766957825e-01 max/min 3.936805081855e+02 > > 9671 KSP preconditioned resid norm 1.080641710471e+00 true resid norm 1.594075034738e-01 ||r(i)||/||b|| 8.917316480653e-03 > > 9671 KSP Residual norm 1.080641710471e+00 % max 1.140387089437e+02 min 2.881955202443e-01 max/min 3.956991033277e+02 > > 9672 KSP preconditioned resid norm 1.080641710470e+00 true resid norm 1.594074784660e-01 ||r(i)||/||b|| 8.917315081711e-03 > > 9672 KSP Residual norm 1.080641710470e+00 % max 1.140387584514e+02 min 2.501644562253e-01 max/min 4.558551609294e+02 > > 9673 KSP preconditioned resid norm 1.080641710468e+00 true resid norm 1.594074777329e-01 ||r(i)||/||b|| 8.917315040700e-03 > > 9673 KSP Residual norm 1.080641710468e+00 % max 1.141359894823e+02 min 2.500652210717e-01 max/min 4.564248838488e+02 > > 9674 KSP preconditioned resid norm 1.080641710467e+00 true resid norm 1.594074707419e-01 ||r(i)||/||b|| 8.917314649619e-03 > > 9674 KSP Residual norm 1.080641710467e+00 % max 1.141719424825e+02 min 2.470352404045e-01 max/min 4.621686456374e+02 > > 9675 KSP preconditioned resid norm 1.080641710439e+00 true resid norm 1.594074050132e-01 ||r(i)||/||b|| 8.917310972734e-03 > > 9675 KSP Residual norm 1.080641710439e+00 % max 1.141769957478e+02 min 2.461583334135e-01 max/min 4.638355897383e+02 > > 9676 KSP preconditioned resid norm 1.080641710435e+00 true resid norm 1.594072732317e-01 ||r(i)||/||b|| 8.917303600825e-03 > > 9676 KSP Residual norm 1.080641710435e+00 % max 1.150247840524e+02 min 1.817432478135e-01 max/min 6.328971526384e+02 > > 9677 KSP preconditioned resid norm 1.080641710313e+00 true resid norm 1.594068192028e-01 ||r(i)||/||b|| 8.917278202275e-03 > > 9677 KSP Residual norm 1.080641710313e+00 % max 1.153658698688e+02 min 1.758229849082e-01 max/min 6.561478291873e+02 > > 9678 KSP preconditioned resid norm 1.080641710294e+00 true resid norm 1.594066656020e-01 ||r(i)||/||b|| 8.917269609788e-03 > > 9678 KSP Residual norm 1.080641710294e+00 % max 
1.154409214869e+02 min 1.682261493961e-01 max/min 6.862245964808e+02
> > [KSP monitor output, iterations 9679-9979: the preconditioned residual norm stagnates near 1.0806e+00 while the true residual norm stays near 1.594e-01 (||r(i)||/||b|| ~ 8.917e-03); the reported max/min singular-value estimates reset to 1.0 every 30 iterations, consistent with a restart length of 30.]
> > 9979 KSP preconditioned resid norm 1.080641619257e+00 true
resid norm 1.594052257365e-01 ||r(i)||/||b|| 8.917189063164e-03 > > 9979 KSP Residual norm 1.080641619257e+00 % max 1.154411216580e+02 min 1.254601679658e-01 max/min 9.201416157000e+02 > > 9980 KSP preconditioned resid norm 1.080641619092e+00 true resid norm 1.594051529133e-01 ||r(i)||/||b|| 8.917184989407e-03 > > 9980 KSP Residual norm 1.080641619092e+00 % max 1.155780886655e+02 min 1.115223976110e-01 max/min 1.036366605645e+03 > > 9981 KSP preconditioned resid norm 1.080641619012e+00 true resid norm 1.594042648383e-01 ||r(i)||/||b|| 8.917135310152e-03 > > 9981 KSP Residual norm 1.080641619012e+00 % max 1.156949417657e+02 min 9.748351070064e-02 max/min 1.186815502788e+03 > > 9982 KSP preconditioned resid norm 1.080641618926e+00 true resid norm 1.594037912594e-01 ||r(i)||/||b|| 8.917108817969e-03 > > 9982 KSP Residual norm 1.080641618926e+00 % max 1.157177699729e+02 min 8.161808862755e-02 max/min 1.417795637202e+03 > > 9983 KSP preconditioned resid norm 1.080641618914e+00 true resid norm 1.594039926066e-01 ||r(i)||/||b|| 8.917120081407e-03 > > 9983 KSP Residual norm 1.080641618914e+00 % max 1.157297318664e+02 min 8.041726690542e-02 max/min 1.439115457660e+03 > > 9984 KSP preconditioned resid norm 1.080641618881e+00 true resid norm 1.594038798061e-01 ||r(i)||/||b|| 8.917113771305e-03 > > 9984 KSP Residual norm 1.080641618881e+00 % max 1.158244868066e+02 min 8.041279440761e-02 max/min 1.440373856671e+03 > > 9985 KSP preconditioned resid norm 1.080641618880e+00 true resid norm 1.594038723728e-01 ||r(i)||/||b|| 8.917113355483e-03 > > 9985 KSP Residual norm 1.080641618880e+00 % max 1.158302224434e+02 min 7.912036884330e-02 max/min 1.463974753110e+03 > > 9986 KSP preconditioned resid norm 1.080641618717e+00 true resid norm 1.594038980749e-01 ||r(i)||/||b|| 8.917114793267e-03 > > 9986 KSP Residual norm 1.080641618717e+00 % max 1.164682835790e+02 min 7.460686106511e-02 max/min 1.561093469371e+03 > > 9987 KSP preconditioned resid norm 1.080641618691e+00 true resid norm 1.594040222541e-01 ||r(i)||/||b|| 8.917121739901e-03 > > 9987 KSP Residual norm 1.080641618691e+00 % max 1.166493032152e+02 min 7.459727412837e-02 max/min 1.563720720069e+03 > > 9988 KSP preconditioned resid norm 1.080641617240e+00 true resid norm 1.594049907736e-01 ||r(i)||/||b|| 8.917175919248e-03 > > 9988 KSP Residual norm 1.080641617240e+00 % max 1.166813081045e+02 min 6.210350927266e-02 max/min 1.878819884271e+03 > > 9989 KSP preconditioned resid norm 1.080641612022e+00 true resid norm 1.594055829488e-01 ||r(i)||/||b|| 8.917209045757e-03 > > 9989 KSP Residual norm 1.080641612022e+00 % max 1.166820415947e+02 min 4.513453999933e-02 max/min 2.585205069033e+03 > > 9990 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477745e-01 ||r(i)||/||b|| 8.917263018470e-03 > > 9990 KSP Residual norm 1.080641611645e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 > > 9991 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477696e-01 ||r(i)||/||b|| 8.917263018201e-03 > > 9991 KSP Residual norm 1.080641611645e+00 % max 3.063465549435e+01 min 3.063465549435e+01 max/min 1.000000000000e+00 > > 9992 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065477615e-01 ||r(i)||/||b|| 8.917263017744e-03 > > 9992 KSP Residual norm 1.080641611645e+00 % max 3.066530412302e+01 min 3.844470687795e+00 max/min 7.976469744031e+00 > > 9993 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476642e-01 ||r(i)||/||b|| 8.917263012302e-03 > > 9993 KSP 
Residual norm 1.080641611645e+00 % max 3.714516899070e+01 min 1.336552632161e+00 max/min 2.779177422339e+01 > > 9994 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476608e-01 ||r(i)||/||b|| 8.917263012110e-03 > > 9994 KSP Residual norm 1.080641611645e+00 % max 4.500812884571e+01 min 1.226798972667e+00 max/min 3.668745234425e+01 > > 9995 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065476552e-01 ||r(i)||/||b|| 8.917263011801e-03 > > 9995 KSP Residual norm 1.080641611645e+00 % max 8.688131738345e+01 min 1.183120593233e+00 max/min 7.343403358914e+01 > > 9996 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065473796e-01 ||r(i)||/||b|| 8.917262996379e-03 > > 9996 KSP Residual norm 1.080641611645e+00 % max 9.802499878957e+01 min 1.168930461656e+00 max/min 8.385870845619e+01 > > 9997 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065472234e-01 ||r(i)||/||b|| 8.917262987642e-03 > > 9997 KSP Residual norm 1.080641611645e+00 % max 1.122454823211e+02 min 1.141659208813e+00 max/min 9.831785304632e+01 > > 9998 KSP preconditioned resid norm 1.080641611645e+00 true resid norm 1.594065471295e-01 ||r(i)||/||b|| 8.917262982389e-03 > > 9998 KSP Residual norm 1.080641611645e+00 % max 1.134943126018e+02 min 1.090790203681e+00 max/min 1.040477923425e+02 > > 9999 KSP preconditioned resid norm 1.080641611644e+00 true resid norm 1.594064989598e-01 ||r(i)||/||b|| 8.917260287762e-03 > > 9999 KSP Residual norm 1.080641611644e+00 % max 1.139914489020e+02 min 4.119830336343e-01 max/min 2.766896682528e+02 > > 10000 KSP preconditioned resid norm 1.080641611643e+00 true resid norm 1.594064428168e-01 ||r(i)||/||b|| 8.917257147097e-03 > > 10000 KSP Residual norm 1.080641611643e+00 % max 1.140011170215e+02 min 2.894564222709e-01 max/min 3.938455264772e+02 > > > > > > Then the number of iteration >10000 > > The Programme stop with convergenceReason=-3 > > > > Best, > > > > Meng > > ------------------ Original ------------------ > > From: "Barry Smith";; > > Send time: Friday, Apr 25, 2014 6:27 AM > > To: "Oo "; > > Cc: "Dave May"; "petsc-users"; > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > On Apr 24, 2014, at 5:24 PM, Oo wrote: > > > > > > > > Configure PETSC again? > > > > No, the command line when you run the program. > > > > Barry > > > > > > > > ------------------ Original ------------------ > > > From: "Dave May";; > > > Send time: Friday, Apr 25, 2014 6:20 AM > > > To: "Oo "; > > > Cc: "Barry Smith"; "Matthew Knepley"; "petsc-users"; > > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > On the command line > > > > > > > > > On 25 April 2014 00:11, Oo wrote: > > > > > > Where should I put "-ksp_monitor_true_residual -ksp_monitor_singular_value " ? > > > > > > Thanks, > > > > > > Meng > > > > > > > > > ------------------ Original ------------------ > > > From: "Barry Smith";; > > > Date: Apr 25, 2014 > > > To: "Matthew Knepley"; > > > Cc: "Oo "; "petsc-users"; > > > Subject: Re: [petsc-users] Convergence_Eigenvalues_k=3 > > > > > > > > > There are also a great deal of ?bogus? numbers that have no meaning and many zeros. Most of these are not the eigenvalues of anything. 
> > > > > > Run the two cases with -ksp_monitor_true_residual -ksp_monitor_singular_value and send the output > > > > > > Barry > > > > > > > > > On Apr 24, 2014, at 4:53 PM, Matthew Knepley wrote: > > > > > > > On Thu, Apr 24, 2014 at 12:57 PM, Oo wrote: > > > > > > > > Hi, > > > > > > > > For analysis the convergence of linear solver, > > > > I meet a problem. > > > > > > > > One is the list of Eigenvalues whose linear system which has a convergence solution. > > > > The other is the list of Eigenvalues whose linear system whose solution does not converge (convergenceReason=-3). > > > > > > > > These are just lists of numbers. It does not tell us anything about the computation. What is the problem you are having? > > > > > > > > Matt > > > > > > > > Do you know what kind of method can be used to obtain a convergence solution for our non-convergence case? > > > > > > > > Thanks, > > > > > > > > Meng > > > > > > > > > > > > > > > > -- > > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > > > -- Norbert Wiener > > > > > > . > > > > > > > . > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From norihiro.w at gmail.com Fri Apr 25 07:59:48 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Fri, 25 Apr 2014 14:59:48 +0200 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: <1E380BB0-E644-4A33-97FC-7EEB69825F07@mcs.anl.gov> References: <1E380BB0-E644-4A33-97FC-7EEB69825F07@mcs.anl.gov> Message-ID: I mean "it's keep running but not print anything". I guess the program is running in while(1) loop after linear solve in SNESSolve_NEWTONTR() in src/ snes/impls/tr/tr.c. But I'm not sure if the breaking condition (if (rho > neP->sigma) break; ) can be always satisfied at the end. On Fri, Apr 25, 2014 at 2:49 PM, Barry Smith wrote: > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe > wrote: > > > Hi, > > > > In my simulation, nonlinear solve with the trust regtion method got > stagnent after linear solve (see output below). > > What do you mean, get stagnant? Does the code just hang, that is keep > running but not print anything. > > > Is it possible that the method goes to inifite loop? > > This is not suppose to be possible. > > > Is there any parameter to avoid this situation? > > You need to determine what it is ?hanging? on. Try running with > -start_in_debugger and when it ?hangs? hit control C and type where to > determine where it is. > > Barry > > > > > 0 SNES Function norm 1.828728087153e+03 > > 0 KSP Residual norm 91.2735 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > 1 KSP Residual norm 3.42223 > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > Thank you in advance, > > Nori > > -- Norihiro Watanabe -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 25 08:09:13 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Apr 2014 08:09:13 -0500 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: References: <1E380BB0-E644-4A33-97FC-7EEB69825F07@mcs.anl.gov> Message-ID: <0C2A995A-8E1B-4B24-9E17-192E48355667@mcs.anl.gov> Run with -info and send ALL the output. 
-info triggers the printing of all the messages with from the calls to PetscInfo() thus we?ll be able to see what is happening to the values. In theory it should always eventually get out but there code be either a bug in our code or an error in your function evaluation that prevents it from ever getting out. Barry On Apr 25, 2014, at 7:59 AM, Norihiro Watanabe wrote: > I mean "it's keep running but not print anything". I guess the program is running in while(1) loop after linear solve in SNESSolve_NEWTONTR() in src/snes/impls/tr/tr.c. But I'm not sure if the breaking condition (if (rho > neP->sigma) break; ) can be always satisfied at the end. > > > > On Fri, Apr 25, 2014 at 2:49 PM, Barry Smith wrote: > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe wrote: > > > Hi, > > > > In my simulation, nonlinear solve with the trust regtion method got stagnent after linear solve (see output below). > > What do you mean, get stagnant? Does the code just hang, that is keep running but not print anything. > > > Is it possible that the method goes to inifite loop? > > This is not suppose to be possible. > > > Is there any parameter to avoid this situation? > > You need to determine what it is ?hanging? on. Try running with -start_in_debugger and when it ?hangs? hit control C and type where to determine where it is. > > Barry > > > > > 0 SNES Function norm 1.828728087153e+03 > > 0 KSP Residual norm 91.2735 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > 1 KSP Residual norm 3.42223 > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > Thank you in advance, > > Nori > > > > > -- > Norihiro Watanabe From norihiro.w at gmail.com Fri Apr 25 08:53:02 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Fri, 25 Apr 2014 15:53:02 +0200 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: <0C2A995A-8E1B-4B24-9E17-192E48355667@mcs.anl.gov> References: <1E380BB0-E644-4A33-97FC-7EEB69825F07@mcs.anl.gov> <0C2A995A-8E1B-4B24-9E17-192E48355667@mcs.anl.gov> Message-ID: attached is the output written until I killed the program. Please grep the file with "Time step: 7" to go to log messages where the problem happens. Thanks, Nori On Fri, Apr 25, 2014 at 3:09 PM, Barry Smith wrote: > > Run with -info and send ALL the output. -info triggers the printing of > all the messages with from the calls to PetscInfo() thus we?ll be able to > see what is happening to the values. In theory it should always eventually > get out but there code be either a bug in our code or an error in your > function evaluation that prevents it from ever getting out. > > Barry > > On Apr 25, 2014, at 7:59 AM, Norihiro Watanabe > wrote: > > > I mean "it's keep running but not print anything". I guess the program > is running in while(1) loop after linear solve in SNESSolve_NEWTONTR() in > src/snes/impls/tr/tr.c. But I'm not sure if the breaking condition (if (rho > > neP->sigma) break; ) can be always satisfied at the end. > > > > > > > > On Fri, Apr 25, 2014 at 2:49 PM, Barry Smith wrote: > > > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe > wrote: > > > > > Hi, > > > > > > In my simulation, nonlinear solve with the trust regtion method got > stagnent after linear solve (see output below). > > > > What do you mean, get stagnant? Does the code just hang, that is keep > running but not print anything. > > > > > Is it possible that the method goes to inifite loop? > > > > This is not suppose to be possible. 
> > > > > Is there any parameter to avoid this situation? > > > > You need to determine what it is ?hanging? on. Try running with > -start_in_debugger and when it ?hangs? hit control C and type where to > determine where it is. > > > > Barry > > > > > > > > 0 SNES Function norm 1.828728087153e+03 > > > 0 KSP Residual norm 91.2735 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > > 1 KSP Residual norm 3.42223 > > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > > > > Thank you in advance, > > > Nori > > > > > > > > > > -- > > Norihiro Watanabe > > -- Norihiro Watanabe -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: log.tar.gz Type: application/x-gzip Size: 107330 bytes Desc: not available URL: From steve.ndengue at gmail.com Fri Apr 25 09:45:57 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Fri, 25 Apr 2014 09:45:57 -0500 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) In-Reply-To: References: <5359B1D6.2040401@gmail.com> <5359B304.8030100@gmail.com> <3B6FCD6A-3E8E-4FE6-92FA-87A482D1163F@mcs.anl.gov> Message-ID: <535A7525.9020809@gmail.com> Dear all, Sorry, I had a typo: I was still editing the .f90 program file instead of the .F90 file. However, now I am having troubles forming the matrice I would like to diagonalize. Presently the size is not yet huge and it could be held in a previously computed matrix of dimension N*N. I am using the following to build the matrix and the compiling from the example 1 and slightly modified: *** /!// // CALL SlepcInitialize(PETSC_NULL_CHARACTER,ierr)// // CALL MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)// // nn = N// //! CALL PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr)// //!// //// //!// //! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // //! Compute the operator matrix that defines the eigensystem, Ax=kx// //! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // // // call MatCreate(PETSC_COMM_WORLD,A,ierr)// // call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,nn,nn,ierr)// // call MatSetFromOptions(A,ierr)// // call MatSetUp(A,ierr)// // // call MatGetOwnershipRange(A,Istart,Iend,ierr)// // do i=Istart,Iend-1// // do j=0,nn-1// // call MatSetValue(A,i,-1,j,-1,H_mat(i,j),INSERT_VALUES,ierr)// // enddo// // enddo// // call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)// // call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)! ** Create eigensolver context// // CALL EPSCreate(PETSC_COMM_WORLD,solver,ierr)// //!// ** Set operators. In this case, it is a standard eigenvalue problem// // CALL EPSSetOperators(solver,A,PETSC_NULL_OBJECT,ierr)// //print*,'here10,ierr',ierr// // CALL EPSSetProblemType(solver,EPS_HEP,ierr)// //! ** Set working dimensions// // CALL EPSSETDimensions(solver,nstates,10*nstates,2*nstates)// //! ** Set calculation of lowest eigenvalues// // CALL EPSSetWhichEigenpairs(solver,EPS_SMALLEST_REAL)// // //! ** Set solver parameters at runtime// // CALL EPSSetFromOptions(solver,ierr)// //! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // //! Solve the eigensystem// //! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // // CALL EPSSolve(solver,ierr)// //print*,'here6'// //! 
** Optional: Get some information from the solver and display it// // CALL EPSGetType(solver,tname,ierr)// // if (rank .eq. 0) then// // write(*,120) tname// // endif// // 120 format (' Solution method: ',A)// // CALL EPSGetDimensions(solver,nstates,PETSC_NULL_INTEGER, &// // & PETSC_NULL_INTEGER,ierr)// // if (rank .eq. 0) then// // write(*,130) nstates// // endif// // 130 format (' Number of requested eigenvalues:',I4)// //! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // //! Display solution and clean up// //! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // // CALL EPSPrintSolution(solver,PETSC_NULL_OBJECT,ierr)// // CALL EPSDestroy(solver,ierr)// //***/ I get the following error message when making the code: *** /*[0]PETSC ERROR: ------------------------------------------------------------------------*//* *//*[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range*//* *//*[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger*//* *//*[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors*//* *//*[0]PETSC ERROR: likely location of problem given in stack below*//* *//*[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------*//* *//*[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,*//* *//*[0]PETSC ERROR: INSTEAD the line number of the start of the function*//* *//*[0]PETSC ERROR: is given.*//* *//*[0]PETSC ERROR: --------------------- Error Message ------------------------------------*//* *//*[0]PETSC ERROR: Signal received!*//* *//*[0]PETSC ERROR: ------------------------------------------------------------------------*//* *//*[0]PETSC ERROR: Petsc Release Version 3.4.4, Mar, 13, 2014 *//* *//*[0]PETSC ERROR: See docs/changes/index.html for recent updates.*//* *//*[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.*//* *//*[0]PETSC ERROR: See docs/index.html for manual pages.*//* *//*[0]PETSC ERROR: ------------------------------------------------------------------------*//* *//*[0]PETSC ERROR: ./*//*PgmPrincipal*//*on a arch-linux2-c-debug named r10dawesr by ndengues Fri Apr 25 09:36:59 2014*//* *//*[0]PETSC ERROR: Libraries linked from /usr/local/home/ndengues/Downloads/Libraries/petsc-3.4.4/arch-linux2-c-debug/lib*//* *//*[0]PETSC ERROR: Configure run at Thu Apr 24 20:35:39 2014*//* *//*[0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich*//* *//*[0]PETSC ERROR: ------------------------------------------------------------------------*//* *//*[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file*//* *//*application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0*//* *//*[cli_0]: aborting job:*//* *//*application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0*/ *** Once more, the program execute without troubles for all the examples in the package. Sincerely, On 04/24/2014 10:14 PM, Satish Balay wrote: > petsc examples work fine. > > send us the code that breaks. Also copy/paste the *complete* output from 'make' when > you attempt to compile this code. 
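One detail worth checking in the assembly loop posted above: MatSetValue takes the matrix, one row index, one column index, the value, the insert mode, and the error code, so a call of the form MatSetValue(A,i,-1,j,-1,H_mat(i,j),INSERT_VALUES,ierr) passes two extra arguments, and a mismatched argument list like that can by itself produce exactly this kind of segmentation fault from Fortran, where argument counts are not checked. A minimal sketch of the intended loop written with MatSetValues is given below; it is only an illustration, not the poster's code. It assumes H_mat is declared with a lower bound of 0 in both dimensions (otherwise H_mat(i,j) with i and j starting at 0 is itself out of range), and the one-element arrays row, col and val are introduced here purely for the example.

      PetscInt    row(1), col(1)
      PetscScalar val(1)

      do i = Istart, Iend-1
         do j = 0, nn-1
            row(1) = i          ! global row index (0-based)
            col(1) = j          ! global column index (0-based)
            val(1) = H_mat(i,j)
            ! insert a single entry: MatSetValues(mat,m,rows,n,cols,values,mode,ierr)
            call MatSetValues(A,1,row,1,col,val,INSERT_VALUES,ierr)
         end do
      end do
      call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
      call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)

Inserting a whole row per call (n = nn with full-length col and val arrays) is the usual next step once this works, since it cuts the per-entry call overhead considerably.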
> > Satish > > On Thu, 24 Apr 2014, Barry Smith wrote: > >> Try .F files instead of .F90 some Fortran compilers handle preprocessing in different ways for .F and .F90 >> >> Please also send configure.log and make.log so we know what compilers etc you are using. >> >> Barry >> >> Note that we are using C pre processing directives, >> >> On Apr 24, 2014, at 7:57 PM, Steve Ndengue wrote: >> >>> The results is the same with lower case letters... >>> >>> >>> On 04/24/2014 07:39 PM, Matthew Knepley wrote: >>>> On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue wrote: >>>> Thank you for the quick reply. >>>> >>>> It partly solved the problem; the PETSC and SLEPC files are now included in the compilation. >>>> >>>> However the Warning are now error messages! >>>> *** >>>> pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive #INCLUDE >>>> pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive #IF >>>> pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive #ELSE >>>> pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive #ENDIF >>>> *** >>>> >>>> These should not be capitalized. >>>> >>>> Matt >>>> >>>> And the corresponding code lines are: >>>> *** >>>> #INCLUDE >>>> USE slepceps >>>> >>>> ! >>>> IMPLICIT NONE >>>> !---------------------- >>>> ! >>>> #IF DEFINED(PETSC_USE_FORTRAN_DATATYPES) >>>> TYPE(Mat) A >>>> TYPE(EPS) solver >>>> #ELSE >>>> Mat A >>>> EPS solver >>>> #ENDIF >>>> *** >>>> >>>> Sincerely. >>>> >>>> On 04/24/2014 07:25 PM, Satish Balay wrote: >>>>> On Thu, 24 Apr 2014, Steve Ndengue wrote: >>>>> >>>>> >>>>>> Dear all, >>>>>> >>>>>> I am having trouble compiling a code with some dependencies that shall call >>>>>> SLEPC. >>>>>> The compilation however goes perfectly for the various tutorials and tests in >>>>>> the package. >>>>>> >>>>>> A sample makefile looks like: >>>>>> >>>>>> *** >>>>>> >>>>>> /default: code// >>>>>> //routines: Rout1.o Rout2.o Rout3.o Module1.o// >>>>>> //code: PgmPrincipal// >>>>>> //sources=Rout1.f Rout2.f Rout3.f Module1.f90// >>>>>> >>>>> Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 >>>>> [this enables using default targets from petsc makefiles - and its >>>>> the correct notation for preprocessing - which is required by petsc/slepc] >>>>> >>>>> >>>>>> //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// >>>>>> // >>>>>> //%.o: %.f// >>>>>> // -${FLINKER} -c $>>>>> >>>>> And you shouldn't need to create the above .o.f target. >>>>> >>>>> Satish >>>>> >>>>> >>>>> >>>>>> // >>>>>> //Module1_mod.mod Module1.o: //Module1//.f90// >>>>>> // -${FLINKER} -c //Module1//.f90// >>>>>> // >>>>>> //include ${SLEPC_DIR}/conf/slepc_common// >>>>>> // >>>>>> //Pgm//Principal//: ${objets} chkopts// >>>>>> // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// >>>>>> // >>>>>> //.PHONY: clean// >>>>>> // ${RM} *.o *.mod Pgm//Principal// >>>>>> / >>>>>> *** >>>>>> >>>>>> The code exits with Warning and error messages of the type: >>>>>> >>>>>> *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** >>>>>> **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** >>>>>> **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** >>>>>> **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* >>>>>> >>>>>> and >>>>>> >>>>>> *USE slepceps** >>>>>> ** 1** >>>>>> **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No >>>>>> such file or directory >>>>>> >>>>>> >>>>>> *I do not get these errors with the tests and errors files. 
>>>>>> >>>>>> >>>>>> Sincerely, >>>>>> >>>>>> >>>>>> >>>> >>>> -- >>>> Steve >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> -- >>> Steve >>> >>> >>> >> -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 25 09:55:17 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 25 Apr 2014 09:55:17 -0500 Subject: [petsc-users] Compiling program with program files dependencies (SLEPC ) In-Reply-To: <535A7525.9020809@gmail.com> References: <5359B1D6.2040401@gmail.com> <5359B304.8030100@gmail.com> <3B6FCD6A-3E8E-4FE6-92FA-87A482D1163F@mcs.anl.gov> <535A7525.9020809@gmail.com> Message-ID: On Fri, Apr 25, 2014 at 9:45 AM, Steve Ndengue wrote: > Dear all, > > Sorry, I had a typo: I was still editing the .f90 program file instead of > the .F90 file. > > However, now I am having troubles forming the matrice I would like to > diagonalize. > Presently the size is not yet huge and it could be held in a previously > computed matrix of dimension N*N. > > I am using the following to build the matrix and the compiling from the > example 1 and slightly modified: > > *** > *!* > * CALL SlepcInitialize(PETSC_NULL_CHARACTER,ierr)* > * CALL MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)* > * nn = N* > *! CALL PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr)* > *!* > > *!* > *! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > * > *! Compute the operator matrix that defines the eigensystem, Ax=kx* > *! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > * > > * call MatCreate(PETSC_COMM_WORLD,A,ierr)* > * call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,nn,nn,ierr)* > * call MatSetFromOptions(A,ierr)* > * call MatSetUp(A,ierr)* > > * call MatGetOwnershipRange(A,Istart,Iend,ierr)* > * do i=Istart,Iend-1* > * do j=0,nn-1* > * call MatSetValue(A,i,-1,j,-1,H_mat(i,j),INSERT_VALUES,ierr)* > * enddo* > * enddo* > * call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)* > * call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)! ** Create > eigensolver context* > * CALL EPSCreate(PETSC_COMM_WORLD,solver,ierr)* > *!** ** Set operators. In this case, it is a standard eigenvalue > problem* > * CALL EPSSetOperators(solver,A,PETSC_NULL_OBJECT,ierr)* > *print*,'here10,ierr',ierr* > * CALL EPSSetProblemType(solver,EPS_HEP,ierr)* > *! ** Set working dimensions* > * CALL EPSSETDimensions(solver,nstates,10*nstates,2*nstates)* > *! ** Set calculation of lowest eigenvalues* > * CALL EPSSetWhichEigenpairs(solver,EPS_SMALLEST_REAL)* > > *! ** Set solver parameters at runtime* > * CALL EPSSetFromOptions(solver,ierr)* > *! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > * > *! Solve the eigensystem* > *! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > * > * CALL EPSSolve(solver,ierr)* > *print*,'here6'* > *! ** Optional: Get some information from the solver and display it* > * CALL EPSGetType(solver,tname,ierr)* > * if (rank .eq. 0) then* > * write(*,120) tname* > * endif* > * 120 format (' Solution method: ',A)* > * CALL > EPSGetDimensions(solver,nstates,PETSC_NULL_INTEGER, &* > * & PETSC_NULL_INTEGER,ierr)* > * if (rank .eq. 0) then* > * write(*,130) nstates* > * endif* > * 130 format (' Number of requested eigenvalues:',I4)* > *! 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > * > *! Display solution and clean up* > *! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > * > * CALL EPSPrintSolution(solver,PETSC_NULL_OBJECT,ierr)* > * CALL EPSDestroy(solver,ierr)* > ***** > > I get the following error message when making the code: > Run in the debugger, -start_in_debugger, and get a stack trace for the error. Matt > *** > *[0]PETSC ERROR: > ------------------------------------------------------------------------* > *[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range* > *[0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger* > *[0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and > Apple Mac OS X to find memory corruption errors* > *[0]PETSC ERROR: likely location of problem given in stack below* > *[0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------* > *[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available,* > *[0]PETSC ERROR: INSTEAD the line number of the start of the > function* > *[0]PETSC ERROR: is given.* > *[0]PETSC ERROR: --------------------- Error Message > ------------------------------------* > *[0]PETSC ERROR: Signal received!* > *[0]PETSC ERROR: > ------------------------------------------------------------------------* > *[0]PETSC ERROR: Petsc Release Version 3.4.4, Mar, 13, 2014 * > *[0]PETSC ERROR: See docs/changes/index.html for recent updates.* > *[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.* > *[0]PETSC ERROR: See docs/index.html for manual pages.* > *[0]PETSC ERROR: > ------------------------------------------------------------------------* > *[0]PETSC ERROR: ./**PgmPrincipal** on a arch-linux2-c-debug named > r10dawesr by ndengues Fri Apr 25 09:36:59 2014* > *[0]PETSC ERROR: Libraries linked from > /usr/local/home/ndengues/Downloads/Libraries/petsc-3.4.4/arch-linux2-c-debug/lib* > *[0]PETSC ERROR: Configure run at Thu Apr 24 20:35:39 2014* > *[0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran > --download-f-blas-lapack --download-mpich* > *[0]PETSC ERROR: > ------------------------------------------------------------------------* > *[0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file* > *application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0* > *[cli_0]: aborting job:* > *application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0* > *** > > > > Once more, the program execute without troubles for all the examples in > the package. > > Sincerely, > > > On 04/24/2014 10:14 PM, Satish Balay wrote: > > petsc examples work fine. > > send us the code that breaks. Also copy/paste the *complete* output from 'make' when > you attempt to compile this code. > > Satish > > On Thu, 24 Apr 2014, Barry Smith wrote: > > > Try .F files instead of .F90 some Fortran compilers handle preprocessing in different ways for .F and .F90 > > Please also send configure.log and make.log so we know what compilers etc you are using. > > Barry > > Note that we are using C pre processing directives, > > On Apr 24, 2014, at 7:57 PM, Steve Ndengue wrote: > > > The results is the same with lower case letters... > > > On 04/24/2014 07:39 PM, Matthew Knepley wrote: > > On Thu, Apr 24, 2014 at 8:36 PM, Steve Ndengue wrote: > Thank you for the quick reply. 
> > It partly solved the problem; the PETSC and SLEPC files are now included in the compilation. > > However the Warning are now error messages! > *** > pgm_hatom_offaxis_cyl.F90:880:0: error: invalid preprocessing directive #INCLUDE > pgm_hatom_offaxis_cyl.F90:887:0: error: invalid preprocessing directive #IF > pgm_hatom_offaxis_cyl.F90:890:0: error: invalid preprocessing directive #ELSE > pgm_hatom_offaxis_cyl.F90:893:0: error: invalid preprocessing directive #ENDIF > *** > > These should not be capitalized. > > Matt > > And the corresponding code lines are: > *** > #INCLUDE > USE slepceps > > ! > IMPLICIT NONE > !---------------------- > ! > #IF DEFINED(PETSC_USE_FORTRAN_DATATYPES) > TYPE(Mat) A > TYPE(EPS) solver > #ELSE > Mat A > EPS solver > #ENDIF > *** > > Sincerely. > > On 04/24/2014 07:25 PM, Satish Balay wrote: > > On Thu, 24 Apr 2014, Steve Ndengue wrote: > > > > Dear all, > > I am having trouble compiling a code with some dependencies that shall call > SLEPC. > The compilation however goes perfectly for the various tutorials and tests in > the package. > > A sample makefile looks like: > > *** > > /default: code// > //routines: Rout1.o Rout2.o Rout3.o Module1.o// > //code: PgmPrincipal// > //sources=Rout1.f Rout2.f Rout3.f Module1.f90// > > > Its best to rename your sourcefiles .F/.F90 [and not .f/.f90 > [this enables using default targets from petsc makefiles - and its > the correct notation for preprocessing - which is required by petsc/slepc] > > > > //objets=Rout1.o Rout2.o Rout3.o Module1.o Pgm//Principal//.o// > // > //%.o: %.f// > // -${FLINKER} -c $ > > And you shouldn't need to create the above .o.f target. > > Satish > > > > > // > //Module1_mod.mod Module1.o: //Module1//.f90// > // -${FLINKER} -c //Module1//.f90// > // > //include ${SLEPC_DIR}/conf/slepc_common// > // > //Pgm//Principal//: ${objets} chkopts// > // -${FLINKER} -o $@ ${objets} ${SLEPC_LIB}// > // > //.PHONY: clean// > // ${RM} *.o *.mod Pgm//Principal// > / > *** > > The code exits with Warning and error messages of the type: > > *Warning: Pgm**Principal**.f90:880: Illegal preprocessor directive** > **Warning: Pgm**Principal**.f90:889: Illegal preprocessor directive** > **Warning: Pgm**Principal**.f90:892: Illegal preprocessor directive** > **Warning: Pgm**Principal**.f90:895: Illegal preprocessor directive* > > and > > *USE slepceps** > ** 1** > **Fatal Error: Can't open module file 'slepceps.mod' for reading at (1): No > such file or directory > > > *I do not get these errors with the tests and errors files. > > > Sincerely, > > > > > -- > Steve > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > -- > Steve > > > > > > > -- > Steve > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 25 10:13:19 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Apr 2014 10:13:19 -0500 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: References: <1E380BB0-E644-4A33-97FC-7EEB69825F07@mcs.anl.gov> <0C2A995A-8E1B-4B24-9E17-192E48355667@mcs.anl.gov> Message-ID: <632B152F-14B8-429F-A184-1610D9C79A97@mcs.anl.gov> Thanks. I?ll take a look at figuring out what is going on. 
We keep shrinking the domain beyond anything reasonable. Barry [0] SNESSolve_NEWTONTR(): fnorm=1826.65, gnorm=1826.65, ynorm=5.56247e-74 [0] SNESSolve_NEWTONTR(): gpred=1826.65, rho=0, delta=1.66874e-74 [0] SNESSolve_NEWTONTR(): Trying again in smaller region [0] SNESSolve_NEWTONTR(): Scaling direction by 7.94052e-81 On Apr 25, 2014, at 8:53 AM, Norihiro Watanabe wrote: > attached is the output written until I killed the program. Please grep the file with "Time step: 7" to go to log messages where the problem happens. > > Thanks, > Nori > > > On Fri, Apr 25, 2014 at 3:09 PM, Barry Smith wrote: > > Run with -info and send ALL the output. -info triggers the printing of all the messages with from the calls to PetscInfo() thus we?ll be able to see what is happening to the values. In theory it should always eventually get out but there code be either a bug in our code or an error in your function evaluation that prevents it from ever getting out. > > Barry > > On Apr 25, 2014, at 7:59 AM, Norihiro Watanabe wrote: > > > I mean "it's keep running but not print anything". I guess the program is running in while(1) loop after linear solve in SNESSolve_NEWTONTR() in src/snes/impls/tr/tr.c. But I'm not sure if the breaking condition (if (rho > neP->sigma) break; ) can be always satisfied at the end. > > > > > > > > On Fri, Apr 25, 2014 at 2:49 PM, Barry Smith wrote: > > > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe wrote: > > > > > Hi, > > > > > > In my simulation, nonlinear solve with the trust regtion method got stagnent after linear solve (see output below). > > > > What do you mean, get stagnant? Does the code just hang, that is keep running but not print anything. > > > > > Is it possible that the method goes to inifite loop? > > > > This is not suppose to be possible. > > > > > Is there any parameter to avoid this situation? > > > > You need to determine what it is ?hanging? on. Try running with -start_in_debugger and when it ?hangs? hit control C and type where to determine where it is. > > > > Barry > > > > > > > > 0 SNES Function norm 1.828728087153e+03 > > > 0 KSP Residual norm 91.2735 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > > 1 KSP Residual norm 3.42223 > > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > > > > Thank you in advance, > > > Nori > > > > > > > > > > -- > > Norihiro Watanabe > > > > > -- > Norihiro Watanabe > From fujii at cc.kogakuin.ac.jp Fri Apr 25 11:26:27 2014 From: fujii at cc.kogakuin.ac.jp (=?utf-8?B?6Jek5LqV5pit5a6P?=) Date: Sat, 26 Apr 2014 01:26:27 +0900 Subject: [petsc-users] PETSc-gamg default setting Message-ID: <3AD35119-ACF5-4AFD-B543-090464C0FB8E@cc.kogakuin.ac.jp> Hello everybody, I?d like to compare a solver with PETSc-gamg for Poisson problems. I used PETSc version 3.4.3., and the following command line option. -ksp_type ?bcgs? -pc_type ?gamg? -ksp_monitor -ksp_rtol 1.0E-7 -log_summary -pc_mg_log Would you give me some information on the following questions? - Does this option correspond to BICGSTAB solver with smoothed aggregation algebraic multigrid method? - Are coarser level small distributed matrices re-distributed to reduce the parallelism, when the number of processes is very large? - What kind of smoother will be used for this command line option? - pc_mg_log option shows the MG Apply stage. Does this stage time correspond to the solver time except for multi-level setup time? Does it include the time for BICGSTAB? 
Thanks in advance. Akihiro Fujii From knepley at gmail.com Fri Apr 25 11:31:38 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 25 Apr 2014 11:31:38 -0500 Subject: [petsc-users] PETSc-gamg default setting In-Reply-To: <3AD35119-ACF5-4AFD-B543-090464C0FB8E@cc.kogakuin.ac.jp> References: <3AD35119-ACF5-4AFD-B543-090464C0FB8E@cc.kogakuin.ac.jp> Message-ID: On Fri, Apr 25, 2014 at 11:26 AM, ???? wrote: > Hello everybody, > > I?d like to compare a solver with PETSc-gamg for Poisson problems. > I used PETSc version 3.4.3., and the following command line option. > > -ksp_type ?bcgs? -pc_type ?gamg? -ksp_monitor -ksp_rtol 1.0E-7 > -log_summary -pc_mg_log > > > Would you give me some information on the following questions? > > - Does this option correspond to BICGSTAB solver with smoothed aggregation > algebraic multigrid method? > Yes > - Are coarser level small distributed matrices re-distributed to reduce > the parallelism, when the number of processes is very large? > Yes > - What kind of smoother will be used for this command line option? > Cheby(2)/Jacobi is the default in GAMG. > - pc_mg_log option shows the MG Apply stage. Does this stage time > correspond to the solver time except for multi-level setup time? > Does it include the time for BICGSTAB? > Yes, and no. Thanks, Matt > Thanks in advance. > > Akihiro Fujii > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Apr 25 11:41:40 2014 From: danyang.su at gmail.com (Danyang Su) Date: Fri, 25 Apr 2014 09:41:40 -0700 Subject: [petsc-users] DMDACoor2d and DMDACoor3d in Fortran Message-ID: <535A9044.4010300@gmail.com> Hi All, I tried to set the simulation domain using DMDA coordinates, following the example dm/examples/tutorials/ex3.c. The 1D problem worked fine but the 2D and 3D failed because of the definition in coors2d and coords3d. What should I use to define the variable coords2d and coords3d? -->Codes section: PetscScalar, pointer :: coords1d(:) DMDACoor2d, pointer :: coords2d(:,:) !Failed in compiling DMDACoor3d, pointer :: coords3d(:,:,:) !Failed in compiling Vec :: gc !1D domain call DMGetGlobalVector(dmda_flow%da,gc,ierr) call DMDAVecGetArrayF90(dmda_flow%da,gc,coords1d,ierr) do ivx = nvxls,nvxle coords1d(ivx-ibase) = xglat(ivx) end do call DMDAVecRestoreArrayF90(dmda_flow%da,gc,coords1d,ierr) call DMSetCoordinates(dmda_flow%da,gc,ierr) call DMRestoreGlobalVector(dmda_flow%da,gc,ierr) !2D domain call DMGetGlobalVector(dmda_flow%da,gc,ierr) call DMDAVecGetArrayF90(dmda_flow%da,gc,coords2d,ierr) do ivx = nvxls,nvxle coords2d(ivx-ibase,ivy-ibase)%x = xglat(ivx) coords2d(ivx-ibase,ivy-ibase)%y = yglat(ivy) end do call DMDAVecRestoreArrayF90(dmda_flow%da,gc,coords2d,ierr) call DMSetCoordinates(dmda_flow%da,gc,ierr) call DMRestoreGlobalVector(dmda_flow%da,gc,ierr) Thanks and regards, Danyang -------------- next part -------------- An HTML attachment was scrubbed... 
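On the coordinate question above: as the reply that follows points out, the Fortran interface in this PETSc version has no DMDACoor2d/DMDACoor3d types; a plain PetscScalar pointer with one extra leading index (0 for x, 1 for y) plays that role. A minimal sketch under that convention is given here, purely as an illustration. It differs from the snippet above in one respect: the coordinate vector is created from the coordinate DM, obtained with DMGetCoordinateDM, so that it is guaranteed to carry two degrees of freedom per node. The names cda, xs, ys, xm, ym, i and j are introduced only for the example, and the indexing of xglat/yglat has to follow whatever node numbering the application actually uses.

      DM                    cda
      Vec                   gc
      PetscScalar, pointer :: coords2d(:,:,:)
      PetscInt              xs, ys, xm, ym, i, j

      call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
      call DMCreateGlobalVector(cda,gc,ierr)
      call DMDAGetCorners(cda,xs,ys,PETSC_NULL_INTEGER,xm,ym,PETSC_NULL_INTEGER,ierr)
      call DMDAVecGetArrayF90(cda,gc,coords2d,ierr)
      do j = ys, ys+ym-1
         do i = xs, xs+xm-1
            coords2d(0,i,j) = xglat(i)   ! x coordinate of node (i,j)
            coords2d(1,i,j) = yglat(j)   ! y coordinate of node (i,j)
         end do
      end do
      call DMDAVecRestoreArrayF90(cda,gc,coords2d,ierr)
      call DMSetCoordinates(dmda_flow%da,gc,ierr)
      call VecDestroy(gc,ierr)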
URL: From bsmith at mcs.anl.gov Fri Apr 25 12:06:33 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Apr 2014 12:06:33 -0500 Subject: [petsc-users] DMDACoor2d and DMDACoor3d in Fortran In-Reply-To: <535A9044.4010300@gmail.com> References: <535A9044.4010300@gmail.com> Message-ID: <4DF14B54-FBB8-43E3-86EA-6CDA967360D9@mcs.anl.gov> On Apr 25, 2014, at 11:41 AM, Danyang Su wrote: > Hi All, > > I tried to set the simulation domain using DMDA coordinates, following the example dm/examples/tutorials/ex3.c. The 1D problem worked fine but the 2D and 3D failed because of the definition in coors2d and coords3d. What should I use to define the variable coords2d and coords3d? Failed in compiling is not very useful. Much better to send the ENTIRE compiler error message. There is no DMDACoor2d or 3d in Fortran. You should use > PetscScalar, pointer :: coords2d(:,:,:) note there is three indices, the first index is 0 or 1 for the x or y coordinate Similar for 3d. Satish, Please see about adding a DMDACoor2d and 3d for Fortran. Request-assigned: Satish, add a DMDACoor2d and 3d for Fortran. Barry > > -->Codes section: > > PetscScalar, pointer :: coords1d(:) > DMDACoor2d, pointer :: coords2d(:,:) !Failed in compiling > DMDACoor3d, pointer :: coords3d(:,:,:) !Failed in compiling > Vec :: gc > > !1D domain > call DMGetGlobalVector(dmda_flow%da,gc,ierr) > call DMDAVecGetArrayF90(dmda_flow%da,gc,coords1d,ierr) > do ivx = nvxls,nvxle > coords1d(ivx-ibase) = xglat(ivx) > end do > call DMDAVecRestoreArrayF90(dmda_flow%da,gc,coords1d,ierr) > call DMSetCoordinates(dmda_flow%da,gc,ierr) > call DMRestoreGlobalVector(dmda_flow%da,gc,ierr) > > > !2D domain > call DMGetGlobalVector(dmda_flow%da,gc,ierr) > call DMDAVecGetArrayF90(dmda_flow%da,gc,coords2d,ierr) > do ivx = nvxls,nvxle > coords2d(ivx-ibase,ivy-ibase)%x = xglat(ivx) > coords2d(ivx-ibase,ivy-ibase)%y = yglat(ivy) > end do > call DMDAVecRestoreArrayF90(dmda_flow%da,gc,coords2d,ierr) > call DMSetCoordinates(dmda_flow%da,gc,ierr) > call DMRestoreGlobalVector(dmda_flow%da,gc,ierr) > > Thanks and regards, > > Danyang From bsmith at mcs.anl.gov Fri Apr 25 12:13:28 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Apr 2014 12:13:28 -0500 Subject: [petsc-users] PETSc-gamg default setting In-Reply-To: References: <3AD35119-ACF5-4AFD-B543-090464C0FB8E@cc.kogakuin.ac.jp> Message-ID: <52625D34-7851-4DCF-BDE9-50EC02DCCF47@mcs.anl.gov> Run with -ksp_view (or -snes_view etc). If that does not provide all relevant information then tell us what is missing so we can add that to the output. Defaults etc change over time so you should never rely on what someone thinks it is or what it ?should be?, just run with the viewer. barry On Apr 25, 2014, at 11:31 AM, Matthew Knepley wrote: > On Fri, Apr 25, 2014 at 11:26 AM, ???? wrote: > Hello everybody, > > I?d like to compare a solver with PETSc-gamg for Poisson problems. > I used PETSc version 3.4.3., and the following command line option. > > -ksp_type ?bcgs? -pc_type ?gamg? -ksp_monitor -ksp_rtol 1.0E-7 -log_summary -pc_mg_log > > > Would you give me some information on the following questions? > > - Does this option correspond to BICGSTAB solver with smoothed aggregation algebraic multigrid method? > > Yes > > - Are coarser level small distributed matrices re-distributed to reduce the parallelism, when the number of processes is very large? > > Yes > > - What kind of smoother will be used for this command line option? > > Cheby(2)/Jacobi is the default in GAMG. > > - pc_mg_log option shows the MG Apply stage. 
Does this stage time correspond to the solver time except for multi-level setup time? > Does it include the time for BICGSTAB? > > Yes, and no. > > Thanks, > > Matt > > Thanks in advance. > > Akihiro Fujii > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From mlohry at gmail.com Fri Apr 25 13:31:49 2014 From: mlohry at gmail.com (Mark Lohry) Date: Fri, 25 Apr 2014 14:31:49 -0400 Subject: [petsc-users] Structured multi-block topology Message-ID: To resurrect this thread: https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2012-August/014930.html Matt Knepley suggested the correct way to handle a structred multi-block mesh was to use DMComposite. I'm not seeing anything in the documentation about how to properly use DMComposite, however. What are the necessary steps to go from a single-block code using DMDA to a multi-block code that handles all the appropriate data passing at block boundaries? From bsmith at mcs.anl.gov Fri Apr 25 13:40:42 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Apr 2014 13:40:42 -0500 Subject: [petsc-users] Structured multi-block topology In-Reply-To: References: Message-ID: Mark, Are you thinking of the case of two or three (or so blocks) or are you thinking of many blocks? DMComposite helps with multiple DM?s but unfortunately does not do the difficult task of setting up the communication of data between the different block boundaries. There are some other packages out there for multi block domains that might be better for your case thus I ask about your particular situation and what you need. Barry On Apr 25, 2014, at 1:31 PM, Mark Lohry wrote: > To resurrect this thread: > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2012-August/014930.html > > Matt Knepley suggested the correct way to handle a structred > multi-block mesh was to use DMComposite. I'm not seeing anything in > the documentation about how to properly use DMComposite, however. > > What are the necessary steps to go from a single-block code using DMDA > to a multi-block code that handles all the appropriate data passing at > block boundaries? From mlohry at gmail.com Fri Apr 25 14:47:10 2014 From: mlohry at gmail.com (Mark Lohry) Date: Fri, 25 Apr 2014 15:47:10 -0400 Subject: [petsc-users] Structured multi-block topology In-Reply-To: References: Message-ID: Most common use case would be from 1 to 15 blocks or so, although I have occasionally seen need for 100+. This also brings to mind a question I had about more general connectivity in single-block domains like a C-mesh, where you have halo dependencies to itself that are not simply "periodic." As far as the difficult task of setting up the communication, is that even possible to do manually without treating everything as fully unstructured? What other packages are out there for multi block structured domains? On Fri, Apr 25, 2014 at 2:40 PM, Barry Smith wrote: > > Mark, > > Are you thinking of the case of two or three (or so blocks) or are you thinking of many blocks? > > DMComposite helps with multiple DM?s but unfortunately does not do the difficult task of setting up the communication of data between the different block boundaries. > > There are some other packages out there for multi block domains that might be better for your case thus I ask about your particular situation and what you need. 
> > Barry > > On Apr 25, 2014, at 1:31 PM, Mark Lohry wrote: > >> To resurrect this thread: >> https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2012-August/014930.html >> >> Matt Knepley suggested the correct way to handle a structred >> multi-block mesh was to use DMComposite. I'm not seeing anything in >> the documentation about how to properly use DMComposite, however. >> >> What are the necessary steps to go from a single-block code using DMDA >> to a multi-block code that handles all the appropriate data passing at >> block boundaries? > From knepley at gmail.com Fri Apr 25 17:03:32 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 25 Apr 2014 17:03:32 -0500 Subject: [petsc-users] Structured multi-block topology In-Reply-To: References: Message-ID: On Fri, Apr 25, 2014 at 2:47 PM, Mark Lohry wrote: > Most common use case would be from 1 to 15 blocks or so, although I > have occasionally seen need for 100+. This also brings to mind a > question I had about more general connectivity in single-block domains > like a C-mesh, where you have halo dependencies to itself that are not > simply "periodic." > Jed and I have discussed this. DMComposite is really not great for even moderate numbers of blocks because it serializes. Also, the real performance advantage of structured blocks is saturated by a small number of structured cells (32 or so), and thus large structured pieces do not make a lot of sense (this is the same intuition behind the performance of spectral elements). Our recommendation is to have a coarse unstructured mesh with regular refinement inside each coarse cell. This is something we could really support well. Its not all there yet, but we have all the pieces. > As far as the difficult task of setting up the communication, is that > even possible to do manually without treating everything as fully > unstructured? What other packages are out there for multi block > structured domains? > There is Overture. Matt > On Fri, Apr 25, 2014 at 2:40 PM, Barry Smith wrote: > > > > Mark, > > > > Are you thinking of the case of two or three (or so blocks) or are > you thinking of many blocks? > > > > DMComposite helps with multiple DM?s but unfortunately does not do > the difficult task of setting up the communication of data between the > different block boundaries. > > > > There are some other packages out there for multi block domains that > might be better for your case thus I ask about your particular situation > and what you need. > > > > Barry > > > > On Apr 25, 2014, at 1:31 PM, Mark Lohry wrote: > > > >> To resurrect this thread: > >> > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2012-August/014930.html > >> > >> Matt Knepley suggested the correct way to handle a structred > >> multi-block mesh was to use DMComposite. I'm not seeing anything in > >> the documentation about how to properly use DMComposite, however. > >> > >> What are the necessary steps to go from a single-block code using DMDA > >> to a multi-block code that handles all the appropriate data passing at > >> block boundaries? > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mlohry at gmail.com Fri Apr 25 18:02:15 2014 From: mlohry at gmail.com (Mark Lohry) Date: Fri, 25 Apr 2014 19:02:15 -0400 Subject: [petsc-users] Structured multi-block topology In-Reply-To: References: Message-ID: On Fri, Apr 25, 2014 at 6:03 PM, Matthew Knepley wrote: > On Fri, Apr 25, 2014 at 2:47 PM, Mark Lohry wrote: >> >> Most common use case would be from 1 to 15 blocks or so, although I >> have occasionally seen need for 100+. This also brings to mind a >> question I had about more general connectivity in single-block domains >> like a C-mesh, where you have halo dependencies to itself that are not >> simply "periodic." > > > Jed and I have discussed this. DMComposite is really not great for even > moderate numbers of blocks because it serializes. Also, the real performance > advantage of structured blocks is saturated by a small number of structured > cells (32 or so), and thus large structured pieces do not make a lot of > sense > (this is the same intuition behind the performance of spectral elements). Huh? Perhaps this is the case in finite elements, but in FV you see great performance (and accuracy vs nDOFs) benefits from relatively large structured blocks in conjunction with FAS multigrid. As a visual aid, I'm thinking of meshes like this which have on the order of 64^3 - 128^3 cells: http://spitfire.princeton.edu/mesh3.png > Our > recommendation is to have a coarse unstructured mesh with regular refinement > inside each coarse cell. This is something we could really support well. Its > not all there yet, but we have all the pieces. > Would you still be able to exchange the fine-mesh data across block boundaries? That would be essential. >> >> As far as the difficult task of setting up the communication, is that >> even possible to do manually without treating everything as fully >> unstructured? What other packages are out there for multi block >> structured domains? > > > There is Overture. > > Matt > Fair enough, thanks. For the time being it looks like I'll have to proceed by managing all the parallelization/message passing internally and manually filling in petsc vectors for the various solvers. From fujii at cc.kogakuin.ac.jp Sat Apr 26 02:07:24 2014 From: fujii at cc.kogakuin.ac.jp (=?utf-8?B?6Jek5LqV5pit5a6P?=) Date: Sat, 26 Apr 2014 16:07:24 +0900 Subject: [petsc-users] PETSc-gamg default setting In-Reply-To: <52625D34-7851-4DCF-BDE9-50EC02DCCF47@mcs.anl.gov> References: <3AD35119-ACF5-4AFD-B543-090464C0FB8E@cc.kogakuin.ac.jp> <52625D34-7851-4DCF-BDE9-50EC02DCCF47@mcs.anl.gov> Message-ID: <2A9E6F70-B493-419E-B341-ED1C3E87B630@cc.kogakuin.ac.jp> Thanks for your quick reply! It helps me much. Akihiro 2014/04/26 2:13?Barry Smith ????? > > Run with -ksp_view (or -snes_view etc). If that does not provide all relevant information then tell us what is missing so we can add that to the output. Defaults etc change over time so you should never rely on what someone thinks it is or what it ?should be?, just run with the viewer. > > barry > > On Apr 25, 2014, at 11:31 AM, Matthew Knepley wrote: > >> On Fri, Apr 25, 2014 at 11:26 AM, ???? wrote: >> Hello everybody, >> >> I?d like to compare a solver with PETSc-gamg for Poisson problems. >> I used PETSc version 3.4.3., and the following command line option. >> >> -ksp_type ?bcgs? -pc_type ?gamg? -ksp_monitor -ksp_rtol 1.0E-7 -log_summary -pc_mg_log >> >> >> Would you give me some information on the following questions? 
>> >> - Does this option correspond to BICGSTAB solver with smoothed aggregation algebraic multigrid method? >> >> Yes >> >> - Are coarser level small distributed matrices re-distributed to reduce the parallelism, when the number of processes is very large? >> >> Yes >> >> - What kind of smoother will be used for this command line option? >> >> Cheby(2)/Jacobi is the default in GAMG. >> >> - pc_mg_log option shows the MG Apply stage. Does this stage time correspond to the solver time except for multi-level setup time? >> Does it include the time for BICGSTAB? >> >> Yes, and no. >> >> Thanks, >> >> Matt >> >> Thanks in advance. >> >> Akihiro Fujii >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > From knepley at gmail.com Sat Apr 26 05:04:43 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 26 Apr 2014 05:04:43 -0500 Subject: [petsc-users] Structured multi-block topology In-Reply-To: References: Message-ID: On Fri, Apr 25, 2014 at 6:02 PM, Mark Lohry wrote: > On Fri, Apr 25, 2014 at 6:03 PM, Matthew Knepley > wrote: > > On Fri, Apr 25, 2014 at 2:47 PM, Mark Lohry wrote: > >> > >> Most common use case would be from 1 to 15 blocks or so, although I > >> have occasionally seen need for 100+. This also brings to mind a > >> question I had about more general connectivity in single-block domains > >> like a C-mesh, where you have halo dependencies to itself that are not > >> simply "periodic." > > > > > > Jed and I have discussed this. DMComposite is really not great for even > > moderate numbers of blocks because it serializes. Also, the real > performance > > advantage of structured blocks is saturated by a small number of > structured > > cells (32 or so), and thus large structured pieces do not make a lot of > > sense > > (this is the same intuition behind the performance of spectral elements). > > Huh? Perhaps this is the case in finite elements, but in FV you see > great performance (and accuracy vs nDOFs) benefits from relatively > large structured blocks in conjunction with FAS multigrid. As a visual > aid, I'm thinking of meshes like this which have on the order of 64^3 > - 128^3 cells: > http://spitfire.princeton.edu/mesh3.png I need to clarify. I am sure you see great performance from a large structured mesh. However, that performance comes from good vectorization and decent blocking for the loads. Thus it will not decrease if you made your structured pieces smaller. You can do FAS on unstructured meshes just fine (we have an example in PETSc). The coarsest meshes would not be structured, but the performance is not a big issue there. > > > Our > > recommendation is to have a coarse unstructured mesh with regular > refinement > > inside each coarse cell. This is something we could really support well. > Its > > not all there yet, but we have all the pieces. > > > > Would you still be able to exchange the fine-mesh data across block > boundaries? That would be essential. > There would be no blocks, so yes that would be easy. Thanks, Matt > >> > >> As far as the difficult task of setting up the communication, is that > >> even possible to do manually without treating everything as fully > >> unstructured? What other packages are out there for multi block > >> structured domains? > > > > > > There is Overture. > > > > Matt > > > > > Fair enough, thanks. 
For the time being it looks like I'll have to > proceed by managing all the parallelization/message passing internally > and manually filling in petsc vectors for the various solvers. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From yuqing.xia at colorado.edu Sat Apr 26 20:48:04 2014 From: yuqing.xia at colorado.edu (yuqing xia) Date: Sat, 26 Apr 2014 19:48:04 -0600 Subject: [petsc-users] About compatibility with C++ complex type Message-ID: Dear Developer and Users I am new to SLEPC. So my PETSC is compiled with flag --with-scalar-type=complex --with-clanguage=c++, therefore SLEPC should also support c++ complex type. But when I tried to insert C++ complex type value into a PETSC complex matrix, It seems SLEPC's inside complex type is not compatible with C++ complex type according to the test result. Here is the matrix I want to insert their value to Petsc matrix (-0.131541,0)(0,0)(0,0) (0,0)(-0.188541,0)(0,0) (0,0)(0,0)(-0.245541,0) Here is the Petsc matrix after inserting the above matrix value into it Matrix Object: 1 MPI processes type: seqaij row 0: (0, -0.131541) (1, 0) (2, 0) row 1: (0, 0) (1, -0.188541) (2, 0) row 2: (0, 0) (1, 0) (2, -0.24554 Does anyone know how to avoid the problem. Best Yuqing Xia -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Apr 27 00:36:43 2014 From: jed at jedbrown.org (Jed Brown) Date: Sat, 26 Apr 2014 23:36:43 -0600 Subject: [petsc-users] About compatibility with C++ complex type In-Reply-To: References: Message-ID: <87wqeb2v10.fsf@jedbrown.org> yuqing xia writes: > Dear Developer and Users > > I am new to SLEPC. So my PETSC is compiled with flag > --with-scalar-type=complex --with-clanguage=c++, therefore SLEPC should > also support c++ complex type. But when I tried to insert C++ complex type > value into a PETSC complex matrix, It seems SLEPC's inside complex type is > not compatible with C++ complex type according to the test result. > > Here is the matrix I want to insert their value to Petsc matrix > (-0.131541,0)(0,0)(0,0) > (0,0)(-0.188541,0)(0,0) > (0,0)(0,0)(-0.245541,0) > Here is the Petsc matrix after inserting the above matrix value into it > Matrix Object: 1 MPI processes > type: seqaij > row 0: (0, -0.131541) (1, 0) (2, 0) > row 1: (0, 0) (1, -0.188541) (2, 0) > row 2: (0, 0) (1, 0) (2, -0.24554 These are (column_index, matrix_entry) pairs. > Does anyone know how to avoid the problem. > > Best > Yuqing Xia -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From yuqing.xia at colorado.edu Sun Apr 27 00:44:36 2014 From: yuqing.xia at colorado.edu (yuqing xia) Date: Sat, 26 Apr 2014 23:44:36 -0600 Subject: [petsc-users] About compatibility with C++ complex type In-Reply-To: <87wqeb2v10.fsf@jedbrown.org> References: <87wqeb2v10.fsf@jedbrown.org> Message-ID: It make sense now. Thanks Jed. On Sat, Apr 26, 2014 at 11:36 PM, Jed Brown wrote: > yuqing xia writes: > > > Dear Developer and Users > > > > I am new to SLEPC. So my PETSC is compiled with flag > > --with-scalar-type=complex --with-clanguage=c++, therefore SLEPC should > > also support c++ complex type. 
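To illustrate the (column_index, matrix_entry) point, here is a small hypothetical sketch (C, assuming a complex-enabled build so PetscScalar is a complex type; error checking omitted) that produces output of the same shape as the one quoted in this thread:

#include <petscmat.h>

int main(int argc,char **argv)
{
  Mat         A;
  PetscScalar d[3];
  PetscInt    i;

  PetscInitialize(&argc,&argv,NULL,NULL);
  d[0] = -0.131541 + 0.0*PETSC_i;
  d[1] = -0.188541 + 0.0*PETSC_i;
  d[2] = -0.245541 + 0.0*PETSC_i;

  MatCreateSeqAIJ(PETSC_COMM_SELF,3,3,1,NULL,&A);
  for (i=0; i<3; i++) MatSetValue(A,i,i,d[i],INSERT_VALUES);
  MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);

  /* the ASCII viewer prints each stored entry as (column index, value);
     a line such as "row 0: (0, -0.131541)" therefore means column 0 of
     row 0 holds -0.131541, not that part of the value has been lost */
  MatView(A,PETSC_VIEWER_STDOUT_SELF);

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}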
But when I tried to insert C++ complex > type > > value into a PETSC complex matrix, It seems SLEPC's inside complex type > is > > not compatible with C++ complex type according to the test result. > > > > Here is the matrix I want to insert their value to Petsc matrix > > (-0.131541,0)(0,0)(0,0) > > (0,0)(-0.188541,0)(0,0) > > (0,0)(0,0)(-0.245541,0) > > Here is the Petsc matrix after inserting the above matrix value into it > > Matrix Object: 1 MPI processes > > type: seqaij > > row 0: (0, -0.131541) (1, 0) (2, 0) > > row 1: (0, 0) (1, -0.188541) (2, 0) > > row 2: (0, 0) (1, 0) (2, -0.24554 > > These are (column_index, matrix_entry) pairs. > > > Does anyone know how to avoid the problem. > > > > Best > > Yuqing Xia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Sun Apr 27 02:53:21 2014 From: danyang.su at gmail.com (Danyang Su) Date: Sun, 27 Apr 2014 00:53:21 -0700 Subject: [petsc-users] DMDACoor2d and DMDACoor3d in Fortran In-Reply-To: <4DF14B54-FBB8-43E3-86EA-6CDA967360D9@mcs.anl.gov> References: <535A9044.4010300@gmail.com> <4DF14B54-FBB8-43E3-86EA-6CDA967360D9@mcs.anl.gov> Message-ID: <535CB771.3040000@gmail.com> Hi Barry, Another question is about DMDAGetLocalBoundingBox in fortran. PetscErrorCode DMDAGetLocalBoundingBox(DM da,PetscReal lmin[],PetscReal lmax[]) The first value of lmin and lmax (lmin(0)) are always zero, and lmin(1), lmin(2), and lmin(3) are for x, y, and z dimension, respectively. And the returned value is index (local node index in x, y, z dim), not the coordinate. Correct? Thanks and regards, Danyang On 25/04/2014 10:06 AM, Barry Smith wrote: > On Apr 25, 2014, at 11:41 AM, Danyang Su wrote: > >> Hi All, >> >> I tried to set the simulation domain using DMDA coordinates, following the example dm/examples/tutorials/ex3.c. The 1D problem worked fine but the 2D and 3D failed because of the definition in coors2d and coords3d. What should I use to define the variable coords2d and coords3d? > Failed in compiling is not very useful. Much better to send the ENTIRE compiler error message. > > There is no DMDACoor2d or 3d in Fortran. You should use > >> PetscScalar, pointer :: coords2d(:,:,:) > note there is three indices, the first index is 0 or 1 for the x or y coordinate > > Similar for 3d. > > Satish, > > Please see about adding a DMDACoor2d and 3d for Fortran. > > Request-assigned: Satish, add a DMDACoor2d and 3d for Fortran. 
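For comparison, the C side of the same coordinate access, where the DMDACoor2d struct does exist, looks roughly like the sketch below (error checking omitted; da is assumed to be an existing 2d DMDA whose coordinates have been set). The Fortran interface exposes exactly the same data, just as a plain array whose extra leading index selects the component, 0 for x and 1 for y, as described above.

#include <petscdmda.h>

PetscErrorCode PrintLocalCoordinates(DM da)
{
  DM         cda;
  Vec        gc;
  DMDACoor2d **coords;                   /* struct with .x and .y members */
  PetscInt   i,j,xs,ys,xm,ym;

  DMGetCoordinateDM(da,&cda);            /* the DMDA that describes the coordinates */
  DMGetCoordinatesLocal(da,&gc);         /* coordinate vector including ghost points */
  DMDAVecGetArray(cda,gc,&coords);
  DMDAGetCorners(da,&xs,&ys,NULL,&xm,&ym,NULL);
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      PetscPrintf(PETSC_COMM_SELF,"node (%D,%D) at (%g,%g)\n",i,j,
                  (double)PetscRealPart(coords[j][i].x),
                  (double)PetscRealPart(coords[j][i].y));
    }
  }
  DMDAVecRestoreArray(cda,gc,&coords);
  return 0;
}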
> > > Barry > > > >> -->Codes section: >> >> PetscScalar, pointer :: coords1d(:) >> DMDACoor2d, pointer :: coords2d(:,:) !Failed in compiling >> DMDACoor3d, pointer :: coords3d(:,:,:) !Failed in compiling >> Vec :: gc >> >> !1D domain >> call DMGetGlobalVector(dmda_flow%da,gc,ierr) >> call DMDAVecGetArrayF90(dmda_flow%da,gc,coords1d,ierr) >> do ivx = nvxls,nvxle >> coords1d(ivx-ibase) = xglat(ivx) >> end do >> call DMDAVecRestoreArrayF90(dmda_flow%da,gc,coords1d,ierr) >> call DMSetCoordinates(dmda_flow%da,gc,ierr) >> call DMRestoreGlobalVector(dmda_flow%da,gc,ierr) >> >> >> !2D domain >> call DMGetGlobalVector(dmda_flow%da,gc,ierr) >> call DMDAVecGetArrayF90(dmda_flow%da,gc,coords2d,ierr) >> do ivx = nvxls,nvxle >> coords2d(ivx-ibase,ivy-ibase)%x = xglat(ivx) >> coords2d(ivx-ibase,ivy-ibase)%y = yglat(ivy) >> end do >> call DMDAVecRestoreArrayF90(dmda_flow%da,gc,coords2d,ierr) >> call DMSetCoordinates(dmda_flow%da,gc,ierr) >> call DMRestoreGlobalVector(dmda_flow%da,gc,ierr) >> >> Thanks and regards, >> >> Danyang From jsd1 at rice.edu Sun Apr 27 04:14:20 2014 From: jsd1 at rice.edu (Justin Dong) Date: Sun, 27 Apr 2014 04:14:20 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel Message-ID: Hi all, I'm currently coding a finite element problem in PETSc. I'm computing all of the matrices by myself and would prefer to do it that way. I want to parallelize the assembly of the global matrices. This is a simplified version of the assembly routine (pseudocode): for (k = 0; k < nelements; ++k) { get_index(k,ie,je); /* ie and je are arrays that indicate where to place A_local */ compute_local_matrix(A_local,...); MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); } This is for DG finite elements and I know the matrix easily be assembled in parallel. Even on my laptop, this would allow me to significantly speed up my code. The only problem is, I'm not sure how to do this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might do this. Sincerely, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Apr 27 05:57:06 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Apr 2014 05:57:06 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: > Hi all, > > I'm currently coding a finite element problem in PETSc. I'm computing all > of the matrices by myself and would prefer to do it that way. I want to > parallelize the assembly of the global matrices. This is a simplified > version of the assembly routine (pseudocode): > > for (k = 0; k < nelements; ++k) > { > get_index(k,ie,je); /* ie and je are arrays that indicate where to > place A_local */ > compute_local_matrix(A_local,...); > > MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); > } > > This is for DG finite elements and I know the matrix easily be assembled > in parallel. Even on my laptop, this would allow me to significantly speed > up my code. The only problem is, I'm not sure how to do this in PETSc. I'd > assume I need to use MatCreateMPIAIJ instead of MatCreateSeqAIJ, but wasn't > able to find any using tutorials on how I might do this. > 1) If you just split this loop across processes, it would work immediately. However, that can be non-optimal in terms of communication. 
2) PETSc matrices are distributed by contiguous blocks of rows (see manual section on matrices), so you would like those row blocks to correspond roughly to your element blocks. 3) You will also have to preallocate, but that should be the same as you do now for the sequential case, except you check whether a column is inside the diagonal block. Thanks, Matt > Sincerely, > Justin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd1 at rice.edu Sun Apr 27 06:25:56 2014 From: jsd1 at rice.edu (Justin Dong) Date: Sun, 27 Apr 2014 06:25:56 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: Hi Matt, For option 1), that would be using MPI and not any special functions in PETSc? I ask since I've never done parallel programming before. I tried using OpenMP to parallelize that for loop but it resulted in some errors and I didn't investigate further, but I'm assuming it's because each process wasn't communicating properly with the others during MatSetValues? Sincerely, Justin On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: > >> Hi all, >> >> I'm currently coding a finite element problem in PETSc. I'm computing all >> of the matrices by myself and would prefer to do it that way. I want to >> parallelize the assembly of the global matrices. This is a simplified >> version of the assembly routine (pseudocode): >> >> for (k = 0; k < nelements; ++k) >> { >> get_index(k,ie,je); /* ie and je are arrays that indicate where to >> place A_local */ >> compute_local_matrix(A_local,...); >> >> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); >> } >> >> This is for DG finite elements and I know the matrix easily be assembled >> in parallel. Even on my laptop, this would allow me to significantly speed >> up my code. The only problem is, I'm not sure how to do this in PETSc. I'd >> assume I need to use MatCreateMPIAIJ instead of MatCreateSeqAIJ, but wasn't >> able to find any using tutorials on how I might do this. >> > > 1) If you just split this loop across processes, it would work > immediately. However, that can be non-optimal in terms > of communication. > > 2) PETSc matrices are distributed by contiguous blocks of rows (see manual > section on matrices), so you would like > those row blocks to correspond roughly to your element blocks. > > 3) You will also have to preallocate, but that should be the same as you > do now for the sequential case, except you > check whether a column is inside the diagonal block. > > Thanks, > > Matt > > >> Sincerely, >> Justin >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Apr 27 06:54:55 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Apr 2014 06:54:55 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong wrote: > Hi Matt, > > For option 1), that would be using MPI and not any special functions in > PETSc? 
I ask since I've never done parallel programming before. I tried > using OpenMP to parallelize that for loop but it resulted in some errors > and I didn't investigate further, but I'm assuming it's because each > process wasn't communicating properly with the others during MatSetValues? > Yes, using MPI, so each process owns a set of elements that it loops over. The Mat object manages the global values as long as you use global indices for the (row, column). There are of course many refinements for this, but I think the thing to do is get something working fast, and then optimize it. Thanks, Matt > Sincerely, > Justin > > > On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: > >> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: >> >>> Hi all, >>> >>> I'm currently coding a finite element problem in PETSc. I'm computing >>> all of the matrices by myself and would prefer to do it that way. I want to >>> parallelize the assembly of the global matrices. This is a simplified >>> version of the assembly routine (pseudocode): >>> >>> for (k = 0; k < nelements; ++k) >>> { >>> get_index(k,ie,je); /* ie and je are arrays that indicate where to >>> place A_local */ >>> compute_local_matrix(A_local,...); >>> >>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); >>> } >>> >>> This is for DG finite elements and I know the matrix easily be assembled >>> in parallel. Even on my laptop, this would allow me to significantly speed >>> up my code. The only problem is, I'm not sure how to do this in PETSc. I'd >>> assume I need to use MatCreateMPIAIJ instead of MatCreateSeqAIJ, but wasn't >>> able to find any using tutorials on how I might do this. >>> >> >> 1) If you just split this loop across processes, it would work >> immediately. However, that can be non-optimal in terms >> of communication. >> >> 2) PETSc matrices are distributed by contiguous blocks of rows (see >> manual section on matrices), so you would like >> those row blocks to correspond roughly to your element blocks. >> >> 3) You will also have to preallocate, but that should be the same as you >> do now for the sequential case, except you >> check whether a column is inside the diagonal block. >> >> Thanks, >> >> Matt >> >> >>> Sincerely, >>> Justin >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd1 at rice.edu Sun Apr 27 07:47:34 2014 From: jsd1 at rice.edu (Justin Dong) Date: Sun, 27 Apr 2014 07:47:34 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: I think I've got the right idea on how to divide up the for loop: MPI_Comm_size(MPI_COMM_WORLD,&numprocs); MPI_Comm_rank(MPI_COMM_WORLD,&myid); mystart = (nelements / numprocs) * myid; if (nelements % numprocs > myid) { mystart += myid; myend = mystart + (nelements / numprocs) + 1; } else { mystart += nelements % numprocs; myend = mystart + (nelements / numprocs); } But then, do I do this? 
for (k = mystart+1; k < myend; ++k) { get_index(k,ie,je); /* ie and je are arrays that indicate where to place A_local */ compute_local_matrix(A_local,...); MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); } The indices are global indices and I'm doing MatCreateSeqAIJ(PETSC_COMM_WORLD, nglobal, nglobal, 3*nlocal, PETSC_NULL, &A_global); to create the matrix. Running the program seems to give me multiple errors, but mainly [3]PETSC ERROR: Object is in wrong state! [3]PETSC ERROR: Mat object's type is not set: Argument # 1! On Sun, Apr 27, 2014 at 6:54 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong wrote: > >> Hi Matt, >> >> For option 1), that would be using MPI and not any special functions in >> PETSc? I ask since I've never done parallel programming before. I tried >> using OpenMP to parallelize that for loop but it resulted in some errors >> and I didn't investigate further, but I'm assuming it's because each >> process wasn't communicating properly with the others during MatSetValues? >> > > Yes, using MPI, so each process owns a set of elements that it loops over. > The Mat object manages the global > values as long as you use global indices for the (row, column). There are > of course many refinements for this, > but I think the thing to do is get something working fast, and then > optimize it. > > Thanks, > > Matt > > >> Sincerely, >> Justin >> >> >> On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: >> >>> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: >>> >>>> Hi all, >>>> >>>> I'm currently coding a finite element problem in PETSc. I'm computing >>>> all of the matrices by myself and would prefer to do it that way. I want to >>>> parallelize the assembly of the global matrices. This is a simplified >>>> version of the assembly routine (pseudocode): >>>> >>>> for (k = 0; k < nelements; ++k) >>>> { >>>> get_index(k,ie,je); /* ie and je are arrays that indicate where to >>>> place A_local */ >>>> compute_local_matrix(A_local,...); >>>> >>>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); >>>> } >>>> >>>> This is for DG finite elements and I know the matrix easily be >>>> assembled in parallel. Even on my laptop, this would allow me to >>>> significantly speed up my code. The only problem is, I'm not sure how to do >>>> this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of >>>> MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might >>>> do this. >>>> >>> >>> 1) If you just split this loop across processes, it would work >>> immediately. However, that can be non-optimal in terms >>> of communication. >>> >>> 2) PETSc matrices are distributed by contiguous blocks of rows (see >>> manual section on matrices), so you would like >>> those row blocks to correspond roughly to your element blocks. >>> >>> 3) You will also have to preallocate, but that should be the same as you >>> do now for the sequential case, except you >>> check whether a column is inside the diagonal block. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Sincerely, >>>> Justin >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Apr 27 08:35:41 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Apr 2014 08:35:41 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: On Sun, Apr 27, 2014 at 7:47 AM, Justin Dong wrote: > I think I've got the right idea on how to divide up the for loop: > > MPI_Comm_size(MPI_COMM_WORLD,&numprocs); > MPI_Comm_rank(MPI_COMM_WORLD,&myid); > > mystart = (nelements / numprocs) * myid; > if (nelements % numprocs > myid) > { > mystart += myid; > myend = mystart + (nelements / numprocs) + 1; > } > else > { > mystart += nelements % numprocs; > myend = mystart + (nelements / numprocs); > } > We have a function that does exactly this: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscSplitOwnership.html > But then, do I do this? > Yes. > for (k = mystart+1; k < myend; ++k) > { > get_index(k,ie,je); /* ie and je are arrays that indicate where to > place A_local */ > compute_local_matrix(A_local,...); > > MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); > } > > The indices are global indices and I'm doing > > MatCreateSeqAIJ(PETSC_COMM_WORLD, nglobal, nglobal, 3*nlocal, > PETSC_NULL, &A_global); > This should be MatCreateMPIAIJ() for parallel execution. Matt to create the matrix. Running the program seems to give me multiple errors, > but mainly > > [3]PETSC ERROR: Object is in wrong state! > > [3]PETSC ERROR: Mat object's type is not set: Argument # 1! > > > > > > On Sun, Apr 27, 2014 at 6:54 AM, Matthew Knepley wrote: > >> On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong wrote: >> >>> Hi Matt, >>> >>> For option 1), that would be using MPI and not any special functions in >>> PETSc? I ask since I've never done parallel programming before. I tried >>> using OpenMP to parallelize that for loop but it resulted in some errors >>> and I didn't investigate further, but I'm assuming it's because each >>> process wasn't communicating properly with the others during MatSetValues? >>> >> >> Yes, using MPI, so each process owns a set of elements that it loops >> over. The Mat object manages the global >> values as long as you use global indices for the (row, column). There are >> of course many refinements for this, >> but I think the thing to do is get something working fast, and then >> optimize it. >> >> Thanks, >> >> Matt >> >> >>> Sincerely, >>> Justin >>> >>> >>> On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: >>> >>>> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: >>>> >>>>> Hi all, >>>>> >>>>> I'm currently coding a finite element problem in PETSc. I'm computing >>>>> all of the matrices by myself and would prefer to do it that way. I want to >>>>> parallelize the assembly of the global matrices. This is a simplified >>>>> version of the assembly routine (pseudocode): >>>>> >>>>> for (k = 0; k < nelements; ++k) >>>>> { >>>>> get_index(k,ie,je); /* ie and je are arrays that indicate where to >>>>> place A_local */ >>>>> compute_local_matrix(A_local,...); >>>>> >>>>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, >>>>> ADD_VALUES); >>>>> } >>>>> >>>>> This is for DG finite elements and I know the matrix easily be >>>>> assembled in parallel. Even on my laptop, this would allow me to >>>>> significantly speed up my code. The only problem is, I'm not sure how to do >>>>> this in PETSc. 
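Pulling the suggestions in this thread together, a rough sketch of the parallel assembly might look like the following (C; error checking omitted; get_index and compute_local_matrix are the poster's routines, so their exact signatures are guessed here, and 3*nlocal per row is just the same preallocation estimate used in the serial code). The key points are that the elements, not the rows, are what gets split across ranks, and that MatSetValues with global indices plus the final assembly calls take care of shipping any off-process contributions.

#include <petscmat.h>

/* the poster's routines, assumed to exist with roughly these signatures */
extern void get_index(PetscInt k,PetscInt ie[],PetscInt je[]);
extern void compute_local_matrix(PetscScalar A_local[],PetscInt k);

Mat AssembleGlobal(MPI_Comm comm,PetscInt nelements,PetscInt nlocal)
{
  Mat         A;
  PetscInt    nelem_loc = PETSC_DECIDE,rstart,rend,estart,eend,k;
  PetscInt    *ie,*je;
  PetscScalar *A_local;

  /* split the elements across the ranks */
  PetscSplitOwnership(comm,&nelem_loc,&nelements);

  /* each element contributes nlocal consecutive global rows; MatCreateAIJ
     is the current name of MatCreateMPIAIJ */
  MatCreateAIJ(comm,nelem_loc*nlocal,nelem_loc*nlocal,
               nelements*nlocal,nelements*nlocal,
               3*nlocal,NULL,3*nlocal,NULL,&A);

  MatGetOwnershipRange(A,&rstart,&rend);
  estart = rstart/nlocal;  eend = rend/nlocal;   /* this rank's element range */

  PetscMalloc(nlocal*sizeof(PetscInt),&ie);
  PetscMalloc(nlocal*sizeof(PetscInt),&je);
  PetscMalloc(nlocal*nlocal*sizeof(PetscScalar),&A_local);
  for (k=estart; k<eend; k++) {
    get_index(k,ie,je);                          /* global row/column indices */
    compute_local_matrix(A_local,k);
    MatSetValues(A,nlocal,ie,nlocal,je,A_local,ADD_VALUES);
  }
  PetscFree(ie); PetscFree(je); PetscFree(A_local);

  /* entries destined for other ranks are communicated during assembly */
  MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
  return A;
}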
I'd assume I need to use MatCreateMPIAIJ instead of >>>>> MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might >>>>> do this. >>>>> >>>> >>>> 1) If you just split this loop across processes, it would work >>>> immediately. However, that can be non-optimal in terms >>>> of communication. >>>> >>>> 2) PETSc matrices are distributed by contiguous blocks of rows (see >>>> manual section on matrices), so you would like >>>> those row blocks to correspond roughly to your element blocks. >>>> >>>> 3) You will also have to preallocate, but that should be the same as >>>> you do now for the sequential case, except you >>>> check whether a column is inside the diagonal block. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Sincerely, >>>>> Justin >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Apr 27 09:00:30 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 27 Apr 2014 08:00:30 -0600 Subject: [petsc-users] DMDACoor2d and DMDACoor3d in Fortran In-Reply-To: <535CB771.3040000@gmail.com> References: <535A9044.4010300@gmail.com> <4DF14B54-FBB8-43E3-86EA-6CDA967360D9@mcs.anl.gov> <535CB771.3040000@gmail.com> Message-ID: <871twi3m9t.fsf@jedbrown.org> Danyang Su writes: > Hi Barry, > > Another question is about DMDAGetLocalBoundingBox in fortran. > > PetscErrorCode DMDAGetLocalBoundingBox(DM da,PetscReal lmin[],PetscReal > lmax[]) > > The first value of lmin and lmax (lmin(0)) are always zero, lmin and lmax are normal arrays of length 3. In Fortran, you don't access lmin(0). > and lmin(1), lmin(2), and lmin(3) are for x, y, and z dimension, > respectively. And the returned value is index (local node index in x, > y, z dim), not the coordinate. Correct? It is the coordinate value, assuming you have set DMSetCoordinates. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From danyang.su at gmail.com Sun Apr 27 15:40:06 2014 From: danyang.su at gmail.com (Danyang Su) Date: Sun, 27 Apr 2014 13:40:06 -0700 Subject: [petsc-users] DMDACoor2d and DMDACoor3d in Fortran In-Reply-To: <871twi3m9t.fsf@jedbrown.org> References: <535A9044.4010300@gmail.com> <4DF14B54-FBB8-43E3-86EA-6CDA967360D9@mcs.anl.gov> <535CB771.3040000@gmail.com> <871twi3m9t.fsf@jedbrown.org> Message-ID: <535D6B26.8000502@gmail.com> Hi Jed, Thanks, the problem is solved. call DMGetCoordinateDM(dmda_flow%da,cda,ierr) call DMGetCoordinates(dmda_flow%da,gc,ierr) call DMDAGetLocalBoundingBox(cda,lmin,lmax,ierr) !this will return node index value in x,y,z dim call DMDAGetLocalBoundingBox(dmda_flow%da,lmin,lmax,ierr) !this will return coordinate value Thanks and regards, Danyang On 27/04/2014 7:00 AM, Jed Brown wrote: > Danyang Su writes: > >> Hi Barry, >> >> Another question is about DMDAGetLocalBoundingBox in fortran. 
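A minimal C sketch of the call being discussed here (error checking omitted; da is assumed to be a DMDA whose coordinates have already been set, otherwise the numbers are not meaningful as coordinates):

#include <petscdmda.h>

PetscErrorCode PrintLocalBox(DM da)
{
  PetscReal lmin[3],lmax[3];             /* plain arrays of length 3 */

  DMDAGetLocalBoundingBox(da,lmin,lmax);
  /* with coordinates set, these bound the coordinates of the locally owned
     part of the grid, e.g. lmin[0]/lmax[0] for x and lmin[1]/lmax[1] for y */
  PetscPrintf(PETSC_COMM_SELF,"x in [%g,%g], y in [%g,%g]\n",
              (double)lmin[0],(double)lmax[0],(double)lmin[1],(double)lmax[1]);
  return 0;
}

The Fortran call is the same apart from the trailing ierr argument, with the arrays indexed 1..3 rather than 0..2.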
>> >> PetscErrorCode DMDAGetLocalBoundingBox(DM da,PetscReal lmin[],PetscReal >> lmax[]) >> >> The first value of lmin and lmax (lmin(0)) are always zero, > lmin and lmax are normal arrays of length 3. In Fortran, you don't > access lmin(0). > >> and lmin(1), lmin(2), and lmin(3) are for x, y, and z dimension, >> respectively. And the returned value is index (local node index in x, >> y, z dim), not the coordinate. Correct? > It is the coordinate value, assuming you have set DMSetCoordinates. From jsd1 at rice.edu Sun Apr 27 19:22:33 2014 From: jsd1 at rice.edu (Justin Dong) Date: Sun, 27 Apr 2014 19:22:33 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: Thanks for your help -- the matrix assembly is working correctly now. However, say I want to solve the linear problem in serial now. I have A_global and b_global created using MatCreateAIJ and VecCreateMPI, but want to solve A_global*x = b_global using PCLU. I have: KSPCreate(PETSC_COMM_SELF, &ksp); KSPSetOperators(ksp, A_global, A_global, DIFFERENT_NONZERO_PATTERN); KSPGetPC(ksp, &pc); PCSetType(pc, PCLU); KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); KSPSolve(ksp, b_global, b_global); Is there anyway to get this to work? I'd prefer using PCLU rightnow, and besides, the speed-up from assembling in parallel is already significant. On Sun, Apr 27, 2014 at 9:11 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 9:05 AM, Justin Dong wrote: > >> When I try to invoke MatCreateMPIAIJ, I'm getting this warning: >> >> *main.c:1842:2: **warning: **implicit declaration of function >> 'MatCreateMPIAIJ' is invalid in C99 [-Wimplicit-function-declaration]* >> >> MatCreateMPIAIJ(PETSC_COMM_SELF, NLoc, NLoc, NLoc*nElems, >> NLoc*nElems, (ref+3)*NLoc, >> >> * ^* >> >> and this error: >> > The name was changed in the last release: > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateAIJ.html > > Thanks, > > Matt > > >> Undefined symbols for architecture x86_64: >> >> "_MatCreateMPIAIJ", referenced from: >> >> _DG_SolveOil in main.o >> >> ld: symbol(s) not found for architecture x86_64 >> >> clang: error: linker command failed with exit code 1 (use -v to see >> invocation) >> >> make: [main] Error 1 (ignored) >> >> /bin/rm -f -f main.o >> >> >> This is how I'm calling MatCreateMPIAIJ: >> >> MatCreateMPIAIJ(PETSC_COMM_WORLD, nlocal, nlocal, nlocal*nelements, >> nlocal*nelements, 3*nlocal, >> >> PETSC_NULL, 3*nlocal, PETSC_NULL, &A_global); >> >> >> I'm able to run some simple MPI functions and use mpiexec, so I don't >> think it's an issue with MPI itself. >> Sincerely, >> Justin >> >> >> >> On Sun, Apr 27, 2014 at 8:35 AM, Matthew Knepley wrote: >> >>> On Sun, Apr 27, 2014 at 7:47 AM, Justin Dong wrote: >>> >>>> I think I've got the right idea on how to divide up the for loop: >>>> >>>> MPI_Comm_size(MPI_COMM_WORLD,&numprocs); >>>> MPI_Comm_rank(MPI_COMM_WORLD,&myid); >>>> >>>> mystart = (nelements / numprocs) * myid; >>>> if (nelements % numprocs > myid) >>>> { >>>> mystart += myid; >>>> myend = mystart + (nelements / numprocs) + 1; >>>> } >>>> else >>>> { >>>> mystart += nelements % numprocs; >>>> myend = mystart + (nelements / numprocs); >>>> } >>>> >>> >>> We have a function that does exactly this: >>> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscSplitOwnership.html >>> >>> >>>> But then, do I do this? >>>> >>> >>> Yes. 
>>> >>> >>>> for (k = mystart+1; k < myend; ++k) >>>> { >>>> get_index(k,ie,je); /* ie and je are arrays that indicate where to >>>> place A_local */ >>>> compute_local_matrix(A_local,...); >>>> >>>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); >>>> } >>>> >>>> The indices are global indices and I'm doing >>>> >>>> MatCreateSeqAIJ(PETSC_COMM_WORLD, nglobal, nglobal, 3*nlocal, >>>> PETSC_NULL, &A_global); >>>> >>> >>> This should be MatCreateMPIAIJ() for parallel execution. >>> >>> Matt >>> >>> to create the matrix. Running the program seems to give me multiple >>>> errors, but mainly >>>> >>>> [3]PETSC ERROR: Object is in wrong state! >>>> >>>> [3]PETSC ERROR: Mat object's type is not set: Argument # 1! >>>> >>>> >>>> >>>> >>>> >>>> On Sun, Apr 27, 2014 at 6:54 AM, Matthew Knepley wrote: >>>> >>>>> On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong wrote: >>>>> >>>>>> Hi Matt, >>>>>> >>>>>> For option 1), that would be using MPI and not any special functions >>>>>> in PETSc? I ask since I've never done parallel programming before. I tried >>>>>> using OpenMP to parallelize that for loop but it resulted in some errors >>>>>> and I didn't investigate further, but I'm assuming it's because each >>>>>> process wasn't communicating properly with the others during MatSetValues? >>>>>> >>>>> >>>>> Yes, using MPI, so each process owns a set of elements that it loops >>>>> over. The Mat object manages the global >>>>> values as long as you use global indices for the (row, column). There >>>>> are of course many refinements for this, >>>>> but I think the thing to do is get something working fast, and then >>>>> optimize it. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Sincerely, >>>>>> Justin >>>>>> >>>>>> >>>>>> On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: >>>>>> >>>>>>> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: >>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> I'm currently coding a finite element problem in PETSc. I'm >>>>>>>> computing all of the matrices by myself and would prefer to do it that way. >>>>>>>> I want to parallelize the assembly of the global matrices. This is a >>>>>>>> simplified version of the assembly routine (pseudocode): >>>>>>>> >>>>>>>> for (k = 0; k < nelements; ++k) >>>>>>>> { >>>>>>>> get_index(k,ie,je); /* ie and je are arrays that indicate where >>>>>>>> to place A_local */ >>>>>>>> compute_local_matrix(A_local,...); >>>>>>>> >>>>>>>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, >>>>>>>> ADD_VALUES); >>>>>>>> } >>>>>>>> >>>>>>>> This is for DG finite elements and I know the matrix easily be >>>>>>>> assembled in parallel. Even on my laptop, this would allow me to >>>>>>>> significantly speed up my code. The only problem is, I'm not sure how to do >>>>>>>> this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of >>>>>>>> MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might >>>>>>>> do this. >>>>>>>> >>>>>>> >>>>>>> 1) If you just split this loop across processes, it would work >>>>>>> immediately. However, that can be non-optimal in terms >>>>>>> of communication. >>>>>>> >>>>>>> 2) PETSc matrices are distributed by contiguous blocks of rows (see >>>>>>> manual section on matrices), so you would like >>>>>>> those row blocks to correspond roughly to your element blocks. >>>>>>> >>>>>>> 3) You will also have to preallocate, but that should be the same as >>>>>>> you do now for the sequential case, except you >>>>>>> check whether a column is inside the diagonal block. 
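Because the "check whether a column is inside the diagonal block" step above is easy to get wrong, here is a hypothetical sketch of just that counting part (C; column_indices_for_row is a made-up stand-in for whatever produces the column indices of one global row, and the 64-entry buffer is an arbitrary bound for the sketch). For a square matrix whose rows and columns are laid out the same way, a column is in the diagonal block exactly when its global index falls inside this rank's own row range.

#include <petscmat.h>

/* hypothetical helper: fills cols[] with the global column indices of a row
   and returns how many there are */
extern PetscInt column_indices_for_row(PetscInt row,PetscInt cols[]);

void CountNonzeros(PetscInt mlocal,PetscInt rstart,PetscInt d_nnz[],PetscInt o_nnz[])
{
  PetscInt i,c,ncols,cols[64];           /* 64 = assumed max nonzeros per row */
  PetscInt rend = rstart + mlocal;

  for (i=0; i<mlocal; i++) {
    d_nnz[i] = o_nnz[i] = 0;
    ncols = column_indices_for_row(rstart+i,cols);
    for (c=0; c<ncols; c++) {
      if (cols[c] >= rstart && cols[c] < rend) d_nnz[i]++;   /* diagonal block */
      else                                     o_nnz[i]++;   /* off-diagonal   */
    }
  }
  /* d_nnz/o_nnz then go into MatCreateAIJ or MatMPIAIJSetPreallocation */
}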
>>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Sincerely, >>>>>>>> Justin >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Apr 27 19:48:08 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Apr 2014 19:48:08 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: On Sun, Apr 27, 2014 at 7:22 PM, Justin Dong wrote: > Thanks for your help -- the matrix assembly is working correctly now. > > However, say I want to solve the linear problem in serial now. I have > A_global and b_global created using MatCreateAIJ and VecCreateMPI, but want > to solve A_global*x = b_global using PCLU. I have: > > KSPCreate(PETSC_COMM_SELF, &ksp); > KSPSetOperators(ksp, A_global, A_global, DIFFERENT_NONZERO_PATTERN); > KSPGetPC(ksp, &pc); > > PCSetType(pc, PCLU); > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > > KSPSolve(ksp, b_global, b_global); > > > Is there anyway to get this to work? I'd prefer using PCLU rightnow, and > besides, the speed-up from assembling in parallel is already significant. > I recommend solving in parallel (on the same communicator as the matrix). 
This is not too hard: 1) Reconfigure using $PETSC_DIR/$PETSC_ARCH/conf/reconfigure-$PETSC_ARCH.py --download-superlu_dist 2) Call KSPSetFromOptions(ksp) before solve 3) Run with -pc_type lu -pc_factor_mat_solver_package superlu_dist You can easily get your solution on one process if you want using http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecScatterCreateToZero.html Thanks, Matt > > On Sun, Apr 27, 2014 at 9:11 AM, Matthew Knepley wrote: > >> On Sun, Apr 27, 2014 at 9:05 AM, Justin Dong wrote: >> >>> When I try to invoke MatCreateMPIAIJ, I'm getting this warning: >>> >>> *main.c:1842:2: **warning: **implicit declaration of function >>> 'MatCreateMPIAIJ' is invalid in C99 [-Wimplicit-function-declaration]* >>> >>> MatCreateMPIAIJ(PETSC_COMM_SELF, NLoc, NLoc, NLoc*nElems, >>> NLoc*nElems, (ref+3)*NLoc, >>> >>> * ^* >>> >>> and this error: >>> >> The name was changed in the last release: >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateAIJ.html >> >> Thanks, >> >> Matt >> >> >>> Undefined symbols for architecture x86_64: >>> >>> "_MatCreateMPIAIJ", referenced from: >>> >>> _DG_SolveOil in main.o >>> >>> ld: symbol(s) not found for architecture x86_64 >>> >>> clang: error: linker command failed with exit code 1 (use -v to see >>> invocation) >>> >>> make: [main] Error 1 (ignored) >>> >>> /bin/rm -f -f main.o >>> >>> >>> This is how I'm calling MatCreateMPIAIJ: >>> >>> MatCreateMPIAIJ(PETSC_COMM_WORLD, nlocal, nlocal, nlocal*nelements, >>> nlocal*nelements, 3*nlocal, >>> >>> PETSC_NULL, 3*nlocal, PETSC_NULL, &A_global); >>> >>> >>> I'm able to run some simple MPI functions and use mpiexec, so I don't >>> think it's an issue with MPI itself. >>> Sincerely, >>> Justin >>> >>> >>> >>> On Sun, Apr 27, 2014 at 8:35 AM, Matthew Knepley wrote: >>> >>>> On Sun, Apr 27, 2014 at 7:47 AM, Justin Dong wrote: >>>> >>>>> I think I've got the right idea on how to divide up the for loop: >>>>> >>>>> MPI_Comm_size(MPI_COMM_WORLD,&numprocs); >>>>> MPI_Comm_rank(MPI_COMM_WORLD,&myid); >>>>> >>>>> mystart = (nelements / numprocs) * myid; >>>>> if (nelements % numprocs > myid) >>>>> { >>>>> mystart += myid; >>>>> myend = mystart + (nelements / numprocs) + 1; >>>>> } >>>>> else >>>>> { >>>>> mystart += nelements % numprocs; >>>>> myend = mystart + (nelements / numprocs); >>>>> } >>>>> >>>> >>>> We have a function that does exactly this: >>>> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscSplitOwnership.html >>>> >>>> >>>>> But then, do I do this? >>>>> >>>> >>>> Yes. >>>> >>>> >>>>> for (k = mystart+1; k < myend; ++k) >>>>> { >>>>> get_index(k,ie,je); /* ie and je are arrays that indicate where to >>>>> place A_local */ >>>>> compute_local_matrix(A_local,...); >>>>> >>>>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, >>>>> ADD_VALUES); >>>>> } >>>>> >>>>> The indices are global indices and I'm doing >>>>> >>>>> MatCreateSeqAIJ(PETSC_COMM_WORLD, nglobal, nglobal, 3*nlocal, >>>>> PETSC_NULL, &A_global); >>>>> >>>> >>>> This should be MatCreateMPIAIJ() for parallel execution. >>>> >>>> Matt >>>> >>>> to create the matrix. Running the program seems to give me multiple >>>>> errors, but mainly >>>>> >>>>> [3]PETSC ERROR: Object is in wrong state! >>>>> >>>>> [3]PETSC ERROR: Mat object's type is not set: Argument # 1! 
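A compact, untested sketch of steps 2) and 3) together with the VecScatterCreateToZero suggestion (C; error checking omitted; the four-argument KSPSetOperators matches the PETSc 3.4-era interface used elsewhere in this thread):

#include <petscksp.h>

void SolveAndGather(Mat A,Vec b,Vec x)
{
  KSP        ksp;
  Vec        xseq;
  VecScatter scat;

  KSPCreate(PETSC_COMM_WORLD,&ksp);            /* same communicator as A */
  KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);
  KSPSetFromOptions(ksp);                      /* picks up -pc_type lu
                                                  -pc_factor_mat_solver_package superlu_dist */
  KSPSolve(ksp,b,x);

  /* gather the distributed solution onto rank 0 if it is needed there */
  VecScatterCreateToZero(x,&scat,&xseq);
  VecScatterBegin(scat,x,xseq,INSERT_VALUES,SCATTER_FORWARD);
  VecScatterEnd(scat,x,xseq,INSERT_VALUES,SCATTER_FORWARD);
  /* ... use xseq on rank 0, which receives the full-length copy ... */
  VecScatterDestroy(&scat);
  VecDestroy(&xseq);
  KSPDestroy(&ksp);
}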
>>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Sun, Apr 27, 2014 at 6:54 AM, Matthew Knepley wrote: >>>>> >>>>>> On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong wrote: >>>>>> >>>>>>> Hi Matt, >>>>>>> >>>>>>> For option 1), that would be using MPI and not any special functions >>>>>>> in PETSc? I ask since I've never done parallel programming before. I tried >>>>>>> using OpenMP to parallelize that for loop but it resulted in some errors >>>>>>> and I didn't investigate further, but I'm assuming it's because each >>>>>>> process wasn't communicating properly with the others during MatSetValues? >>>>>>> >>>>>> >>>>>> Yes, using MPI, so each process owns a set of elements that it loops >>>>>> over. The Mat object manages the global >>>>>> values as long as you use global indices for the (row, column). There >>>>>> are of course many refinements for this, >>>>>> but I think the thing to do is get something working fast, and then >>>>>> optimize it. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Sincerely, >>>>>>> Justin >>>>>>> >>>>>>> >>>>>>> On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: >>>>>>> >>>>>>>> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: >>>>>>>> >>>>>>>>> Hi all, >>>>>>>>> >>>>>>>>> I'm currently coding a finite element problem in PETSc. I'm >>>>>>>>> computing all of the matrices by myself and would prefer to do it that way. >>>>>>>>> I want to parallelize the assembly of the global matrices. This is a >>>>>>>>> simplified version of the assembly routine (pseudocode): >>>>>>>>> >>>>>>>>> for (k = 0; k < nelements; ++k) >>>>>>>>> { >>>>>>>>> get_index(k,ie,je); /* ie and je are arrays that indicate >>>>>>>>> where to place A_local */ >>>>>>>>> compute_local_matrix(A_local,...); >>>>>>>>> >>>>>>>>> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, >>>>>>>>> ADD_VALUES); >>>>>>>>> } >>>>>>>>> >>>>>>>>> This is for DG finite elements and I know the matrix easily be >>>>>>>>> assembled in parallel. Even on my laptop, this would allow me to >>>>>>>>> significantly speed up my code. The only problem is, I'm not sure how to do >>>>>>>>> this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of >>>>>>>>> MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might >>>>>>>>> do this. >>>>>>>>> >>>>>>>> >>>>>>>> 1) If you just split this loop across processes, it would work >>>>>>>> immediately. However, that can be non-optimal in terms >>>>>>>> of communication. >>>>>>>> >>>>>>>> 2) PETSc matrices are distributed by contiguous blocks of rows (see >>>>>>>> manual section on matrices), so you would like >>>>>>>> those row blocks to correspond roughly to your element blocks. >>>>>>>> >>>>>>>> 3) You will also have to preallocate, but that should be the same >>>>>>>> as you do now for the sequential case, except you >>>>>>>> check whether a column is inside the diagonal block. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> Sincerely, >>>>>>>>> Justin >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. 
>>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Apr 27 21:44:12 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 27 Apr 2014 21:44:12 -0500 Subject: [petsc-users] Assembling a finite element matrix in parallel In-Reply-To: References: Message-ID: You can also just run with -pc_type redundant -redundant_pc_type lu . It will use sequential LU on the entire matrix to solve the linear systems but the rest of the code will be parallel. Barry On Apr 27, 2014, at 7:48 PM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 7:22 PM, Justin Dong wrote: > Thanks for your help -- the matrix assembly is working correctly now. > > However, say I want to solve the linear problem in serial now. I have A_global and b_global created using MatCreateAIJ and VecCreateMPI, but want to solve A_global*x = b_global using PCLU. I have: > > KSPCreate(PETSC_COMM_SELF, &ksp); > KSPSetOperators(ksp, A_global, A_global, DIFFERENT_NONZERO_PATTERN); > KSPGetPC(ksp, &pc); > > PCSetType(pc, PCLU); > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > > KSPSolve(ksp, b_global, b_global); > > > Is there anyway to get this to work? I'd prefer using PCLU rightnow, and besides, the speed-up from assembling in parallel is already significant. > > I recommend solving in parallel (on the same communicator as the matrix). 
This is not too hard: > > 1) Reconfigure using $PETSC_DIR/$PETSC_ARCH/conf/reconfigure-$PETSC_ARCH.py --download-superlu_dist > > 2) Call KSPSetFromOptions(ksp) before solve > > 3) Run with -pc_type lu -pc_factor_mat_solver_package superlu_dist > > You can easily get your solution on one process if you want using http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecScatterCreateToZero.html > > Thanks, > > Matt > > > On Sun, Apr 27, 2014 at 9:11 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 9:05 AM, Justin Dong wrote: > When I try to invoke MatCreateMPIAIJ, I'm getting this warning: > > main.c:1842:2: warning: implicit declaration of function 'MatCreateMPIAIJ' is invalid in C99 [-Wimplicit-function-declaration] > > MatCreateMPIAIJ(PETSC_COMM_SELF, NLoc, NLoc, NLoc*nElems, NLoc*nElems, (ref+3)*NLoc, > > ^ > > and this error: > > The name was changed in the last release: > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateAIJ.html > > Thanks, > > Matt > > Undefined symbols for architecture x86_64: > > "_MatCreateMPIAIJ", referenced from: > > _DG_SolveOil in main.o > > ld: symbol(s) not found for architecture x86_64 > > clang: error: linker command failed with exit code 1 (use -v to see invocation) > > make: [main] Error 1 (ignored) > > > /bin/rm -f -f main.o > > > > This is how I'm calling MatCreateMPIAIJ: > > MatCreateMPIAIJ(PETSC_COMM_WORLD, nlocal, nlocal, nlocal*nelements, nlocal*nelements, 3*nlocal, > > PETSC_NULL, 3*nlocal, PETSC_NULL, &A_global); > > > > I'm able to run some simple MPI functions and use mpiexec, so I don't think it's an issue with MPI itself. > > Sincerely, > Justin > > > > On Sun, Apr 27, 2014 at 8:35 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 7:47 AM, Justin Dong wrote: > I think I've got the right idea on how to divide up the for loop: > > MPI_Comm_size(MPI_COMM_WORLD,&numprocs); > MPI_Comm_rank(MPI_COMM_WORLD,&myid); > > mystart = (nelements / numprocs) * myid; > if (nelements % numprocs > myid) > { > mystart += myid; > myend = mystart + (nelements / numprocs) + 1; > } > else > { > mystart += nelements % numprocs; > myend = mystart + (nelements / numprocs); > } > > We have a function that does exactly this: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscSplitOwnership.html > > But then, do I do this? > > Yes. > > for (k = mystart+1; k < myend; ++k) > { > get_index(k,ie,je); /* ie and je are arrays that indicate where to place A_local */ > compute_local_matrix(A_local,...); > > MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); > } > > The indices are global indices and I'm doing > > MatCreateSeqAIJ(PETSC_COMM_WORLD, nglobal, nglobal, 3*nlocal, > PETSC_NULL, &A_global); > > This should be MatCreateMPIAIJ() for parallel execution. > > Matt > > to create the matrix. Running the program seems to give me multiple errors, but mainly > [3]PETSC ERROR: Object is in wrong state! > > [3]PETSC ERROR: Mat object's type is not set: Argument # 1! > > > > > > > On Sun, Apr 27, 2014 at 6:54 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong wrote: > Hi Matt, > > For option 1), that would be using MPI and not any special functions in PETSc? I ask since I've never done parallel programming before. I tried using OpenMP to parallelize that for loop but it resulted in some errors and I didn't investigate further, but I'm assuming it's because each process wasn't communicating properly with the others during MatSetValues? 
> > Yes, using MPI, so each process owns a set of elements that it loops over. The Mat object manages the global > values as long as you use global indices for the (row, column). There are of course many refinements for this, > but I think the thing to do is get something working fast, and then optimize it. > > Thanks, > > Matt > > Sincerely, > Justin > > > On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley wrote: > On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong wrote: > Hi all, > > I'm currently coding a finite element problem in PETSc. I'm computing all of the matrices by myself and would prefer to do it that way. I want to parallelize the assembly of the global matrices. This is a simplified version of the assembly routine (pseudocode): > > for (k = 0; k < nelements; ++k) > { > get_index(k,ie,je); /* ie and je are arrays that indicate where to place A_local */ > compute_local_matrix(A_local,...); > > MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES); > } > > This is for DG finite elements and I know the matrix easily be assembled in parallel. Even on my laptop, this would allow me to significantly speed up my code. The only problem is, I'm not sure how to do this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might do this. > > 1) If you just split this loop across processes, it would work immediately. However, that can be non-optimal in terms > of communication. > > 2) PETSc matrices are distributed by contiguous blocks of rows (see manual section on matrices), so you would like > those row blocks to correspond roughly to your element blocks. > > 3) You will also have to preallocate, but that should be the same as you do now for the sequential case, except you > check whether a column is inside the diagonal block. > > Thanks, > > Matt > > Sincerely, > Justin > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From bsmith at mcs.anl.gov Sun Apr 27 23:34:21 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 27 Apr 2014 23:34:21 -0500 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: References: Message-ID: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> I have run your code. 
I changed to use -snes_type newtonls and also -snes_mf_operator there is something wrong with your Jacobian: Without -snes_mf_operator 0 SNES Function norm 1.821611413735e+03 0 KSP Residual norm 1821.61 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 1 KSP Residual norm 0.000167024 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 2 KSP Residual norm 7.66595e-06 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 3 KSP Residual norm 4.4581e-07 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 4 KSP Residual norm 3.77537e-08 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 5 KSP Residual norm 2.20453e-09 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 6 KSP Residual norm 1.711e-10 Linear solve converged due to CONVERGED_RTOL iterations 6 with -snes_mf_operator 0 SNES Function norm 1.821611413735e+03 0 KSP Residual norm 1821.61 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 1 KSP Residual norm 1796.39 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 2 KSP Residual norm 1786.2 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 3 KSP Residual norm 1741.11 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 4 KSP Residual norm 1733.92 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 5 KSP Residual norm 1726.57 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 6 KSP Residual norm 1725.35 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 7 KSP Residual norm 1723.89 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 8 KSP Residual norm 1715.41 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 9 KSP Residual norm 1713.72 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 10 KSP Residual norm 1702.84 Linear solve converged due to CONVERGED_ITS iterations 1 ? This means your Jacobian is wrong. Your first order of business is to fix your Jacobian. I noticed in previous emails your discussion with Jed about switching to MatGetLocalSubMatrix() and using -snes_type test YOU NEED TO DO THIS. You will get no where with an incorrect Jacobian. You need to fix your Jacobian before you do anything else! No amount of other options or methods will help you with a wrong Jacobian! Once you have a correct Jacobian if you still have convergence problems let us know and we can make further suggestions. Barry On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe wrote: > Hi, > > In my simulation, nonlinear solve with the trust regtion method got stagnent after linear solve (see output below). Is it possible that the method goes to inifite loop? Is there any parameter to avoid this situation? 
> > 0 SNES Function norm 1.828728087153e+03 > 0 KSP Residual norm 91.2735 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 3 > 1 KSP Residual norm 3.42223 > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > Thank you in advance, > Nori From D.Lathouwers at tudelft.nl Mon Apr 28 04:26:01 2014 From: D.Lathouwers at tudelft.nl (Danny Lathouwers - TNW) Date: Mon, 28 Apr 2014 09:26:01 +0000 Subject: [petsc-users] Fortran 90 problem (basic level) Message-ID: <4E6B33F4128CED4DB307BA83146E9A64258E0424@SRV362.tudelft.net> Dear users, I encountered some basic problems when using PETSc with Fortran90. To illustrate the problem easiest I made a simple problem (attached). There is a petsc.f90 that contains a module with all PETSc specific coding (initializations, etc). I have a main with some easy declarations and calls. There also is a f90_kind for type definition (essentially for the 'double precision'). Note that I wrap the matrices in Fortran structures so that I can keep the code free from PETSc except petsc.f90. The file 'output' is what I get. The program fills two diagonal matrices of size 10. This seems to work fine. The call to get_info should print the number of rows and columns of a matrix: it thinks the number of columns is 0. Another routine mat_prod does matrix-matrix multiplication through MatMatMult. This seg faults. I also tried the above with all kinds of includes instead of the 'use'-manner. This gave the same results. It seems like the interfacing is not working as it should. Am I missing a 'use' statement or a compile option? strangely enough the matrix construction does work fine. I hope anyone might shed some light on this. Thank you. Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: f90_kind.f90 Type: application/octet-stream Size: 694 bytes Desc: f90_kind.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: main.f90 Type: application/octet-stream Size: 1134 bytes Desc: main.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc.f90 Type: application/octet-stream Size: 2555 bytes Desc: petsc.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Makefile Type: application/octet-stream Size: 682 bytes Desc: Makefile URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: output Type: application/octet-stream Size: 2943 bytes Desc: output URL: From bsmith at mcs.anl.gov Mon Apr 28 06:57:24 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 28 Apr 2014 06:57:24 -0500 Subject: [petsc-users] Fortran 90 problem (basic level) In-Reply-To: <4E6B33F4128CED4DB307BA83146E9A64258E0424@SRV362.tudelft.net> References: <4E6B33F4128CED4DB307BA83146E9A64258E0424@SRV362.tudelft.net> Message-ID: <8F918268-5E36-4F61-8312-95798D318738@mcs.anl.gov> call MatGetSize(petsc_matrix%A_PetSC,m,n) call MatMatMult(A,B,MAT_INITIAL_MATRIX,PETSC_DEFAULT_DOUBLE_PRECISION,C) missing the final ierr argument. On Apr 28, 2014, at 4:26 AM, Danny Lathouwers - TNW wrote: > Dear users, > > I encountered some basic problems when using PETSc with Fortran90. To illustrate the problem easiest I made a simple problem (attached). > There is a petsc.f90 that contains a module with all PETSc specific coding (initializations, etc). 
> I have a main with some easy declarations and calls. There also is a f90_kind for type definition (essentially for the 'double precision'). Note that I wrap the matrices in Fortran structures so that I can keep the code free from PETSc except petsc.f90. > The file 'output' is what I get. > > The program fills two diagonal matrices of size 10. This seems to work fine. > > The call to get_info should print the number of rows and columns of a matrix: it thinks the number of columns is 0. > > Another routine mat_prod does matrix-matrix multiplication through MatMatMult. This seg faults. > > I also tried the above with all kinds of includes instead of the 'use'-manner. This gave the same results. > > It seems like the interfacing is not working as it should. Am I missing a 'use' statement or a compile option? > strangely enough the matrix construction does work fine. > > I hope anyone might shed some light on this. > > Thank you. > Danny > From D.Lathouwers at tudelft.nl Mon Apr 28 07:48:19 2014 From: D.Lathouwers at tudelft.nl (Danny Lathouwers - TNW) Date: Mon, 28 Apr 2014 12:48:19 +0000 Subject: [petsc-users] Fortran 90 problem (basic level) In-Reply-To: <8F918268-5E36-4F61-8312-95798D318738@mcs.anl.gov> References: <4E6B33F4128CED4DB307BA83146E9A64258E0424@SRV362.tudelft.net> <8F918268-5E36-4F61-8312-95798D318738@mcs.anl.gov> Message-ID: <4E6B33F4128CED4DB307BA83146E9A64258E09D7@SRV362.tudelft.net> Thanks Barry, Works now. I missed that from reading the doc on the internet. Will try to remember that one. Danny. -----Original Message----- From: Barry Smith [mailto:bsmith at mcs.anl.gov] Sent: Monday 28 April 2014 13:57 To: Danny Lathouwers - TNW Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Fortran 90 problem (basic level) call MatGetSize(petsc_matrix%A_PetSC,m,n) call MatMatMult(A,B,MAT_INITIAL_MATRIX,PETSC_DEFAULT_DOUBLE_PRECISION,C) missing the final ierr argument. On Apr 28, 2014, at 4:26 AM, Danny Lathouwers - TNW wrote: > Dear users, > > I encountered some basic problems when using PETSc with Fortran90. To illustrate the problem easiest I made a simple problem (attached). > There is a petsc.f90 that contains a module with all PETSc specific coding (initializations, etc). > I have a main with some easy declarations and calls. There also is a f90_kind for type definition (essentially for the 'double precision'). Note that I wrap the matrices in Fortran structures so that I can keep the code free from PETSc except petsc.f90. > The file 'output' is what I get. > > The program fills two diagonal matrices of size 10. This seems to work fine. > > The call to get_info should print the number of rows and columns of a matrix: it thinks the number of columns is 0. > > Another routine mat_prod does matrix-matrix multiplication through MatMatMult. This seg faults. > > I also tried the above with all kinds of includes instead of the 'use'-manner. This gave the same results. > > It seems like the interfacing is not working as it should. Am I missing a 'use' statement or a compile option? > strangely enough the matrix construction does work fine. > > I hope anyone might shed some light on this. > > Thank you. > Danny > From norihiro.w at gmail.com Mon Apr 28 09:14:54 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Mon, 28 Apr 2014 16:14:54 +0200 Subject: [petsc-users] infinite loop with NEWTONTR?
In-Reply-To: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> References: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> Message-ID: I cannot surely say my Jacobian for this particular problem is correct, as I have not checked it. For a smaller problem, I've already checked its correctness using -snes_type test or -snes_compare_explicit (but linear solve and nonlinear solve with FD one need a few more iterations than with my Jacobian). To make it sure, now I started -snes_type test for the problem and will update you once it finished. By the way, I'm waiting the calculation for more than three hours now. Is it usual for a large problem (>1e6 dof) or is there something wrong? On Mon, Apr 28, 2014 at 6:34 AM, Barry Smith wrote: > > I have run your code. I changed to use -snes_type newtonls and also > -snes_mf_operator there is something wrong with your Jacobian: > > Without -snes_mf_operator > 0 SNES Function norm 1.821611413735e+03 > 0 KSP Residual norm 1821.61 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 KSP Residual norm 0.000167024 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 2 KSP Residual norm 7.66595e-06 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 3 KSP Residual norm 4.4581e-07 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 4 KSP Residual norm 3.77537e-08 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 5 KSP Residual norm 2.20453e-09 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 6 KSP Residual norm 1.711e-10 > Linear solve converged due to CONVERGED_RTOL iterations 6 > > with -snes_mf_operator > > 0 SNES Function norm 1.821611413735e+03 > 0 KSP Residual norm 1821.61 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 KSP Residual norm 1796.39 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 2 KSP Residual norm 1786.2 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 3 KSP Residual norm 1741.11 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 4 KSP Residual norm 1733.92 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 5 KSP Residual norm 1726.57 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 6 KSP Residual norm 1725.35 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 7 KSP Residual norm 1723.89 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 8 KSP Residual norm 1715.41 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 9 KSP Residual norm 1713.72 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 10 KSP Residual norm 1702.84 > 
Linear solve converged due to CONVERGED_ITS iterations 1 > > ? > > This means your Jacobian is wrong. Your first order of business is to > fix your Jacobian. I noticed in previous emails your discussion with Jed > about switching to MatGetLocalSubMatrix() and using -snes_type test YOU > NEED TO DO THIS. You will get no where with an incorrect Jacobian. You need > to fix your Jacobian before you do anything else! No amount of other > options or methods will help you with a wrong Jacobian! Once you have a > correct Jacobian if you still have convergence problems let us know and we > can make further suggestions. > > Barry > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe > wrote: > > > Hi, > > > > In my simulation, nonlinear solve with the trust regtion method got > stagnent after linear solve (see output below). Is it possible that the > method goes to inifite loop? Is there any parameter to avoid this situation? > > > > 0 SNES Function norm 1.828728087153e+03 > > 0 KSP Residual norm 91.2735 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > 1 KSP Residual norm 3.42223 > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > Thank you in advance, > > Nori > > -- Norihiro Watanabe -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 28 10:59:37 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 28 Apr 2014 10:59:37 -0500 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: References: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> Message-ID: It will take a very long time On Apr 28, 2014, at 9:14 AM, Norihiro Watanabe wrote: > I cannot surely say my Jacobian for this particular problem is correct, as I have not checked it. For a smaller problem, I've already checked its correctness using -snes_type test or -snes_compare_explicit (but linear solve and nonlinear solve with FD one need a few more iterations than with my Jacobian). To make it sure, now I started -snes_type test for the problem and will update you once it finished. By the way, I'm waiting the calculation for more than three hours now. Is it usual for a large problem (>1e6 dof) or is there something wrong? > > > > > > On Mon, Apr 28, 2014 at 6:34 AM, Barry Smith wrote: > > I have run your code. 
I changed to use -snes_type newtonls and also -snes_mf_operator there is something wrong with your Jacobian: > > Without -snes_mf_operator > 0 SNES Function norm 1.821611413735e+03 > 0 KSP Residual norm 1821.61 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 KSP Residual norm 0.000167024 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 2 KSP Residual norm 7.66595e-06 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 3 KSP Residual norm 4.4581e-07 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 4 KSP Residual norm 3.77537e-08 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 5 KSP Residual norm 2.20453e-09 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 6 KSP Residual norm 1.711e-10 > Linear solve converged due to CONVERGED_RTOL iterations 6 > > with -snes_mf_operator > > 0 SNES Function norm 1.821611413735e+03 > 0 KSP Residual norm 1821.61 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 KSP Residual norm 1796.39 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 2 KSP Residual norm 1786.2 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 3 KSP Residual norm 1741.11 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 4 KSP Residual norm 1733.92 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 5 KSP Residual norm 1726.57 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 6 KSP Residual norm 1725.35 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 7 KSP Residual norm 1723.89 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 8 KSP Residual norm 1715.41 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 9 KSP Residual norm 1713.72 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 10 KSP Residual norm 1702.84 > Linear solve converged due to CONVERGED_ITS iterations 1 > > ? > > This means your Jacobian is wrong. Your first order of business is to fix your Jacobian. I noticed in previous emails your discussion with Jed about switching to MatGetLocalSubMatrix() and using -snes_type test YOU NEED TO DO THIS. You will get no where with an incorrect Jacobian. You need to fix your Jacobian before you do anything else! No amount of other options or methods will help you with a wrong Jacobian! Once you have a correct Jacobian if you still have convergence problems let us know and we can make further suggestions. 
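A minimal sketch of a SNES setup that exposes those checks through the options database; the routine name SetupNonlinearSolver() is illustrative, and FormFunction() and FormJacobian() stand in for the user's residual and Jacobian routines (not shown here):

  #include <petscsnes.h>

  extern PetscErrorCode FormFunction(SNES,Vec,Vec,void*);
  extern PetscErrorCode FormJacobian(SNES,Vec,Mat*,Mat*,MatStructure*,void*);

  PetscErrorCode SetupNonlinearSolver(MPI_Comm comm,Vec r,Mat J,SNES *snes)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = SNESCreate(comm,snes);CHKERRQ(ierr);
    ierr = SNESSetFunction(*snes,r,FormFunction,PETSC_NULL);CHKERRQ(ierr);
    ierr = SNESSetJacobian(*snes,J,J,FormJacobian,PETSC_NULL);CHKERRQ(ierr);
    /* Picks up -snes_type test, -snes_type newtonls, -snes_mf_operator,
       -snes_monitor, -ksp_monitor_true_residual, ... from the command line. */
    ierr = SNESSetFromOptions(*snes);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

With this in place, -snes_type test compares the hand-coded Jacobian against a finite-difference one (it approximates the Jacobian column by column, which is why it is very slow at more than 1e6 dof, so shrink the problem first), and -snes_mf_operator applies the Jacobian matrix-free from the residual while preconditioning with the assembled matrix, which is the comparison shown in the logs above.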
> > Barry > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe wrote: > > > Hi, > > > > In my simulation, nonlinear solve with the trust regtion method got stagnent after linear solve (see output below). Is it possible that the method goes to inifite loop? Is there any parameter to avoid this situation? > > > > 0 SNES Function norm 1.828728087153e+03 > > 0 KSP Residual norm 91.2735 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > 1 KSP Residual norm 3.42223 > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > Thank you in advance, > > Nori > > > > > -- > Norihiro Watanabe From steve.ndengue at gmail.com Mon Apr 28 11:56:08 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Mon, 28 Apr 2014 11:56:08 -0500 Subject: [petsc-users] Increasing the number of iteration (SLEPC) Message-ID: <535E8828.6040501@gmail.com> Dear all, How can one increase the number of iterations using either the Krylov, Arnoldi or Lanczos method? I think that more iterations (sometimes) lead to convergence of more eigenvalues? Or is there a way to play with these to get to convergence? Sincerely, -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Mon Apr 28 11:58:27 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 28 Apr 2014 18:58:27 +0200 Subject: [petsc-users] Increasing the number of iteration (SLEPC) In-Reply-To: <535E8828.6040501@gmail.com> References: <535E8828.6040501@gmail.com> Message-ID: El 28/04/2014, a las 18:56, Steve Ndengue escribi?: > Dear all, > > How can one increase the number of iterations using either the Krylov, Arnoldi or Lanczos method? > I think that more iterations (sometimes) lead to convergence of more eigenvalues? Or is there a way to play with these to get to convergence? > > Sincerely, > > -- > Steve > EPSSetTolerances: http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetTolerances.html You can also play with subspace dimensions: http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetDimensions.html Jose From steve.ndengue at gmail.com Mon Apr 28 12:19:30 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Mon, 28 Apr 2014 12:19:30 -0500 Subject: [petsc-users] Printing solutions LAPACK (SLEPC) Message-ID: <535E8DA2.2010805@gmail.com> Dear all, Is it possible to ask that only the "nev" selected from **EPSSETDimensions** be printed when using the Lapack method? Would the code work faster if only these values were to be computed instead of all as it does by default? Sincerely, -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Mon Apr 28 12:23:54 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 28 Apr 2014 19:23:54 +0200 Subject: [petsc-users] Printing solutions LAPACK (SLEPC) In-Reply-To: <535E8DA2.2010805@gmail.com> References: <535E8DA2.2010805@gmail.com> Message-ID: <92B23499-80E5-4FD9-A115-B1A1E099DC9F@dsic.upv.es> El 28/04/2014, a las 19:19, Steve Ndengue escribi?: > Dear all, > > Is it possible to ask that only the "nev" selected from **EPSSETDimensions** be printed when using the Lapack method? > Would the code work faster if only these values were to be computed instead of all as it does by default? 
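Following the EPSSetTolerances()/EPSSetDimensions() pointers above, a sketch along the lines of ex1.c, assuming the EPS object and its operators are already set up; the routine name SolveAndReport(), the tolerance, the iteration limit and the printing loop are only illustrative:

  #include <slepceps.h>

  PetscErrorCode SolveAndReport(EPS eps,PetscInt nev)
  {
    PetscErrorCode ierr;
    PetscInt       i,nconv;
    PetscScalar    kr,ki;

    PetscFunctionBegin;
    /* Ask for nev eigenpairs and let SLEPc choose ncv and mpd; a larger ncv
       (say 4*nev) often helps more eigenvalues converge. */
    ierr = EPSSetDimensions(eps,nev,PETSC_DECIDE,PETSC_DECIDE);CHKERRQ(ierr);
    /* Tolerance and maximum number of outer iterations; can also be set with
       -eps_tol and -eps_max_it at run time. */
    ierr = EPSSetTolerances(eps,1.e-8,2000);CHKERRQ(ierr);
    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
    ierr = EPSSolve(eps);CHKERRQ(ierr);

    ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);
    /* Print only the first nev converged eigenvalues, even if the backend
       (e.g. EPSLAPACK) computed all of them internally. */
    for (i=0; i<PetscMin(nev,nconv); i++) {
      ierr = EPSGetEigenpair(eps,i,&kr,&ki,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD,"  eigenvalue %d: %g\n",(int)i,(double)PetscRealPart(kr));CHKERRQ(ierr);
    }
    PetscFunctionReturn(0);
  }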
> > Sincerely, > > -- > Steve > You can print the solutions you want, as in ex1.c http://www.grycap.upv.es/slepc/documentation/current/src/eps/examples/tutorials/ex1.c.html EPSLAPACK is intended for debugging only, it always computes all eigenpairs. Jose From bsmith at mcs.anl.gov Mon Apr 28 13:00:25 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 28 Apr 2014 13:00:25 -0500 Subject: [petsc-users] [petsc-maint] Iterative Solver Problem In-Reply-To: References: <20140428111905.14977gpykpia4zs4@webtools.cc.umanitoba.ca> Message-ID: On Apr 28, 2014, at 12:59 PM, Barry Smith wrote: > > First try a much tighter tolerance on the linear solver. Use -ksp_rtol 1.e-12 > > I don?t fully understand. Is the coupled system nonlinear? Are you solving a nonlinear system, how are you doing that since you seem to be only solving a single linear system? Does the linear system involve all unknowns in the fluid and air? > > Barry > > > > On Apr 28, 2014, at 11:19 AM, Foad Hassaninejadfarahani wrote: > >> Hello PETSc team; >> >> The PETSc setup in my code is working now. I have issues with using the iterative solver instead of direct solver. >> >> I am solving a 2D, two-phase flow. Two fluids (air and water) flow into a channel and there is interaction between two phases. I am solving for the velocities in x and y directions, pressure and two scalars. They are all coupled together. I am looking for the steady-state solution. Since there is interface between the phases which needs updating, there are many iterations to reach the steady-state solution. "A" is a nine-banded non-symmetric matrix and each node has five unknowns. I am storing the non-zero coefficients and their locations in three separate vectors. >> >> I started using the direct solver. Superlu works fine and gives me good results compared to the previous works. However it is not cheap and applicable for fine grids. But, the iterative solver did not work and here is what I did: >> >> I got the converged solution by using Superlu. After that I restarted from the converged solution and did one iteration using -pc_type lu -pc_factor_mat_solver_package superlu_dist -log_summary. Again, it gave me the same converged solution. 
>> >> After that I started from the converged solution once more and this time I tried different combinations of iterative solvers and preconditions like the followings: >> -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type lu ksp_monitor_true_residual -ksp_converged_reason -ksp_view -log_summary >> >> and here is the report: >> Linear solve converged due to CONVERGED_RTOL iterations 41 >> KSP Object: 8 MPI processes >> type: gmres >> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 8 MPI processes >> type: asm >> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >> Additive Schwarz: restriction/interpolation type - RESTRICT >> Local solve is same for all blocks, in the following KSP and PC objects: >> KSP Object: (sub_) 1 MPI processes >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using NONE norm type for convergence test >> PC Object: (sub_) 1 MPI processes >> type: lu >> LU: out-of-place factorization >> tolerance for zero pivot 1e-12 >> matrix ordering: nd >> factor fill ratio given 5, needed 3.70575 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> package used to perform factorization: petsc >> total: nonzeros=877150, allocated nonzeros=877150 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> total: nonzeros=236700, allocated nonzeros=236700 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 8 MPI processes >> type: mpiaij >> rows=41000, cols=41000 >> total: nonzeros=1817800, allocated nonzeros=2555700 >> total number of mallocs used during MatSetValues calls =121180 >> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >> >> But, the results are far from the converged solution. 
For example two reference nodes for the pressure are compared: >> >> Based on Superlu >> Channel Inlet pressure (MIXTURE): 0.38890D-01 >> Channel Inlet pressure (LIQUID): 0.38416D-01 >> >> Based on Gmres >> Channel Inlet pressure (MIXTURE): -0.87214D+00 >> Channel Inlet pressure (LIQUID): -0.87301D+00 >> >> >> I also tried this: >> -ksp_type gcr -pc_type asm -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -log_summary >> >> and here is the report: >> 0 KSP unpreconditioned resid norm 2.248340888101e+05 true resid norm 2.248340888101e+05 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 4.900010460179e+04 true resid norm 4.900010460179e+04 ||r(i)||/||b|| 2.179389471637e-01 >> 2 KSP unpreconditioned resid norm 4.267761572746e+04 true resid norm 4.267761572746e+04 ||r(i)||/||b|| 1.898182608933e-01 >> 3 KSP unpreconditioned resid norm 2.041242251471e+03 true resid norm 2.041242251471e+03 ||r(i)||/||b|| 9.078882398457e-03 >> 4 KSP unpreconditioned resid norm 1.852885420564e+03 true resid norm 1.852885420564e+03 ||r(i)||/||b|| 8.241123178296e-03 >> 5 KSP unpreconditioned resid norm 1.748965594395e+02 true resid norm 1.748965594395e+02 ||r(i)||/||b|| 7.778916460804e-04 >> 6 KSP unpreconditioned resid norm 5.664539353996e+01 true resid norm 5.664539353996e+01 ||r(i)||/||b|| 2.519430831852e-04 >> 7 KSP unpreconditioned resid norm 3.607535692806e+01 true resid norm 3.607535692806e+01 ||r(i)||/||b|| 1.604532351788e-04 >> 8 KSP unpreconditioned resid norm 1.041501303366e+01 true resid norm 1.041501303366e+01 ||r(i)||/||b|| 4.632310468924e-05 >> 9 KSP unpreconditioned resid norm 3.089920380322e+00 true resid norm 3.089920380322e+00 ||r(i)||/||b|| 1.374311340720e-05 >> 10 KSP unpreconditioned resid norm 1.456883209806e+00 true resid norm 1.456883209806e+00 ||r(i)||/||b|| 6.479814593583e-06 >> 11 KSP unpreconditioned resid norm 5.566902714391e-01 true resid norm 5.566902714391e-01 ||r(i)||/||b|| 2.476004748147e-06 >> 12 KSP unpreconditioned resid norm 2.403913756663e-01 true resid norm 2.403913756663e-01 ||r(i)||/||b|| 1.069194520006e-06 >> 13 KSP unpreconditioned resid norm 1.650435118839e-01 true resid norm 1.650435118839e-01 ||r(i)||/||b|| 7.340680088032e-07 >> Linear solve converged due to CONVERGED_RTOL iterations 13 >> KSP Object: 8 MPI processes >> type: gcr >> GCR: restart = 30 >> GCR: restarts performed = 1 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >> right preconditioning >> diagonally scaled system >> using UNPRECONDITIONED norm type for convergence test >> PC Object: 8 MPI processes >> type: asm >> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >> Additive Schwarz: restriction/interpolation type - RESTRICT >> Local solve is same for all blocks, in the following KSP and PC objects: >> KSP Object: (sub_) 1 MPI processes >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using NONE norm type for convergence test >> PC Object: (sub_) 1 MPI processes >> type: ilu >> ILU: out-of-place factorization >> 0 levels of fill >> tolerance for zero pivot 1e-12 >> using diagonal shift to prevent zero pivot >> matrix ordering: natural >> factor fill ratio given 1, needed 1 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> package used to perform factorization: petsc >> total: 
nonzeros=236700, allocated nonzeros=236700 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> total: nonzeros=236700, allocated nonzeros=236700 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 8 MPI processes >> type: mpiaij >> rows=41000, cols=41000 >> total: nonzeros=1817800, allocated nonzeros=2555700 >> total number of mallocs used during MatSetValues calls =121180 >> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >> >> Channel Inlet pressure (MIXTURE): -0.90733D+00 >> Channel Inlet pressure (LIQUID): -0.10118D+01 >> >> >> As you may see these are complete different results which are not close to the converged solution. >> >> Since, I want to have fine grids I need to use iterative solver. I wonder if I am missing something or using wrong solver/precondition/option. I would appreciate if you could help me (like always). >> >> -- >> With Best Regards; >> Foad >> >> >> >> > From umhassa5 at cc.umanitoba.ca Mon Apr 28 13:21:35 2014 From: umhassa5 at cc.umanitoba.ca (Foad Hassaninejadfarahani) Date: Mon, 28 Apr 2014 13:21:35 -0500 Subject: [petsc-users] [petsc-maint] Iterative Solver Problem In-Reply-To: References: <20140428111905.14977gpykpia4zs4@webtools.cc.umanitoba.ca> Message-ID: <20140428132135.93114hgxsqjp4lrk@webtools.cc.umanitoba.ca> Hello Again; I used -ksp_rtol 1.e-12 and it took way way longer to get the result for one iteration and it did not converge: Linear solve did not converge due to DIVERGED_ITS iterations 10000 KSP Object: 8 MPI processes type: gmres GMRES: restart=300, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-12, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: asm Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 Additive Schwarz: restriction/interpolation type - RESTRICT Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 1e-12 matrix ordering: nd factor fill ratio given 5, needed 3.70575 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=5630, cols=5630 package used to perform factorization: petsc total: nonzeros=877150, allocated nonzeros=877150 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1126 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=5630, cols=5630 total: nonzeros=236700, allocated nonzeros=236700 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1126 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=41000, cols=41000 total: nonzeros=1817800, allocated nonzeros=2555700 total number of mallocs used 
during MatSetValues calls =121180 using I-node (on process 0) routines: found 1025 nodes, limit used is 5 Well, let me clear everything. I am solving the whole system (air and water) coupled at once. Although originally the system is not linear, but I linearized the equations, so I have some lagged terms. In addition the interface (between two phases) location is wrong at the beginning and should be corrected in each iteration after getting the solution. Therefore, I solve the whole domain, move the interface and again solve the whole domain. This should continue until the interface movement becomes from the order of 1E-12. My problem is after getting the converged solution. Restarting from the converged solution, if I use Superlu, it gives me back the converged solution and stops after one iteration. But, if I use any iterative solver, it does not give me back the converged solution and starts moving the interface cause the wrong solution ask for new interface location. This leads to oscillation for ever and for some cases divergence. -- With Best Regards; Foad Quoting Barry Smith : > > On Apr 28, 2014, at 12:59 PM, Barry Smith wrote: > >> >> First try a much tighter tolerance on the linear solver. Use >> -ksp_rtol 1.e-12 >> >> I don?t fully understand. Is the coupled system nonlinear? Are you >> solving a nonlinear system, how are you doing that since you seem >> to be only solving a single linear system? Does the linear system >> involve all unknowns in the fluid and air? >> >> Barry >> >> >> >> On Apr 28, 2014, at 11:19 AM, Foad Hassaninejadfarahani >> wrote: >> >>> Hello PETSc team; >>> >>> The PETSc setup in my code is working now. I have issues with >>> using the iterative solver instead of direct solver. >>> >>> I am solving a 2D, two-phase flow. Two fluids (air and water) flow >>> into a channel and there is interaction between two phases. I am >>> solving for the velocities in x and y directions, pressure and two >>> scalars. They are all coupled together. I am looking for the >>> steady-state solution. Since there is interface between the phases >>> which needs updating, there are many iterations to reach the >>> steady-state solution. "A" is a nine-banded non-symmetric matrix >>> and each node has five unknowns. I am storing the non-zero >>> coefficients and their locations in three separate vectors. >>> >>> I started using the direct solver. Superlu works fine and gives me >>> good results compared to the previous works. However it is not >>> cheap and applicable for fine grids. But, the iterative solver did >>> not work and here is what I did: >>> >>> I got the converged solution by using Superlu. After that I >>> restarted from the converged solution and did one iteration using >>> -pc_type lu -pc_factor_mat_solver_package superlu_dist >>> -log_summary. Again, it gave me the same converged solution. 
>>> >>> After that I started from the converged solution once more and >>> this time I tried different combinations of iterative solvers and >>> preconditions like the followings: >>> -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type >>> lu ksp_monitor_true_residual -ksp_converged_reason -ksp_view >>> -log_summary >>> >>> and here is the report: >>> Linear solve converged due to CONVERGED_RTOL iterations 41 >>> KSP Object: 8 MPI processes >>> type: gmres >>> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt >>> Orthogonalization with no iterative refinement >>> GMRES: happy breakdown tolerance 1e-30 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using PRECONDITIONED norm type for convergence test >>> PC Object: 8 MPI processes >>> type: asm >>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>> Additive Schwarz: restriction/interpolation type - RESTRICT >>> Local solve is same for all blocks, in the following KSP and PC objects: >>> KSP Object: (sub_) 1 MPI processes >>> type: preonly >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using NONE norm type for convergence test >>> PC Object: (sub_) 1 MPI processes >>> type: lu >>> LU: out-of-place factorization >>> tolerance for zero pivot 1e-12 >>> matrix ordering: nd >>> factor fill ratio given 5, needed 3.70575 >>> Factored matrix follows: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=5630, cols=5630 >>> package used to perform factorization: petsc >>> total: nonzeros=877150, allocated nonzeros=877150 >>> total number of mallocs used during MatSetValues calls =0 >>> using I-node routines: found 1126 nodes, limit used is 5 >>> linear system matrix = precond matrix: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=5630, cols=5630 >>> total: nonzeros=236700, allocated nonzeros=236700 >>> total number of mallocs used during MatSetValues calls =0 >>> using I-node routines: found 1126 nodes, limit used is 5 >>> linear system matrix = precond matrix: >>> Matrix Object: 8 MPI processes >>> type: mpiaij >>> rows=41000, cols=41000 >>> total: nonzeros=1817800, allocated nonzeros=2555700 >>> total number of mallocs used during MatSetValues calls =121180 >>> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >>> >>> But, the results are far from the converged solution. 
For example >>> two reference nodes for the pressure are compared: >>> >>> Based on Superlu >>> Channel Inlet pressure (MIXTURE): 0.38890D-01 >>> Channel Inlet pressure (LIQUID): 0.38416D-01 >>> >>> Based on Gmres >>> Channel Inlet pressure (MIXTURE): -0.87214D+00 >>> Channel Inlet pressure (LIQUID): -0.87301D+00 >>> >>> >>> I also tried this: >>> -ksp_type gcr -pc_type asm -ksp_diagonal_scale >>> -ksp_diagonal_scale_fix -ksp_monitor_true_residual >>> -ksp_converged_reason -ksp_view -log_summary >>> >>> and here is the report: >>> 0 KSP unpreconditioned resid norm 2.248340888101e+05 true resid >>> norm 2.248340888101e+05 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 4.900010460179e+04 true resid >>> norm 4.900010460179e+04 ||r(i)||/||b|| 2.179389471637e-01 >>> 2 KSP unpreconditioned resid norm 4.267761572746e+04 true resid >>> norm 4.267761572746e+04 ||r(i)||/||b|| 1.898182608933e-01 >>> 3 KSP unpreconditioned resid norm 2.041242251471e+03 true resid >>> norm 2.041242251471e+03 ||r(i)||/||b|| 9.078882398457e-03 >>> 4 KSP unpreconditioned resid norm 1.852885420564e+03 true resid >>> norm 1.852885420564e+03 ||r(i)||/||b|| 8.241123178296e-03 >>> 5 KSP unpreconditioned resid norm 1.748965594395e+02 true resid >>> norm 1.748965594395e+02 ||r(i)||/||b|| 7.778916460804e-04 >>> 6 KSP unpreconditioned resid norm 5.664539353996e+01 true resid >>> norm 5.664539353996e+01 ||r(i)||/||b|| 2.519430831852e-04 >>> 7 KSP unpreconditioned resid norm 3.607535692806e+01 true resid >>> norm 3.607535692806e+01 ||r(i)||/||b|| 1.604532351788e-04 >>> 8 KSP unpreconditioned resid norm 1.041501303366e+01 true resid >>> norm 1.041501303366e+01 ||r(i)||/||b|| 4.632310468924e-05 >>> 9 KSP unpreconditioned resid norm 3.089920380322e+00 true resid >>> norm 3.089920380322e+00 ||r(i)||/||b|| 1.374311340720e-05 >>> 10 KSP unpreconditioned resid norm 1.456883209806e+00 true resid >>> norm 1.456883209806e+00 ||r(i)||/||b|| 6.479814593583e-06 >>> 11 KSP unpreconditioned resid norm 5.566902714391e-01 true resid >>> norm 5.566902714391e-01 ||r(i)||/||b|| 2.476004748147e-06 >>> 12 KSP unpreconditioned resid norm 2.403913756663e-01 true resid >>> norm 2.403913756663e-01 ||r(i)||/||b|| 1.069194520006e-06 >>> 13 KSP unpreconditioned resid norm 1.650435118839e-01 true resid >>> norm 1.650435118839e-01 ||r(i)||/||b|| 7.340680088032e-07 >>> Linear solve converged due to CONVERGED_RTOL iterations 13 >>> KSP Object: 8 MPI processes >>> type: gcr >>> GCR: restart = 30 >>> GCR: restarts performed = 1 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>> right preconditioning >>> diagonally scaled system >>> using UNPRECONDITIONED norm type for convergence test >>> PC Object: 8 MPI processes >>> type: asm >>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>> Additive Schwarz: restriction/interpolation type - RESTRICT >>> Local solve is same for all blocks, in the following KSP and PC objects: >>> KSP Object: (sub_) 1 MPI processes >>> type: preonly >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using NONE norm type for convergence test >>> PC Object: (sub_) 1 MPI processes >>> type: ilu >>> ILU: out-of-place factorization >>> 0 levels of fill >>> tolerance for zero pivot 1e-12 >>> using diagonal shift to prevent zero pivot >>> matrix ordering: natural >>> factor fill ratio given 1, needed 1 >>> Factored matrix follows: >>> Matrix 
Object: 1 MPI processes >>> type: seqaij >>> rows=5630, cols=5630 >>> package used to perform factorization: petsc >>> total: nonzeros=236700, allocated nonzeros=236700 >>> total number of mallocs used during MatSetValues calls =0 >>> using I-node routines: found 1126 nodes, limit used is 5 >>> linear system matrix = precond matrix: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=5630, cols=5630 >>> total: nonzeros=236700, allocated nonzeros=236700 >>> total number of mallocs used during MatSetValues calls =0 >>> using I-node routines: found 1126 nodes, limit used is 5 >>> linear system matrix = precond matrix: >>> Matrix Object: 8 MPI processes >>> type: mpiaij >>> rows=41000, cols=41000 >>> total: nonzeros=1817800, allocated nonzeros=2555700 >>> total number of mallocs used during MatSetValues calls =121180 >>> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >>> >>> Channel Inlet pressure (MIXTURE): -0.90733D+00 >>> Channel Inlet pressure (LIQUID): -0.10118D+01 >>> >>> >>> As you may see these are complete different results which are not >>> close to the converged solution. >>> >>> Since, I want to have fine grids I need to use iterative solver. I >>> wonder if I am missing something or using wrong >>> solver/precondition/option. I would appreciate if you could help >>> me (like always). >>> >>> -- >>> With Best Regards; >>> Foad >>> >>> >>> >>> >> > > > From bsmith at mcs.anl.gov Mon Apr 28 13:33:39 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 28 Apr 2014 13:33:39 -0500 Subject: [petsc-users] [petsc-maint] Iterative Solver Problem In-Reply-To: <20140428132135.93114hgxsqjp4lrk@webtools.cc.umanitoba.ca> References: <20140428111905.14977gpykpia4zs4@webtools.cc.umanitoba.ca> <20140428132135.93114hgxsqjp4lrk@webtools.cc.umanitoba.ca> Message-ID: Please run with the additional options -ksp_max_it 500 -ksp_gmres_restart 500 -ksp_monitor_true_residual -ksp_monitor_singular_value and send back all the output (that would include the 500 residual norms as it tries to converge.) 
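For reference, a sketch of a solver setup matching the options under discussion (GMRES with a large restart, ASM with LU subdomain solves, a tight relative tolerance); the routine name SolveCoupledSystem() and the arguments A, b, x are illustrative stand-ins for the already-assembled coupled system:

  #include <petscksp.h>

  PetscErrorCode SolveCoupledSystem(Mat A,Vec b,Vec x)
  {
    PetscErrorCode ierr;
    KSP            ksp;
    PC             pc;

    PetscFunctionBegin;
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);
    ierr = KSPGMRESSetRestart(ksp,500);CHKERRQ(ierr);                 /* -ksp_gmres_restart 500 */
    ierr = KSPSetTolerances(ksp,1.e-12,PETSC_DEFAULT,PETSC_DEFAULT,500);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCASM);CHKERRQ(ierr);                         /* -pc_type asm */
    /* -sub_pc_type lu, -ksp_monitor_true_residual and -ksp_monitor_singular_value
       (as requested above) are picked up here from the command line. */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

The singular-value monitor reports estimates of the extreme singular values of the preconditioned operator as GMRES proceeds, which is what the requested output is meant to help reveal.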
Barry On Apr 28, 2014, at 1:21 PM, Foad Hassaninejadfarahani wrote: > Hello Again; > > I used -ksp_rtol 1.e-12 and it took way way longer to get the result for one iteration and it did not converge: > > Linear solve did not converge due to DIVERGED_ITS iterations 10000 > KSP Object: 8 MPI processes > type: gmres > GMRES: restart=300, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-12, absolute=1e-50, divergence=10000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 8 MPI processes > type: asm > Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 > Additive Schwarz: restriction/interpolation type - RESTRICT > Local solve is same for all blocks, in the following KSP and PC objects: > KSP Object: (sub_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (sub_) 1 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 1e-12 > matrix ordering: nd > factor fill ratio given 5, needed 3.70575 > Factored matrix follows: > Matrix Object: 1 MPI processes > type: seqaij > rows=5630, cols=5630 > package used to perform factorization: petsc > total: nonzeros=877150, allocated nonzeros=877150 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 1126 nodes, limit used is 5 > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=5630, cols=5630 > total: nonzeros=236700, allocated nonzeros=236700 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 1126 nodes, limit used is 5 > linear system matrix = precond matrix: > Matrix Object: 8 MPI processes > type: mpiaij > rows=41000, cols=41000 > total: nonzeros=1817800, allocated nonzeros=2555700 > total number of mallocs used during MatSetValues calls =121180 > using I-node (on process 0) routines: found 1025 nodes, limit used is 5 > > > Well, let me clear everything. I am solving the whole system (air and water) coupled at once. Although originally the system is not linear, but I linearized the equations, so I have some lagged terms. In addition the interface (between two phases) location is wrong at the beginning and should be corrected in each iteration after getting the solution. Therefore, I solve the whole domain, move the interface and again solve the whole domain. This should continue until the interface movement becomes from the order of 1E-12. > > My problem is after getting the converged solution. Restarting from the converged solution, if I use Superlu, it gives me back the converged solution and stops after one iteration. But, if I use any iterative solver, it does not give me back the converged solution and starts moving the interface cause the wrong solution ask for new interface location. This leads to oscillation for ever and for some cases divergence. > > -- > With Best Regards; > Foad > > > Quoting Barry Smith : > >> >> On Apr 28, 2014, at 12:59 PM, Barry Smith wrote: >> >>> >>> First try a much tighter tolerance on the linear solver. Use -ksp_rtol 1.e-12 >>> >>> I don?t fully understand. Is the coupled system nonlinear? 
Are you solving a nonlinear system, how are you doing that since you seem to be only solving a single linear system? Does the linear system involve all unknowns in the fluid and air? >>> >>> Barry >>> >>> >>> >>> On Apr 28, 2014, at 11:19 AM, Foad Hassaninejadfarahani wrote: >>> >>>> Hello PETSc team; >>>> >>>> The PETSc setup in my code is working now. I have issues with using the iterative solver instead of direct solver. >>>> >>>> I am solving a 2D, two-phase flow. Two fluids (air and water) flow into a channel and there is interaction between two phases. I am solving for the velocities in x and y directions, pressure and two scalars. They are all coupled together. I am looking for the steady-state solution. Since there is interface between the phases which needs updating, there are many iterations to reach the steady-state solution. "A" is a nine-banded non-symmetric matrix and each node has five unknowns. I am storing the non-zero coefficients and their locations in three separate vectors. >>>> >>>> I started using the direct solver. Superlu works fine and gives me good results compared to the previous works. However it is not cheap and applicable for fine grids. But, the iterative solver did not work and here is what I did: >>>> >>>> I got the converged solution by using Superlu. After that I restarted from the converged solution and did one iteration using -pc_type lu -pc_factor_mat_solver_package superlu_dist -log_summary. Again, it gave me the same converged solution. >>>> >>>> After that I started from the converged solution once more and this time I tried different combinations of iterative solvers and preconditions like the followings: >>>> -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type lu ksp_monitor_true_residual -ksp_converged_reason -ksp_view -log_summary >>>> >>>> and here is the report: >>>> Linear solve converged due to CONVERGED_RTOL iterations 41 >>>> KSP Object: 8 MPI processes >>>> type: gmres >>>> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >>>> GMRES: happy breakdown tolerance 1e-30 >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using PRECONDITIONED norm type for convergence test >>>> PC Object: 8 MPI processes >>>> type: asm >>>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>>> Additive Schwarz: restriction/interpolation type - RESTRICT >>>> Local solve is same for all blocks, in the following KSP and PC objects: >>>> KSP Object: (sub_) 1 MPI processes >>>> type: preonly >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using NONE norm type for convergence test >>>> PC Object: (sub_) 1 MPI processes >>>> type: lu >>>> LU: out-of-place factorization >>>> tolerance for zero pivot 1e-12 >>>> matrix ordering: nd >>>> factor fill ratio given 5, needed 3.70575 >>>> Factored matrix follows: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=5630, cols=5630 >>>> package used to perform factorization: petsc >>>> total: nonzeros=877150, allocated nonzeros=877150 >>>> total number of mallocs used during MatSetValues calls =0 >>>> using I-node routines: found 1126 nodes, limit used is 5 >>>> linear system matrix = precond matrix: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=5630, cols=5630 >>>> total: nonzeros=236700, allocated 
nonzeros=236700 >>>> total number of mallocs used during MatSetValues calls =0 >>>> using I-node routines: found 1126 nodes, limit used is 5 >>>> linear system matrix = precond matrix: >>>> Matrix Object: 8 MPI processes >>>> type: mpiaij >>>> rows=41000, cols=41000 >>>> total: nonzeros=1817800, allocated nonzeros=2555700 >>>> total number of mallocs used during MatSetValues calls =121180 >>>> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >>>> >>>> But, the results are far from the converged solution. For example two reference nodes for the pressure are compared: >>>> >>>> Based on Superlu >>>> Channel Inlet pressure (MIXTURE): 0.38890D-01 >>>> Channel Inlet pressure (LIQUID): 0.38416D-01 >>>> >>>> Based on Gmres >>>> Channel Inlet pressure (MIXTURE): -0.87214D+00 >>>> Channel Inlet pressure (LIQUID): -0.87301D+00 >>>> >>>> >>>> I also tried this: >>>> -ksp_type gcr -pc_type asm -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -log_summary >>>> >>>> and here is the report: >>>> 0 KSP unpreconditioned resid norm 2.248340888101e+05 true resid norm 2.248340888101e+05 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 4.900010460179e+04 true resid norm 4.900010460179e+04 ||r(i)||/||b|| 2.179389471637e-01 >>>> 2 KSP unpreconditioned resid norm 4.267761572746e+04 true resid norm 4.267761572746e+04 ||r(i)||/||b|| 1.898182608933e-01 >>>> 3 KSP unpreconditioned resid norm 2.041242251471e+03 true resid norm 2.041242251471e+03 ||r(i)||/||b|| 9.078882398457e-03 >>>> 4 KSP unpreconditioned resid norm 1.852885420564e+03 true resid norm 1.852885420564e+03 ||r(i)||/||b|| 8.241123178296e-03 >>>> 5 KSP unpreconditioned resid norm 1.748965594395e+02 true resid norm 1.748965594395e+02 ||r(i)||/||b|| 7.778916460804e-04 >>>> 6 KSP unpreconditioned resid norm 5.664539353996e+01 true resid norm 5.664539353996e+01 ||r(i)||/||b|| 2.519430831852e-04 >>>> 7 KSP unpreconditioned resid norm 3.607535692806e+01 true resid norm 3.607535692806e+01 ||r(i)||/||b|| 1.604532351788e-04 >>>> 8 KSP unpreconditioned resid norm 1.041501303366e+01 true resid norm 1.041501303366e+01 ||r(i)||/||b|| 4.632310468924e-05 >>>> 9 KSP unpreconditioned resid norm 3.089920380322e+00 true resid norm 3.089920380322e+00 ||r(i)||/||b|| 1.374311340720e-05 >>>> 10 KSP unpreconditioned resid norm 1.456883209806e+00 true resid norm 1.456883209806e+00 ||r(i)||/||b|| 6.479814593583e-06 >>>> 11 KSP unpreconditioned resid norm 5.566902714391e-01 true resid norm 5.566902714391e-01 ||r(i)||/||b|| 2.476004748147e-06 >>>> 12 KSP unpreconditioned resid norm 2.403913756663e-01 true resid norm 2.403913756663e-01 ||r(i)||/||b|| 1.069194520006e-06 >>>> 13 KSP unpreconditioned resid norm 1.650435118839e-01 true resid norm 1.650435118839e-01 ||r(i)||/||b|| 7.340680088032e-07 >>>> Linear solve converged due to CONVERGED_RTOL iterations 13 >>>> KSP Object: 8 MPI processes >>>> type: gcr >>>> GCR: restart = 30 >>>> GCR: restarts performed = 1 >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>> right preconditioning >>>> diagonally scaled system >>>> using UNPRECONDITIONED norm type for convergence test >>>> PC Object: 8 MPI processes >>>> type: asm >>>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>>> Additive Schwarz: restriction/interpolation type - RESTRICT >>>> Local solve is same for all blocks, in the following KSP and PC objects: >>>> KSP Object: (sub_) 1 MPI 
processes >>>> type: preonly >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using NONE norm type for convergence test >>>> PC Object: (sub_) 1 MPI processes >>>> type: ilu >>>> ILU: out-of-place factorization >>>> 0 levels of fill >>>> tolerance for zero pivot 1e-12 >>>> using diagonal shift to prevent zero pivot >>>> matrix ordering: natural >>>> factor fill ratio given 1, needed 1 >>>> Factored matrix follows: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=5630, cols=5630 >>>> package used to perform factorization: petsc >>>> total: nonzeros=236700, allocated nonzeros=236700 >>>> total number of mallocs used during MatSetValues calls =0 >>>> using I-node routines: found 1126 nodes, limit used is 5 >>>> linear system matrix = precond matrix: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=5630, cols=5630 >>>> total: nonzeros=236700, allocated nonzeros=236700 >>>> total number of mallocs used during MatSetValues calls =0 >>>> using I-node routines: found 1126 nodes, limit used is 5 >>>> linear system matrix = precond matrix: >>>> Matrix Object: 8 MPI processes >>>> type: mpiaij >>>> rows=41000, cols=41000 >>>> total: nonzeros=1817800, allocated nonzeros=2555700 >>>> total number of mallocs used during MatSetValues calls =121180 >>>> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >>>> >>>> Channel Inlet pressure (MIXTURE): -0.90733D+00 >>>> Channel Inlet pressure (LIQUID): -0.10118D+01 >>>> >>>> >>>> As you may see these are complete different results which are not close to the converged solution. >>>> >>>> Since, I want to have fine grids I need to use iterative solver. I wonder if I am missing something or using wrong solver/precondition/option. I would appreciate if you could help me (like always). >>>> >>>> -- >>>> With Best Regards; >>>> Foad >>>> >>>> >>>> >>>> >>> >> >> >> > > From umhassa5 at cc.umanitoba.ca Mon Apr 28 14:42:35 2014 From: umhassa5 at cc.umanitoba.ca (Foad Hassaninejadfarahani) Date: Mon, 28 Apr 2014 14:42:35 -0500 Subject: [petsc-users] [petsc-maint] Iterative Solver Problem In-Reply-To: References: <20140428111905.14977gpykpia4zs4@webtools.cc.umanitoba.ca> <20140428132135.93114hgxsqjp4lrk@webtools.cc.umanitoba.ca> Message-ID: <20140428144235.14076mu678m6agco@webtools.cc.umanitoba.ca> Hello; I put all those commands. 
It does not work with -ksp_monitor_singular_value and here is the output: 0 KSP preconditioned resid norm 2.622210477042e+04 true resid norm 1.860478790525e+07 ||r(i)||/||b|| 1.000000000000e+00 0 KSP Residual norm 2.622210477042e+04 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP preconditioned resid norm 5.998205227155e+03 true resid norm 4.223014979562e+06 ||r(i)||/||b|| 2.269853868300e-01 1 KSP Residual norm 5.998205227155e+03 % max 8.773486916458e-01 min 8.773486916458e-01 max/min 1.000000000000e+00 2 KSP preconditioned resid norm 1.879862239084e+03 true resid norm 3.444600162270e+06 ||r(i)||/||b|| 1.851458979169e-01 2 KSP Residual norm 1.879862239084e+03 % max 1.229948994481e+00 min 7.933593275578e-01 max/min 1.550305078365e+00 3 KSP preconditioned resid norm 8.529038157181e+02 true resid norm 1.311707893098e+06 ||r(i)||/||b|| 7.050378105779e-02 [3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [3]PETSC ERROR: to get more information on the crash. [3]PETSC ERROR: --------------------- Error Message ------------------------------------ [3]PETSC ERROR: Signal received! [3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [3]PETSC ERROR: See docs/changes/index.html for recent updates. [3]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [3]PETSC ERROR: See docs/index.html for manual pages. [3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [3]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [3]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [3]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! 
[7]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [7]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [7]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[7]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [7]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [7]PETSC ERROR: to get more information on the crash. [7]PETSC ERROR: --------------------- Error Message ------------------------------------ [7]PETSC ERROR: Signal received! [7]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [7]PETSC ERROR: See docs/changes/index.html for recent updates. [7]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [7]PETSC ERROR: See docs/index.html for manual pages. [7]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [7]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [7]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [7]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [7]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: User provided function() line 0 in unknown directory unknown file User provided function() line 0 in unknown directory unknown file [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [1]PETSC ERROR: to get more information on the crash. [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: [4]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [4]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[4]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [4]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [4]PETSC ERROR: to get more information on the crash. 
[4]PETSC ERROR: --------------------- Error Message ------------------------------------ [4]PETSC ERROR: Signal received! [4]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: [5]PETSC ERROR: ------------------------------------------------------------------------ [5]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [5]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [5]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[5]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [5]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [5]PETSC ERROR: to get more information on the crash. [6]PETSC ERROR: ------------------------------------------------------------------------ [6]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [6]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [6]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[6]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [6]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [6]PETSC ERROR: to get more information on the crash. [6]PETSC ERROR: --------------------- Error Message ------------------------------------ [6]PETSC ERROR: Signal received! [6]PETSC ERROR: ------------------------------------------------------------------------ [6]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [6]PETSC ERROR: See docs/changes/index.html for recent updates. [6]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [6]PETSC ERROR: See docs/index.html for manual pages. [6]PETSC ERROR: ------------------------------------------------------------------------ [6]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [6]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [6]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [6]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [6]PETSC ERROR: ------------------------------------------------------------------------ [6]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [0]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [0]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [0]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [1]PETSC ERROR: Signal received! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [1]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [1]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [1]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [2]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [2]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [2]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [2]PETSC ERROR: to get more information on the crash. [2]PETSC ERROR: --------------------- Error Message ------------------------------------ [2]PETSC ERROR: Signal received! [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [4]PETSC ERROR: See docs/changes/index.html for recent updates. [4]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [4]PETSC ERROR: See docs/index.html for manual pages. 
[4]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [4]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [4]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [4]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [4]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [5]PETSC ERROR: --------------------- Error Message ------------------------------------ [5]PETSC ERROR: Signal received! [5]PETSC ERROR: ------------------------------------------------------------------------ [5]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [5]PETSC ERROR: See docs/changes/index.html for recent updates. [5]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [5]PETSC ERROR: See docs/index.html for manual pages. [5]PETSC ERROR: ------------------------------------------------------------------------ [5]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [5]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [5]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [5]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [5]PETSC ERROR: ------------------------------------------------------------------------ [5]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- [2]PETSC ERROR: See docs/changes/index.html for recent updates. [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [2]PETSC ERROR: See docs/index.html for manual pages. 
[2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014 [2]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib [2]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011 [2]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- mpiexec has exited due to process rank 3 with PID 19683 on node mecfd02 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpiexec (as reported here). -------------------------------------------------------------------------- -- With Best Regards; Foad Quoting Barry Smith : > > Please run with the additional options -ksp_max_it 500 > -ksp_gmres_restart 500 -ksp_monitor_true_residual > -ksp_monitor_singular_value and send back all the output (that would > include the 500 residual norms as it tries to converge.) > > Barry > > On Apr 28, 2014, at 1:21 PM, Foad Hassaninejadfarahani > wrote: > >> Hello Again; >> >> I used -ksp_rtol 1.e-12 and it took way way longer to get the >> result for one iteration and it did not converge: >> >> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >> KSP Object: 8 MPI processes >> type: gmres >> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt >> Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-12, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 8 MPI processes >> type: asm >> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >> Additive Schwarz: restriction/interpolation type - RESTRICT >> Local solve is same for all blocks, in the following KSP and PC objects: >> KSP Object: (sub_) 1 MPI processes >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using NONE norm type for convergence test >> PC Object: (sub_) 1 MPI processes >> type: lu >> LU: out-of-place factorization >> tolerance for zero pivot 1e-12 >> matrix ordering: nd >> factor fill ratio given 5, needed 3.70575 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> package used to perform factorization: petsc >> total: nonzeros=877150, allocated nonzeros=877150 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> total: nonzeros=236700, allocated nonzeros=236700 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 
nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 8 MPI processes >> type: mpiaij >> rows=41000, cols=41000 >> total: nonzeros=1817800, allocated nonzeros=2555700 >> total number of mallocs used during MatSetValues calls =121180 >> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >> >> >> Well, let me clarify everything. I am solving the whole system (air >> and water) coupled at once. Although the system is not originally >> linear, I have linearized the equations, so I have some lagged >> terms. In addition, the location of the interface (between the two phases) is >> wrong at the beginning and must be corrected in each iteration >> after obtaining the solution. Therefore, I solve the whole domain, >> move the interface and again solve the whole domain. This should >> continue until the interface movement is of the order of >> 1E-12. >> >> My problem arises after getting the converged solution. Restarting from >> the converged solution, if I use Superlu, it gives me back the >> converged solution and stops after one iteration. But if I use any >> iterative solver, it does not give me back the converged solution >> and starts moving the interface, because the wrong solution asks for >> a new interface location. This leads to oscillation forever and, in >> some cases, divergence. >> >> -- >> With Best Regards; >> Foad >> >> >> Quoting Barry Smith : >> >>> >>> On Apr 28, 2014, at 12:59 PM, Barry Smith wrote: >>>> >>>> First try a much tighter tolerance on the linear solver. Use >>>> -ksp_rtol 1.e-12 >>>> >>>> I don't fully understand. Is the coupled system nonlinear? Are >>>> you solving a nonlinear system, and how are you doing that, since you >>>> seem to be solving only a single linear system? Does the linear >>>> system involve all unknowns in the fluid and air? >>>> >>>> Barry >>>> >>>> >>>> >>>> On Apr 28, 2014, at 11:19 AM, Foad Hassaninejadfarahani >>>> wrote: >>>> >>>>> Hello PETSc team; >>>>> >>>>> The PETSc setup in my code is working now. I have issues with >>>>> using the iterative solver instead of the direct solver. >>>>> >>>>> I am solving a 2D, two-phase flow. Two fluids (air and water) >>>>> flow into a channel and there is interaction between the two phases. >>>>> I am solving for the velocities in the x and y directions, the pressure >>>>> and two scalars. They are all coupled together. I am looking for >>>>> the steady-state solution. Since there is an interface between the >>>>> phases which needs updating, there are many iterations to reach >>>>> the steady-state solution. "A" is a nine-banded non-symmetric >>>>> matrix and each node has five unknowns. I am storing the >>>>> non-zero coefficients and their locations in three separate >>>>> vectors. >>>>> >>>>> I started using the direct solver. Superlu works fine and gives >>>>> me good results compared to previous work. However, it is >>>>> not cheap and not practical for fine grids. But the iterative >>>>> solver did not work, and here is what I did: >>>>> >>>>> I got the converged solution by using Superlu. After that I >>>>> restarted from the converged solution and did one iteration >>>>> using -pc_type lu -pc_factor_mat_solver_package superlu_dist >>>>> -log_summary. Again, it gave me the same converged solution.
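
The -ksp_view output later in this message reports "initial guess is zero" and "relative=1e-06" for the outer GMRES solve, so an iterative solve "restarted from the converged solution" is in fact rebuilt from a zero guess and only to about six digits, which is what the -ksp_rtol 1.e-12 suggestion above is aimed at. Below is a minimal sketch, not taken from the code discussed in this thread, of how one might print the true relative residual ||b - A x||/||b|| of any candidate solution and then re-solve using that solution as a nonzero initial guess with a tighter tolerance; the names ksp, A, b and x are placeholders for the already assembled KSP object, MPIAIJ matrix, right-hand side and candidate solution.

    #include <petscksp.h>

    /* Hypothetical helper: report how well a candidate solution x satisfies
       A x = b, then re-solve from x with a tighter relative tolerance.      */
    PetscErrorCode CheckAndResolve(KSP ksp, Mat A, Vec b, Vec x)
    {
      PetscErrorCode ierr;
      Vec            r;
      PetscReal      rnorm, bnorm;

      PetscFunctionBegin;
      ierr = VecDuplicate(b, &r);CHKERRQ(ierr);
      ierr = MatMult(A, x, r);CHKERRQ(ierr);           /* r = A x     */
      ierr = VecAYPX(r, -1.0, b);CHKERRQ(ierr);        /* r = b - A x */
      ierr = VecNorm(r, NORM_2, &rnorm);CHKERRQ(ierr);
      ierr = VecNorm(b, NORM_2, &bnorm);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "||b - A x|| / ||b|| = %g\n",
                         (double)(rnorm/bnorm));CHKERRQ(ierr);

      /* Keep x as a nonzero initial guess instead of the default zero guess,
         and tighten the relative tolerance before the next solve.           */
      ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr);
      ierr = KSPSetTolerances(ksp, 1.e-12, PETSC_DEFAULT, PETSC_DEFAULT,
                              PETSC_DEFAULT);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

      ierr = VecDestroy(&r);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

The same two settings are available from the command line as -ksp_initial_guess_nonzero and -ksp_rtol 1.e-12, provided the vector passed to KSPSolve still holds the previous solution when the solver is called.
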
>>>>> >>>>> After that I started from the converged solution once more and >>>>> this time I tried different combinations of iterative solvers >>>>> and preconditions like the followings: >>>>> -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type >>>>> lu ksp_monitor_true_residual -ksp_converged_reason -ksp_view >>>>> -log_summary >>>>> >>>>> and here is the report: >>>>> Linear solve converged due to CONVERGED_RTOL iterations 41 >>>>> KSP Object: 8 MPI processes >>>>> type: gmres >>>>> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt >>>>> Orthogonalization with no iterative refinement >>>>> GMRES: happy breakdown tolerance 1e-30 >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using PRECONDITIONED norm type for convergence test >>>>> PC Object: 8 MPI processes >>>>> type: asm >>>>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>>>> Additive Schwarz: restriction/interpolation type - RESTRICT >>>>> Local solve is same for all blocks, in the following KSP and PC objects: >>>>> KSP Object: (sub_) 1 MPI processes >>>>> type: preonly >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object: (sub_) 1 MPI processes >>>>> type: lu >>>>> LU: out-of-place factorization >>>>> tolerance for zero pivot 1e-12 >>>>> matrix ordering: nd >>>>> factor fill ratio given 5, needed 3.70575 >>>>> Factored matrix follows: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> package used to perform factorization: petsc >>>>> total: nonzeros=877150, allocated nonzeros=877150 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> total: nonzeros=236700, allocated nonzeros=236700 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 8 MPI processes >>>>> type: mpiaij >>>>> rows=41000, cols=41000 >>>>> total: nonzeros=1817800, allocated nonzeros=2555700 >>>>> total number of mallocs used during MatSetValues calls =121180 >>>>> using I-node (on process 0) routines: found 1025 nodes, limit >>>>> used is 5 >>>>> >>>>> But, the results are far from the converged solution. 
For >>>>> example two reference nodes for the pressure are compared: >>>>> >>>>> Based on Superlu >>>>> Channel Inlet pressure (MIXTURE): 0.38890D-01 >>>>> Channel Inlet pressure (LIQUID): 0.38416D-01 >>>>> >>>>> Based on Gmres >>>>> Channel Inlet pressure (MIXTURE): -0.87214D+00 >>>>> Channel Inlet pressure (LIQUID): -0.87301D+00 >>>>> >>>>> >>>>> I also tried this: >>>>> -ksp_type gcr -pc_type asm -ksp_diagonal_scale >>>>> -ksp_diagonal_scale_fix -ksp_monitor_true_residual >>>>> -ksp_converged_reason -ksp_view -log_summary >>>>> >>>>> and here is the report: >>>>> 0 KSP unpreconditioned resid norm 2.248340888101e+05 true resid >>>>> norm 2.248340888101e+05 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 4.900010460179e+04 true resid >>>>> norm 4.900010460179e+04 ||r(i)||/||b|| 2.179389471637e-01 >>>>> 2 KSP unpreconditioned resid norm 4.267761572746e+04 true resid >>>>> norm 4.267761572746e+04 ||r(i)||/||b|| 1.898182608933e-01 >>>>> 3 KSP unpreconditioned resid norm 2.041242251471e+03 true resid >>>>> norm 2.041242251471e+03 ||r(i)||/||b|| 9.078882398457e-03 >>>>> 4 KSP unpreconditioned resid norm 1.852885420564e+03 true resid >>>>> norm 1.852885420564e+03 ||r(i)||/||b|| 8.241123178296e-03 >>>>> 5 KSP unpreconditioned resid norm 1.748965594395e+02 true resid >>>>> norm 1.748965594395e+02 ||r(i)||/||b|| 7.778916460804e-04 >>>>> 6 KSP unpreconditioned resid norm 5.664539353996e+01 true resid >>>>> norm 5.664539353996e+01 ||r(i)||/||b|| 2.519430831852e-04 >>>>> 7 KSP unpreconditioned resid norm 3.607535692806e+01 true resid >>>>> norm 3.607535692806e+01 ||r(i)||/||b|| 1.604532351788e-04 >>>>> 8 KSP unpreconditioned resid norm 1.041501303366e+01 true resid >>>>> norm 1.041501303366e+01 ||r(i)||/||b|| 4.632310468924e-05 >>>>> 9 KSP unpreconditioned resid norm 3.089920380322e+00 true resid >>>>> norm 3.089920380322e+00 ||r(i)||/||b|| 1.374311340720e-05 >>>>> 10 KSP unpreconditioned resid norm 1.456883209806e+00 true resid >>>>> norm 1.456883209806e+00 ||r(i)||/||b|| 6.479814593583e-06 >>>>> 11 KSP unpreconditioned resid norm 5.566902714391e-01 true resid >>>>> norm 5.566902714391e-01 ||r(i)||/||b|| 2.476004748147e-06 >>>>> 12 KSP unpreconditioned resid norm 2.403913756663e-01 true resid >>>>> norm 2.403913756663e-01 ||r(i)||/||b|| 1.069194520006e-06 >>>>> 13 KSP unpreconditioned resid norm 1.650435118839e-01 true resid >>>>> norm 1.650435118839e-01 ||r(i)||/||b|| 7.340680088032e-07 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 13 >>>>> KSP Object: 8 MPI processes >>>>> type: gcr >>>>> GCR: restart = 30 >>>>> GCR: restarts performed = 1 >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>>> right preconditioning >>>>> diagonally scaled system >>>>> using UNPRECONDITIONED norm type for convergence test >>>>> PC Object: 8 MPI processes >>>>> type: asm >>>>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>>>> Additive Schwarz: restriction/interpolation type - RESTRICT >>>>> Local solve is same for all blocks, in the following KSP and PC objects: >>>>> KSP Object: (sub_) 1 MPI processes >>>>> type: preonly >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object: (sub_) 1 MPI processes >>>>> type: ilu >>>>> ILU: out-of-place factorization >>>>> 0 levels of fill >>>>> tolerance for zero pivot 1e-12 >>>>> using 
diagonal shift to prevent zero pivot >>>>> matrix ordering: natural >>>>> factor fill ratio given 1, needed 1 >>>>> Factored matrix follows: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> package used to perform factorization: petsc >>>>> total: nonzeros=236700, allocated nonzeros=236700 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> total: nonzeros=236700, allocated nonzeros=236700 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 8 MPI processes >>>>> type: mpiaij >>>>> rows=41000, cols=41000 >>>>> total: nonzeros=1817800, allocated nonzeros=2555700 >>>>> total number of mallocs used during MatSetValues calls =121180 >>>>> using I-node (on process 0) routines: found 1025 nodes, limit >>>>> used is 5 >>>>> >>>>> Channel Inlet pressure (MIXTURE): -0.90733D+00 >>>>> Channel Inlet pressure (LIQUID): -0.10118D+01 >>>>> >>>>> >>>>> As you may see these are complete different results which are >>>>> not close to the converged solution. >>>>> >>>>> Since, I want to have fine grids I need to use iterative solver. >>>>> I wonder if I am missing something or using wrong >>>>> solver/precondition/option. I would appreciate if you could help >>>>> me (like always). >>>>> >>>>> -- >>>>> With Best Regards; >>>>> Foad >>>>> >>>>> >>>>> >>>>> >>>> >>> >>> >>> >> >> > > > From umhassa5 at cc.umanitoba.ca Mon Apr 28 14:59:09 2014 From: umhassa5 at cc.umanitoba.ca (Foad Hassaninejadfarahani) Date: Mon, 28 Apr 2014 14:59:09 -0500 Subject: [petsc-users] [petsc-maint] Iterative Solver Problem In-Reply-To: References: <20140428111905.14977gpykpia4zs4@webtools.cc.umanitoba.ca> <20140428132135.93114hgxsqjp4lrk@webtools.cc.umanitoba.ca> Message-ID: <20140428145909.85014jncv6gh8fsw@webtools.cc.umanitoba.ca> Here are the results that you asked for based on the following options (as I mentioned before -ksp_monitor_singular_value did not work): -ksp_type gmres -ksp_max_it 500 -ksp_gmres_restart 500 -ksp_monitor_true_residual -pc_type asm -sub_pc_type lu -ksp_converged_reason -ksp_view -log_summary 0 KSP preconditioned resid norm 2.622210477042e+04 true resid norm 1.860478790525e+07 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 5.998205227155e+03 true resid norm 4.223014979562e+06 ||r(i)||/||b|| 2.269853868300e-01 2 KSP preconditioned resid norm 1.879862239084e+03 true resid norm 3.444600162270e+06 ||r(i)||/||b|| 1.851458979169e-01 3 KSP preconditioned resid norm 8.529038157181e+02 true resid norm 1.311707893098e+06 ||r(i)||/||b|| 7.050378105779e-02 4 KSP preconditioned resid norm 5.117477039988e+02 true resid norm 1.150626093586e+06 ||r(i)||/||b|| 6.184569797010e-02 5 KSP preconditioned resid norm 2.668129662870e+02 true resid norm 4.043828616910e+05 ||r(i)||/||b|| 2.173541906258e-02 6 KSP preconditioned resid norm 1.406200642770e+02 true resid norm 2.157802026927e+05 ||r(i)||/||b|| 1.159810064977e-02 7 KSP preconditioned resid norm 4.945736602548e+01 true resid norm 8.664213205211e+04 ||r(i)||/||b|| 4.656980369428e-03 8 KSP preconditioned resid norm 2.074113313015e+01 true resid norm 1.871982540148e+03 ||r(i)||/||b|| 1.006183219976e-04 9 KSP preconditioned resid norm 9.175543524744e+00 true resid norm 
1.972683087747e+03 ||r(i)||/||b|| 1.060309366489e-04 10 KSP preconditioned resid norm 5.132382955099e+00 true resid norm 9.668302341603e+02 ||r(i)||/||b|| 5.196674313538e-05 11 KSP preconditioned resid norm 2.805085110849e+00 true resid norm 8.679902185785e+02 ||r(i)||/||b|| 4.665413134506e-05 12 KSP preconditioned resid norm 1.139484572989e+00 true resid norm 1.708233863489e+02 ||r(i)||/||b|| 9.181689531690e-06 13 KSP preconditioned resid norm 5.183239498946e-01 true resid norm 8.061206948868e+01 ||r(i)||/||b|| 4.332866888847e-06 14 KSP preconditioned resid norm 2.548292072900e-01 true resid norm 1.784492887150e+01 ||r(i)||/||b|| 9.591578771219e-07 15 KSP preconditioned resid norm 2.039372407280e-01 true resid norm 1.076957601145e+01 ||r(i)||/||b|| 5.788604560449e-07 16 KSP preconditioned resid norm 1.804982466524e-01 true resid norm 9.739865064005e+00 ||r(i)||/||b|| 5.235138994117e-07 17 KSP preconditioned resid norm 1.700936282311e-01 true resid norm 1.695691923860e+01 ||r(i)||/||b|| 9.114277101657e-07 18 KSP preconditioned resid norm 1.614639032299e-01 true resid norm 4.816194157975e+00 ||r(i)||/||b|| 2.588685333315e-07 19 KSP preconditioned resid norm 1.541008159975e-01 true resid norm 1.351979893208e+01 ||r(i)||/||b|| 7.266838515403e-07 20 KSP preconditioned resid norm 1.441065624835e-01 true resid norm 1.723197534299e+01 ||r(i)||/||b|| 9.262118671147e-07 21 KSP preconditioned resid norm 1.196140411042e-01 true resid norm 6.561761118891e+00 ||r(i)||/||b|| 3.526920678864e-07 22 KSP preconditioned resid norm 9.822321352825e-02 true resid norm 1.061582812231e+01 ||r(i)||/||b|| 5.705965677422e-07 23 KSP preconditioned resid norm 7.431304177587e-02 true resid norm 1.327655908969e+01 ||r(i)||/||b|| 7.136098061053e-07 24 KSP preconditioned resid norm 6.667763866612e-02 true resid norm 2.841492940094e+00 ||r(i)||/||b|| 1.527291229852e-07 25 KSP preconditioned resid norm 6.303986761069e-02 true resid norm 2.414143897545e+00 ||r(i)||/||b|| 1.297592807744e-07 26 KSP preconditioned resid norm 6.099053572176e-02 true resid norm 2.238032868296e+00 ||r(i)||/||b|| 1.202933825257e-07 27 KSP preconditioned resid norm 5.916313912854e-02 true resid norm 2.468465881398e+00 ||r(i)||/||b|| 1.326790659463e-07 28 KSP preconditioned resid norm 5.783250141316e-02 true resid norm 4.252169414392e+00 ||r(i)||/||b|| 2.285524261845e-07 29 KSP preconditioned resid norm 5.517165857664e-02 true resid norm 2.974123231867e+00 ||r(i)||/||b|| 1.598579487718e-07 30 KSP preconditioned resid norm 4.998751735155e-02 true resid norm 2.954311987456e+00 ||r(i)||/||b|| 1.587931022112e-07 31 KSP preconditioned resid norm 4.220190698609e-02 true resid norm 6.558846516228e+00 ||r(i)||/||b|| 3.525354091447e-07 32 KSP preconditioned resid norm 3.807512846045e-02 true resid norm 8.499956982737e-01 ||r(i)||/||b|| 4.568693298749e-08 33 KSP preconditioned resid norm 3.610786244449e-02 true resid norm 5.181174592165e-01 ||r(i)||/||b|| 2.784860874820e-08 34 KSP preconditioned resid norm 3.516902521841e-02 true resid norm 1.154078194829e+00 ||r(i)||/||b|| 6.203124704813e-08 35 KSP preconditioned resid norm 3.430695555312e-02 true resid norm 1.216650859022e+00 ||r(i)||/||b|| 6.539450302891e-08 36 KSP preconditioned resid norm 3.367251402060e-02 true resid norm 1.979303465729e+00 ||r(i)||/||b|| 1.063867793500e-07 37 KSP preconditioned resid norm 3.295560906128e-02 true resid norm 1.817709293239e+00 ||r(i)||/||b|| 9.770115641715e-08 38 KSP preconditioned resid norm 3.199481594350e-02 true resid norm 1.560738982683e+00 ||r(i)||/||b|| 
8.388910374206e-08 39 KSP preconditioned resid norm 2.944618065492e-02 true resid norm 2.882956861820e+00 ||r(i)||/||b|| 1.549577923974e-07 40 KSP preconditioned resid norm 2.663367005560e-02 true resid norm 3.308407886135e-01 ||r(i)||/||b|| 1.778256168780e-08 41 KSP preconditioned resid norm 2.474483734332e-02 true resid norm 9.152316486265e-01 ||r(i)||/||b|| 4.919333954719e-08 42 KSP preconditioned resid norm 2.372729505116e-02 true resid norm 1.076997098988e+00 ||r(i)||/||b|| 5.788816859796e-08 43 KSP preconditioned resid norm 2.315451384044e-02 true resid norm 7.138944574620e-01 ||r(i)||/||b|| 3.837154506129e-08 44 KSP preconditioned resid norm 2.285950220585e-02 true resid norm 1.273945994361e+00 ||r(i)||/||b|| 6.847409391866e-08 45 KSP preconditioned resid norm 2.247768360902e-02 true resid norm 1.309834117856e+00 ||r(i)||/||b|| 7.040306637875e-08 46 KSP preconditioned resid norm 2.211290648456e-02 true resid norm 7.229032032922e-01 ||r(i)||/||b|| 3.885576159072e-08 47 KSP preconditioned resid norm 2.125491655457e-02 true resid norm 1.467907956008e+00 ||r(i)||/||b|| 7.889947273161e-08 48 KSP preconditioned resid norm 2.001194328986e-02 true resid norm 1.045520372766e+00 ||r(i)||/||b|| 5.619630699851e-08 49 KSP preconditioned resid norm 1.871103482690e-02 true resid norm 5.683982921115e-01 ||r(i)||/||b|| 3.055118365263e-08 50 KSP preconditioned resid norm 1.784976733327e-02 true resid norm 7.859172163874e-01 ||r(i)||/||b|| 4.224273990061e-08 51 KSP preconditioned resid norm 1.722235167994e-02 true resid norm 8.153844111284e-01 ||r(i)||/||b|| 4.382658997678e-08 52 KSP preconditioned resid norm 1.692767633876e-02 true resid norm 8.214004787306e-01 ||r(i)||/||b|| 4.414995123373e-08 53 KSP preconditioned resid norm 1.679902147112e-02 true resid norm 7.384467810827e-01 ||r(i)||/||b|| 3.969122275639e-08 54 KSP preconditioned resid norm 1.665329178530e-02 true resid norm 5.308690872668e-01 ||r(i)||/||b|| 2.853400371831e-08 55 KSP preconditioned resid norm 1.628710029317e-02 true resid norm 8.791786276987e-01 ||r(i)||/||b|| 4.725550391524e-08 56 KSP preconditioned resid norm 1.560353224506e-02 true resid norm 7.474689747669e-01 ||r(i)||/||b|| 4.017616210266e-08 57 KSP preconditioned resid norm 1.482857597085e-02 true resid norm 6.511977490116e-01 ||r(i)||/||b|| 3.500162175070e-08 58 KSP preconditioned resid norm 1.430916177716e-02 true resid norm 8.379874473158e-01 ||r(i)||/||b|| 4.504149424242e-08 59 KSP preconditioned resid norm 1.375408057045e-02 true resid norm 9.086139880718e-01 ||r(i)||/||b|| 4.883764290672e-08 60 KSP preconditioned resid norm 1.339008284348e-02 true resid norm 6.964842841507e-01 ||r(i)||/||b|| 3.743575512377e-08 61 KSP preconditioned resid norm 1.325481281566e-02 true resid norm 6.814559395447e-01 ||r(i)||/||b|| 3.662798753822e-08 62 KSP preconditioned resid norm 1.316008245220e-02 true resid norm 5.394241512881e-01 ||r(i)||/||b|| 2.899383502974e-08 63 KSP preconditioned resid norm 1.297054602257e-02 true resid norm 5.540383982993e-01 ||r(i)||/||b|| 2.977934503317e-08 64 KSP preconditioned resid norm 1.267029342520e-02 true resid norm 6.222671963629e-01 ||r(i)||/||b|| 3.344661597499e-08 65 KSP preconditioned resid norm 1.218768500723e-02 true resid norm 6.019256916485e-01 ||r(i)||/||b|| 3.235326813259e-08 66 KSP preconditioned resid norm 1.173171524129e-02 true resid norm 7.974979783411e-01 ||r(i)||/||b|| 4.286520128059e-08 67 KSP preconditioned resid norm 1.120242187096e-02 true resid norm 7.439328341402e-01 ||r(i)||/||b|| 3.998609594094e-08 68 KSP preconditioned 
resid norm 1.090989022280e-02 true resid norm 6.707714484204e-01 ||r(i)||/||b|| 3.605370036125e-08 69 KSP preconditioned resid norm 1.075180649299e-02 true resid norm 6.456430592738e-01 ||r(i)||/||b|| 3.470305937170e-08 70 KSP preconditioned resid norm 1.068398143887e-02 true resid norm 5.140092329889e-01 ||r(i)||/||b|| 2.762779321144e-08 71 KSP preconditioned resid norm 1.061541304445e-02 true resid norm 5.600243898609e-01 ||r(i)||/||b|| 3.010108971481e-08 72 KSP preconditioned resid norm 1.047249627571e-02 true resid norm 6.998966576891e-01 ||r(i)||/||b|| 3.761916885339e-08 73 KSP preconditioned resid norm 1.020027918558e-02 true resid norm 4.636266257888e-01 ||r(i)||/||b|| 2.491974797831e-08 74 KSP preconditioned resid norm 9.876338774474e-03 true resid norm 6.978770495864e-01 ||r(i)||/||b|| 3.751061571572e-08 75 KSP preconditioned resid norm 9.406867350892e-03 true resid norm 5.852304272192e-01 ||r(i)||/||b|| 3.145590426505e-08 76 KSP preconditioned resid norm 9.076310138766e-03 true resid norm 4.704254025560e-01 ||r(i)||/||b|| 2.528517954366e-08 77 KSP preconditioned resid norm 8.893300436215e-03 true resid norm 5.467695072692e-01 ||r(i)||/||b|| 2.938864501191e-08 78 KSP preconditioned resid norm 8.813578822205e-03 true resid norm 4.312260836262e-01 ||r(i)||/||b|| 2.317823163706e-08 79 KSP preconditioned resid norm 8.748471910442e-03 true resid norm 4.820716495205e-01 ||r(i)||/||b|| 2.591116071710e-08 80 KSP preconditioned resid norm 8.661282395294e-03 true resid norm 5.863447781755e-01 ||r(i)||/||b|| 3.151580018873e-08 81 KSP preconditioned resid norm 8.532352878138e-03 true resid norm 5.100248589176e-01 ||r(i)||/||b|| 2.741363467915e-08 82 KSP preconditioned resid norm 8.359845622080e-03 true resid norm 5.496958389615e-01 ||r(i)||/||b|| 2.954593418431e-08 83 KSP preconditioned resid norm 8.103730293968e-03 true resid norm 4.327772638399e-01 ||r(i)||/||b|| 2.326160696074e-08 84 KSP preconditioned resid norm 7.814651802893e-03 true resid norm 3.968564175533e-01 ||r(i)||/||b|| 2.133087566353e-08 85 KSP preconditioned resid norm 7.553683118005e-03 true resid norm 5.015451920734e-01 ||r(i)||/||b|| 2.695785593621e-08 86 KSP preconditioned resid norm 7.424368309992e-03 true resid norm 3.970531542212e-01 ||r(i)||/||b|| 2.134145018171e-08 87 KSP preconditioned resid norm 7.352606110686e-03 true resid norm 4.415932315709e-01 ||r(i)||/||b|| 2.373546174349e-08 88 KSP preconditioned resid norm 7.280042564676e-03 true resid norm 5.109639015446e-01 ||r(i)||/||b|| 2.746410784938e-08 89 KSP preconditioned resid norm 7.171390142951e-03 true resid norm 4.409703319008e-01 ||r(i)||/||b|| 2.370198113231e-08 90 KSP preconditioned resid norm 7.069655565573e-03 true resid norm 4.344117469317e-01 ||r(i)||/||b|| 2.334945978121e-08 91 KSP preconditioned resid norm 6.944694600832e-03 true resid norm 4.400140866139e-01 ||r(i)||/||b|| 2.365058332590e-08 92 KSP preconditioned resid norm 6.805591346380e-03 true resid norm 4.034426989803e-01 ||r(i)||/||b|| 2.168488568829e-08 93 KSP preconditioned resid norm 6.578034682351e-03 true resid norm 4.437828777791e-01 ||r(i)||/||b|| 2.385315436216e-08 94 KSP preconditioned resid norm 6.395854263833e-03 true resid norm 4.319680486901e-01 ||r(i)||/||b|| 2.321811196613e-08 95 KSP preconditioned resid norm 6.256982164629e-03 true resid norm 4.156043449194e-01 ||r(i)||/||b|| 2.233856935301e-08 96 KSP preconditioned resid norm 6.159034987122e-03 true resid norm 4.117408096095e-01 ||r(i)||/||b|| 2.213090585640e-08 97 KSP preconditioned resid norm 6.094488409795e-03 true resid norm 
3.960004222128e-01 ||r(i)||/||b|| 2.128486625215e-08 98 KSP preconditioned resid norm 6.051901763657e-03 true resid norm 3.777028982928e-01 ||r(i)||/||b|| 2.030138156997e-08 99 KSP preconditioned resid norm 6.014576545823e-03 true resid norm 3.742789946061e-01 ||r(i)||/||b|| 2.011734809944e-08 100 KSP preconditioned resid norm 5.956552050091e-03 true resid norm 3.551181951200e-01 ||r(i)||/||b|| 1.908746269662e-08 101 KSP preconditioned resid norm 5.842043633900e-03 true resid norm 4.382746134205e-01 ||r(i)||/||b|| 2.355708732895e-08 102 KSP preconditioned resid norm 5.658344160022e-03 true resid norm 4.596230962871e-01 ||r(i)||/||b|| 2.470455984921e-08 103 KSP preconditioned resid norm 5.485803980234e-03 true resid norm 3.503965499180e-01 ||r(i)||/||b|| 1.883367613232e-08 104 KSP preconditioned resid norm 5.376727133021e-03 true resid norm 3.464527812415e-01 ||r(i)||/||b|| 1.862170012396e-08 105 KSP preconditioned resid norm 5.296512288308e-03 true resid norm 3.496873763914e-01 ||r(i)||/||b|| 1.879555833543e-08 106 KSP preconditioned resid norm 5.243956795563e-03 true resid norm 3.403623443924e-01 ||r(i)||/||b|| 1.829434154938e-08 107 KSP preconditioned resid norm 5.224149348899e-03 true resid norm 3.646760344077e-01 ||r(i)||/||b|| 1.960119278247e-08 108 KSP preconditioned resid norm 5.207350168927e-03 true resid norm 3.502961273653e-01 ||r(i)||/||b|| 1.882827845979e-08 109 KSP preconditioned resid norm 5.174774410129e-03 true resid norm 3.530076169203e-01 ||r(i)||/||b|| 1.897401995218e-08 110 KSP preconditioned resid norm 5.080464512716e-03 true resid norm 3.754354350869e-01 ||r(i)||/||b|| 2.017950631842e-08 111 KSP preconditioned resid norm 4.932196560828e-03 true resid norm 3.319266661424e-01 ||r(i)||/||b|| 1.784092717600e-08 112 KSP preconditioned resid norm 4.819086590844e-03 true resid norm 3.397892392797e-01 ||r(i)||/||b|| 1.826353737598e-08 113 KSP preconditioned resid norm 4.747968645116e-03 true resid norm 3.254579053760e-01 ||r(i)||/||b|| 1.749323384031e-08 114 KSP preconditioned resid norm 4.688081041951e-03 true resid norm 3.186617859066e-01 ||r(i)||/||b|| 1.712794510367e-08 115 KSP preconditioned resid norm 4.639394290711e-03 true resid norm 3.547444504944e-01 ||r(i)||/||b|| 1.906737407064e-08 116 KSP preconditioned resid norm 4.609049915582e-03 true resid norm 3.342056457156e-01 ||r(i)||/||b|| 1.796342142774e-08 117 KSP preconditioned resid norm 4.590310507218e-03 true resid norm 3.273850372603e-01 ||r(i)||/||b|| 1.759681641777e-08 118 KSP preconditioned resid norm 4.537739208221e-03 true resid norm 3.343297460299e-01 ||r(i)||/||b|| 1.797009177060e-08 119 KSP preconditioned resid norm 4.392061311691e-03 true resid norm 2.937387873364e-01 ||r(i)||/||b|| 1.578834377647e-08 120 KSP preconditioned resid norm 4.192610027036e-03 true resid norm 3.026418819871e-01 ||r(i)||/||b|| 1.626688159674e-08 121 KSP preconditioned resid norm 3.979626874668e-03 true resid norm 3.335650008413e-01 ||r(i)||/||b|| 1.792898701883e-08 122 KSP preconditioned resid norm 3.815602083753e-03 true resid norm 3.045581838231e-01 ||r(i)||/||b|| 1.636988206338e-08 123 KSP preconditioned resid norm 3.683106242308e-03 true resid norm 3.098877278696e-01 ||r(i)||/||b|| 1.665634295042e-08 124 KSP preconditioned resid norm 3.491204328621e-03 true resid norm 2.724215703848e-01 ||r(i)||/||b|| 1.464255178678e-08 125 KSP preconditioned resid norm 3.300581833063e-03 true resid norm 2.604278612239e-01 ||r(i)||/||b|| 1.399789465756e-08 126 KSP preconditioned resid norm 3.138524711813e-03 true resid norm 2.979743756364e-01 
||r(i)||/||b|| 1.601600497430e-08 127 KSP preconditioned resid norm 2.998768146737e-03 true resid norm 2.684276442529e-01 ||r(i)||/||b|| 1.442787983501e-08 128 KSP preconditioned resid norm 2.828798841220e-03 true resid norm 2.488787122874e-01 ||r(i)||/||b|| 1.337713246477e-08 129 KSP preconditioned resid norm 2.682445898860e-03 true resid norm 2.713173341660e-01 ||r(i)||/||b|| 1.458319952626e-08 130 KSP preconditioned resid norm 2.557725541906e-03 true resid norm 2.456688134002e-01 ||r(i)||/||b|| 1.320460166767e-08 131 KSP preconditioned resid norm 2.435166675884e-03 true resid norm 2.557230363513e-01 ||r(i)||/||b|| 1.374501217932e-08 132 KSP preconditioned resid norm 2.329723884039e-03 true resid norm 2.459481821293e-01 ||r(i)||/||b|| 1.321961762649e-08 133 KSP preconditioned resid norm 2.233092060836e-03 true resid norm 2.488289469002e-01 ||r(i)||/||b|| 1.337445759486e-08 134 KSP preconditioned resid norm 2.146767644244e-03 true resid norm 2.436791649693e-01 ||r(i)||/||b|| 1.309765885052e-08 135 KSP preconditioned resid norm 2.069055646567e-03 true resid norm 2.443539291969e-01 ||r(i)||/||b|| 1.313392716119e-08 136 KSP preconditioned resid norm 1.998218744259e-03 true resid norm 2.419703114076e-01 ||r(i)||/||b|| 1.300580864667e-08 137 KSP preconditioned resid norm 1.934154419559e-03 true resid norm 2.413673649358e-01 ||r(i)||/||b|| 1.297340051201e-08 138 KSP preconditioned resid norm 1.875352742458e-03 true resid norm 2.402293008081e-01 ||r(i)||/||b|| 1.291223001475e-08 139 KSP preconditioned resid norm 1.821713720491e-03 true resid norm 2.394600637115e-01 ||r(i)||/||b|| 1.287088382469e-08 140 KSP preconditioned resid norm 1.772244386117e-03 true resid norm 2.386749177324e-01 ||r(i)||/||b|| 1.282868253849e-08 141 KSP preconditioned resid norm 1.726660380259e-03 true resid norm 2.380358592609e-01 ||r(i)||/||b|| 1.279433339811e-08 142 KSP preconditioned resid norm 1.684371894472e-03 true resid norm 2.374167740258e-01 ||r(i)||/||b|| 1.276105781130e-08 143 KSP preconditioned resid norm 1.645066345043e-03 true resid norm 2.368897695349e-01 ||r(i)||/||b|| 1.273273152811e-08 144 KSP preconditioned resid norm 1.608378395889e-03 true resid norm 2.363909366220e-01 ||r(i)||/||b|| 1.270591945610e-08 145 KSP preconditioned resid norm 1.574045424112e-03 true resid norm 2.359483824966e-01 ||r(i)||/||b|| 1.268213234670e-08 146 KSP preconditioned resid norm 1.541818911387e-03 true resid norm 2.355369368148e-01 ||r(i)||/||b|| 1.266001730384e-08 147 KSP preconditioned resid norm 1.511495012743e-03 true resid norm 2.351622190575e-01 ||r(i)||/||b|| 1.263987637242e-08 148 KSP preconditioned resid norm 1.482892296987e-03 true resid norm 2.348153672194e-01 ||r(i)||/||b|| 1.262123322315e-08 149 KSP preconditioned resid norm 1.455854398511e-03 true resid norm 2.344953173543e-01 ||r(i)||/||b|| 1.260403066934e-08 150 KSP preconditioned resid norm 1.430243441942e-03 true resid norm 2.341982647419e-01 ||r(i)||/||b|| 1.258806420877e-08 151 KSP preconditioned resid norm 1.405938253032e-03 true resid norm 2.339221629824e-01 ||r(i)||/||b|| 1.257322384828e-08 152 KSP preconditioned resid norm 1.382831488600e-03 true resid norm 2.336647410623e-01 ||r(i)||/||b|| 1.255938752177e-08 153 KSP preconditioned resid norm 1.360827803948e-03 true resid norm 2.334242223406e-01 ||r(i)||/||b|| 1.254645973549e-08 154 KSP preconditioned resid norm 1.339842133796e-03 true resid norm 2.331989794938e-01 ||r(i)||/||b|| 1.253435302146e-08 155 KSP preconditioned resid norm 1.319798326743e-03 true resid norm 2.329876098843e-01 ||r(i)||/||b|| 
1.252299198845e-08 156 KSP preconditioned resid norm 1.300627973461e-03 true resid norm 2.327888723649e-01 ||r(i)||/||b|| 1.251230992530e-08 157 KSP preconditioned resid norm 1.282269425120e-03 true resid norm 2.326016686901e-01 ||r(i)||/||b|| 1.250224780173e-08 158 KSP preconditioned resid norm 1.264666957355e-03 true resid norm 2.324250272145e-01 ||r(i)||/||b|| 1.249275339220e-08 159 KSP preconditioned resid norm 1.247770057982e-03 true resid norm 2.322580834349e-01 ||r(i)||/||b|| 1.248378023000e-08 160 KSP preconditioned resid norm 1.231532816432e-03 true resid norm 2.321000566002e-01 ||r(i)||/||b|| 1.247528635006e-08 161 KSP preconditioned resid norm 1.215913398441e-03 true resid norm 2.319502503289e-01 ||r(i)||/||b|| 1.246723432216e-08 162 KSP preconditioned resid norm 1.200873592223e-03 true resid norm 2.318080450631e-01 ||r(i)||/||b|| 1.245959084531e-08 163 KSP preconditioned resid norm 1.186378414894e-03 true resid norm 2.316728740452e-01 ||r(i)||/||b|| 1.245232545649e-08 164 KSP preconditioned resid norm 1.172395769938e-03 true resid norm 2.315442332791e-01 ||r(i)||/||b|| 1.244541106613e-08 165 KSP preconditioned resid norm 1.158896148065e-03 true resid norm 2.314216550879e-01 ||r(i)||/||b|| 1.243882253679e-08 166 KSP preconditioned resid norm 1.145852365100e-03 true resid norm 2.313047260225e-01 ||r(i)||/||b|| 1.243253764571e-08 167 KSP preconditioned resid norm 1.133239331624e-03 true resid norm 2.311930638485e-01 ||r(i)||/||b|| 1.242653584797e-08 168 KSP preconditioned resid norm 1.121033849900e-03 true resid norm 2.310863209631e-01 ||r(i)||/||b|| 1.242079846005e-08 169 KSP preconditioned resid norm 1.109214434359e-03 true resid norm 2.309841757666e-01 ||r(i)||/||b|| 1.241530819609e-08 170 KSP preconditioned resid norm 1.097761152478e-03 true resid norm 2.308863479957e-01 ||r(i)||/||b|| 1.241004999205e-08 171 KSP preconditioned resid norm 1.086655483361e-03 true resid norm 2.307925634297e-01 ||r(i)||/||b|| 1.240500910868e-08 172 KSP preconditioned resid norm 1.075880191754e-03 true resid norm 2.307025749247e-01 ||r(i)||/||b|| 1.240017226209e-08 173 KSP preconditioned resid norm 1.065419215531e-03 true resid norm 2.306161549158e-01 ||r(i)||/||b|| 1.239552722075e-08 174 KSP preconditioned resid norm 1.055257564987e-03 true resid norm 2.305330988532e-01 ||r(i)||/||b|| 1.239106299020e-08 175 KSP preconditioned resid norm 1.045381232503e-03 true resid norm 2.304532188963e-01 ||r(i)||/||b|| 1.238676947407e-08 176 KSP preconditioned resid norm 1.035777111332e-03 true resid norm 2.303763286387e-01 ||r(i)||/||b|| 1.238263665310e-08 177 KSP preconditioned resid norm 1.026432922454e-03 true resid norm 2.303022696117e-01 ||r(i)||/||b|| 1.237865600966e-08 178 KSP preconditioned resid norm 1.017337148558e-03 true resid norm 2.302308831450e-01 ||r(i)||/||b|| 1.237481901527e-08 179 KSP preconditioned resid norm 1.008478974346e-03 true resid norm 2.301620347457e-01 ||r(i)||/||b|| 1.237111844101e-08 180 KSP preconditioned resid norm 9.998482324606e-04 true resid norm 2.300955845367e-01 ||r(i)||/||b|| 1.236754676853e-08 181 KSP preconditioned resid norm 9.914353544130e-04 true resid norm 2.300314108316e-01 ||r(i)||/||b|| 1.236409745723e-08 182 KSP preconditioned resid norm 9.832313259834e-04 true resid norm 2.299694002578e-01 ||r(i)||/||b|| 1.236076441339e-08 183 KSP preconditioned resid norm 9.752276466110e-04 true resid norm 2.299094435160e-01 ||r(i)||/||b|| 1.235754176220e-08 184 KSP preconditioned resid norm 9.674162923639e-04 true resid norm 2.298514409700e-01 ||r(i)||/||b|| 
1.235442414827e-08 185 KSP preconditioned resid norm 9.597896821203e-04 true resid norm 2.297953006592e-01 ||r(i)||/||b|| 1.235140662874e-08 186 KSP preconditioned resid norm 9.523406466369e-04 true resid norm 2.297409322096e-01 ||r(i)||/||b|| 1.234848434605e-08 187 KSP preconditioned resid norm 9.450624002177e-04 true resid norm 2.296882498964e-01 ||r(i)||/||b|| 1.234565269253e-08 188 KSP preconditioned resid norm 9.379485147290e-04 true resid norm 2.296371855641e-01 ||r(i)||/||b|| 1.234290800484e-08 189 KSP preconditioned resid norm 9.309928957338e-04 true resid norm 2.295876599605e-01 ||r(i)||/||b|| 1.234024602321e-08 190 KSP preconditioned resid norm 9.241897605453e-04 true resid norm 2.295396008850e-01 ||r(i)||/||b|| 1.233766286689e-08 191 KSP preconditioned resid norm 9.175336180183e-04 true resid norm 2.294929517616e-01 ||r(i)||/||b|| 1.233515549494e-08 192 KSP preconditioned resid norm 9.110192499196e-04 true resid norm 2.294476550937e-01 ||r(i)||/||b|| 1.233272081693e-08 193 KSP preconditioned resid norm 9.046416937330e-04 true resid norm 2.294036343745e-01 ||r(i)||/||b|| 1.233035472067e-08 194 KSP preconditioned resid norm 8.983962267697e-04 true resid norm 2.293608510670e-01 ||r(i)||/||b|| 1.232805513479e-08 195 KSP preconditioned resid norm 8.922783514693e-04 true resid norm 2.293192547524e-01 ||r(i)||/||b|| 1.232581934931e-08 196 KSP preconditioned resid norm 8.862837817875e-04 true resid norm 2.292787946262e-01 ||r(i)||/||b|| 1.232364463352e-08 197 KSP preconditioned resid norm 8.804084305754e-04 true resid norm 2.292394175247e-01 ||r(i)||/||b|| 1.232152812986e-08 198 KSP preconditioned resid norm 8.746483978686e-04 true resid norm 2.292010834774e-01 ||r(i)||/||b|| 1.231946768996e-08 199 KSP preconditioned resid norm 8.689999600067e-04 true resid norm 2.291637624697e-01 ||r(i)||/||b|| 1.231746170055e-08 200 KSP preconditioned resid norm 8.634595595170e-04 true resid norm 2.291274049854e-01 ||r(i)||/||b|| 1.231550750013e-08 201 KSP preconditioned resid norm 8.580237956982e-04 true resid norm 2.290919720801e-01 ||r(i)||/||b|| 1.231360299547e-08 202 KSP preconditioned resid norm 8.526894158485e-04 true resid norm 2.290574356997e-01 ||r(i)||/||b|| 1.231174667866e-08 203 KSP preconditioned resid norm 8.474533070857e-04 true resid norm 2.290237596137e-01 ||r(i)||/||b|| 1.230993660235e-08 204 KSP preconditioned resid norm 8.423124887137e-04 true resid norm 2.289909090686e-01 ||r(i)||/||b|| 1.230817089852e-08 205 KSP preconditioned resid norm 8.372641050919e-04 true resid norm 2.289588584825e-01 ||r(i)||/||b|| 1.230644819218e-08 206 KSP preconditioned resid norm 8.323054189692e-04 true resid norm 2.289275744616e-01 ||r(i)||/||b|| 1.230476668842e-08 207 KSP preconditioned resid norm 8.274338052477e-04 true resid norm 2.288970376791e-01 ||r(i)||/||b|| 1.230312534842e-08 208 KSP preconditioned resid norm 8.226467451421e-04 true resid norm 2.288672146645e-01 ||r(i)||/||b|| 1.230152237317e-08 209 KSP preconditioned resid norm 8.179418207082e-04 true resid norm 2.288380816620e-01 ||r(i)||/||b|| 1.229995648580e-08 210 KSP preconditioned resid norm 8.133167097103e-04 true resid norm 2.288096168578e-01 ||r(i)||/||b|| 1.229842651381e-08 211 KSP preconditioned resid norm 8.087691808049e-04 true resid norm 2.287817990843e-01 ||r(i)||/||b|| 1.229693131947e-08 212 KSP preconditioned resid norm 8.042970890175e-04 true resid norm 2.287545984345e-01 ||r(i)||/||b|| 1.229546929530e-08 213 KSP preconditioned resid norm 7.998983714913e-04 true resid norm 2.287280031968e-01 ||r(i)||/||b|| 
1.229403981178e-08 214 KSP preconditioned resid norm 7.955710434887e-04 true resid norm 2.287019910352e-01 ||r(i)||/||b|| 1.229264166837e-08 215 KSP preconditioned resid norm 7.913131946287e-04 true resid norm 2.286765459346e-01 ||r(i)||/||b|| 1.229127400426e-08 216 KSP preconditioned resid norm 7.871229853431e-04 true resid norm 2.286516391503e-01 ||r(i)||/||b|| 1.228993527445e-08 217 KSP preconditioned resid norm 7.829986435369e-04 true resid norm 2.286272693763e-01 ||r(i)||/||b|| 1.228862540872e-08 218 KSP preconditioned resid norm 7.789384614395e-04 true resid norm 2.286034070064e-01 ||r(i)||/||b|| 1.228734281577e-08 219 KSP preconditioned resid norm 7.749407926335e-04 true resid norm 2.285800360455e-01 ||r(i)||/||b|| 1.228608663585e-08 220 KSP preconditioned resid norm 7.710040492496e-04 true resid norm 2.285571480396e-01 ||r(i)||/||b|| 1.228485641457e-08 221 KSP preconditioned resid norm 7.671266993165e-04 true resid norm 2.285347243777e-01 ||r(i)||/||b|| 1.228365115161e-08 222 KSP preconditioned resid norm 7.633072642562e-04 true resid norm 2.285127555462e-01 ||r(i)||/||b|| 1.228247033559e-08 223 KSP preconditioned resid norm 7.595443165147e-04 true resid norm 2.284912245964e-01 ||r(i)||/||b|| 1.228131305554e-08 224 KSP preconditioned resid norm 7.558364773208e-04 true resid norm 2.284701167825e-01 ||r(i)||/||b|| 1.228017851889e-08 225 KSP preconditioned resid norm 7.521824145629e-04 true resid norm 2.284494252947e-01 ||r(i)||/||b|| 1.227906635959e-08 226 KSP preconditioned resid norm 7.485808407788e-04 true resid norm 2.284291297251e-01 ||r(i)||/||b|| 1.227797548074e-08 227 KSP preconditioned resid norm 7.450305112491e-04 true resid norm 2.284092255255e-01 ||r(i)||/||b|| 1.227690563788e-08 228 KSP preconditioned resid norm 7.415302221908e-04 true resid norm 2.283896991871e-01 ||r(i)||/||b|| 1.227585610490e-08 229 KSP preconditioned resid norm 7.380788090416e-04 true resid norm 2.283705422023e-01 ||r(i)||/||b|| 1.227482642454e-08 230 KSP preconditioned resid norm 7.346751448326e-04 true resid norm 2.283517372412e-01 ||r(i)||/||b|| 1.227381566531e-08 231 KSP preconditioned resid norm 7.313181386419e-04 true resid norm 2.283332864722e-01 ||r(i)||/||b|| 1.227282394376e-08 232 KSP preconditioned resid norm 7.280067341260e-04 true resid norm 2.283151684393e-01 ||r(i)||/||b|| 1.227185010665e-08 233 KSP preconditioned resid norm 7.247399081230e-04 true resid norm 2.282973787544e-01 ||r(i)||/||b|| 1.227089391812e-08 234 KSP preconditioned resid norm 7.215166693243e-04 true resid norm 2.282799167157e-01 ||r(i)||/||b|| 1.226995534044e-08 235 KSP preconditioned resid norm 7.183360570113e-04 true resid norm 2.282627562941e-01 ||r(i)||/||b|| 1.226903297455e-08 236 KSP preconditioned resid norm 7.151971398519e-04 true resid norm 2.282459005354e-01 ||r(i)||/||b|| 1.226812698418e-08 237 KSP preconditioned resid norm 7.120990147553e-04 true resid norm 2.282293440867e-01 ||r(i)||/||b|| 1.226723708161e-08 238 KSP preconditioned resid norm 7.090408057796e-04 true resid norm 2.282130711554e-01 ||r(i)||/||b|| 1.226636241798e-08 239 KSP preconditioned resid norm 7.060216630923e-04 true resid norm 2.281970808035e-01 ||r(i)||/||b|| 1.226550294288e-08 240 KSP preconditioned resid norm 7.030407619775e-04 true resid norm 2.281813599523e-01 ||r(i)||/||b|| 1.226465795334e-08 241 KSP preconditioned resid norm 7.000973018899e-04 true resid norm 2.281659027838e-01 ||r(i)||/||b|| 1.226382713664e-08 242 KSP preconditioned resid norm 6.971905055512e-04 true resid norm 2.281507115980e-01 ||r(i)||/||b|| 
1.226301061640e-08 243 KSP preconditioned resid norm 6.943196180880e-04 true resid norm 2.281357689033e-01 ||r(i)||/||b|| 1.226220745247e-08 244 KSP preconditioned resid norm 6.914839062086e-04 true resid norm 2.281210748466e-01 ||r(i)||/||b|| 1.226141765272e-08 245 KSP preconditioned resid norm 6.886826574152e-04 true resid norm 2.281066202026e-01 ||r(i)||/||b|| 1.226064072132e-08 246 KSP preconditioned resid norm 6.859151792529e-04 true resid norm 2.280923969589e-01 ||r(i)||/||b|| 1.225987622759e-08 247 KSP preconditioned resid norm 6.831807985900e-04 true resid norm 2.280784106590e-01 ||r(i)||/||b|| 1.225912446950e-08 248 KSP preconditioned resid norm 6.804788609301e-04 true resid norm 2.280646379303e-01 ||r(i)||/||b|| 1.225838419077e-08 249 KSP preconditioned resid norm 6.778087297544e-04 true resid norm 2.280510903373e-01 ||r(i)||/||b|| 1.225765601300e-08 250 KSP preconditioned resid norm 6.751697858915e-04 true resid norm 2.280377511770e-01 ||r(i)||/||b|| 1.225693903840e-08 251 KSP preconditioned resid norm 6.725614269143e-04 true resid norm 2.280246239704e-01 ||r(i)||/||b|| 1.225623345623e-08 252 KSP preconditioned resid norm 6.699830665622e-04 true resid norm 2.280116959814e-01 ||r(i)||/||b|| 1.225553858193e-08 253 KSP preconditioned resid norm 6.674341341876e-04 true resid norm 2.279989696805e-01 ||r(i)||/||b|| 1.225485454828e-08 254 KSP preconditioned resid norm 6.649140742254e-04 true resid norm 2.279864354329e-01 ||r(i)||/||b|| 1.225418083743e-08 255 KSP preconditioned resid norm 6.624223456843e-04 true resid norm 2.279740931070e-01 ||r(i)||/||b|| 1.225351744228e-08 256 KSP preconditioned resid norm 6.599584216587e-04 true resid norm 2.279619301052e-01 ||r(i)||/||b|| 1.225286368574e-08 257 KSP preconditioned resid norm 6.575217888610e-04 true resid norm 2.279499505857e-01 ||r(i)||/||b|| 1.225221979130e-08 258 KSP preconditioned resid norm 6.551119471720e-04 true resid norm 2.279381539369e-01 ||r(i)||/||b|| 1.225158572609e-08 259 KSP preconditioned resid norm 6.527284092102e-04 true resid norm 2.279265280692e-01 ||r(i)||/||b|| 1.225096084029e-08 260 KSP preconditioned resid norm 6.503706999171e-04 true resid norm 2.279150677391e-01 ||r(i)||/||b|| 1.225034485208e-08 261 KSP preconditioned resid norm 6.480383561599e-04 true resid norm 2.279037754067e-01 ||r(i)||/||b|| 1.224973789367e-08 262 KSP preconditioned resid norm 6.457309263495e-04 true resid norm 2.278926440841e-01 ||r(i)||/||b|| 1.224913958948e-08 263 KSP preconditioned resid norm 6.434479700724e-04 true resid norm 2.278816724344e-01 ||r(i)||/||b|| 1.224854986765e-08 264 KSP preconditioned resid norm 6.411890577384e-04 true resid norm 2.278708569273e-01 ||r(i)||/||b|| 1.224796853841e-08 265 KSP preconditioned resid norm 6.389537702407e-04 true resid norm 2.278601945294e-01 ||r(i)||/||b|| 1.224739543873e-08 266 KSP preconditioned resid norm 6.367416986291e-04 true resid norm 2.278496785824e-01 ||r(i)||/||b|| 1.224683021074e-08 267 KSP preconditioned resid norm 6.345524437961e-04 true resid norm 2.278393107729e-01 ||r(i)||/||b|| 1.224627294507e-08 268 KSP preconditioned resid norm 6.323856161740e-04 true resid norm 2.278290793929e-01 ||r(i)||/||b|| 1.224572301244e-08 269 KSP preconditioned resid norm 6.302408354439e-04 true resid norm 2.278189975206e-01 ||r(i)||/||b|| 1.224518111579e-08 270 KSP preconditioned resid norm 6.281177302553e-04 true resid norm 2.278090479439e-01 ||r(i)||/||b|| 1.224464632997e-08 271 KSP preconditioned resid norm 6.260159379560e-04 true resid norm 2.277992270207e-01 ||r(i)||/||b|| 
1.224411845923e-08 272 KSP preconditioned resid norm 6.239351043318e-04 true resid norm 2.277895437391e-01 ||r(i)||/||b|| 1.224359798667e-08 273 KSP preconditioned resid norm 6.218748833559e-04 true resid norm 2.277799910443e-01 ||r(i)||/||b|| 1.224308453310e-08 274 KSP preconditioned resid norm 6.198349369469e-04 true resid norm 2.277705592500e-01 ||r(i)||/||b|| 1.224257757788e-08 275 KSP preconditioned resid norm 6.178149347363e-04 true resid norm 2.277612586135e-01 ||r(i)||/||b|| 1.224207767234e-08 276 KSP preconditioned resid norm 6.158145538429e-04 true resid norm 2.277520668499e-01 ||r(i)||/||b|| 1.224158361868e-08 277 KSP preconditioned resid norm 6.138334786568e-04 true resid norm 2.277430112949e-01 ||r(i)||/||b|| 1.224109688618e-08 278 KSP preconditioned resid norm 6.118714006300e-04 true resid norm 2.277340559836e-01 ||r(i)||/||b|| 1.224061554173e-08 279 KSP preconditioned resid norm 6.099280180742e-04 true resid norm 2.277252270854e-01 ||r(i)||/||b|| 1.224014099194e-08 280 KSP preconditioned resid norm 6.080030359664e-04 true resid norm 2.277165063564e-01 ||r(i)||/||b|| 1.223967225620e-08 281 KSP preconditioned resid norm 6.060961657604e-04 true resid norm 2.277078943529e-01 ||r(i)||/||b|| 1.223920936442e-08 282 KSP preconditioned resid norm 6.042071252055e-04 true resid norm 2.276993939626e-01 ||r(i)||/||b|| 1.223875247180e-08 283 KSP preconditioned resid norm 6.023356381705e-04 true resid norm 2.276909967784e-01 ||r(i)||/||b|| 1.223830112646e-08 284 KSP preconditioned resid norm 6.004814344746e-04 true resid norm 2.276827043313e-01 ||r(i)||/||b|| 1.223785541071e-08 285 KSP preconditioned resid norm 5.986442497236e-04 true resid norm 2.276745169083e-01 ||r(i)||/||b|| 1.223741533996e-08 286 KSP preconditioned resid norm 5.968238251509e-04 true resid norm 2.276664231577e-01 ||r(i)||/||b|| 1.223698030406e-08 287 KSP preconditioned resid norm 5.950199074650e-04 true resid norm 2.276584395337e-01 ||r(i)||/||b|| 1.223655118742e-08 288 KSP preconditioned resid norm 5.932322487016e-04 true resid norm 2.276505384966e-01 ||r(i)||/||b|| 1.223612650980e-08 289 KSP preconditioned resid norm 5.914606060796e-04 true resid norm 2.276427397608e-01 ||r(i)||/||b|| 1.223570733083e-08 290 KSP preconditioned resid norm 5.897047418634e-04 true resid norm 2.276350322672e-01 ||r(i)||/||b|| 1.223529305609e-08 291 KSP preconditioned resid norm 5.879644232287e-04 true resid norm 2.276274208585e-01 ||r(i)||/||b|| 1.223488394589e-08 292 KSP preconditioned resid norm 5.862394221325e-04 true resid norm 2.276198926130e-01 ||r(i)||/||b|| 1.223447930566e-08 293 KSP preconditioned resid norm 5.845295151879e-04 true resid norm 2.276124585454e-01 ||r(i)||/||b|| 1.223407972747e-08 294 KSP preconditioned resid norm 5.828344835425e-04 true resid norm 2.276051047451e-01 ||r(i)||/||b|| 1.223368446360e-08 295 KSP preconditioned resid norm 5.811541127607e-04 true resid norm 2.275978428963e-01 ||r(i)||/||b|| 1.223329414210e-08 296 KSP preconditioned resid norm 5.794881927099e-04 true resid norm 2.275906670116e-01 ||r(i)||/||b|| 1.223290844113e-08 297 KSP preconditioned resid norm 5.778365174499e-04 true resid norm 2.275835677786e-01 ||r(i)||/||b|| 1.223252686016e-08 298 KSP preconditioned resid norm 5.761988851261e-04 true resid norm 2.275765484745e-01 ||r(i)||/||b|| 1.223214957534e-08 299 KSP preconditioned resid norm 5.745750978659e-04 true resid norm 2.275696146925e-01 ||r(i)||/||b|| 1.223177688730e-08 300 KSP preconditioned resid norm 5.729649616779e-04 true resid norm 2.275627510683e-01 ||r(i)||/||b|| 
1.223140797021e-08 301 KSP preconditioned resid norm 5.713682863554e-04 true resid norm 2.275559709262e-01 ||r(i)||/||b|| 1.223104354025e-08 302 KSP preconditioned resid norm 5.697848853810e-04 true resid norm 2.275492623914e-01 ||r(i)||/||b|| 1.223068295915e-08 303 KSP preconditioned resid norm 5.682145758359e-04 true resid norm 2.275426321888e-01 ||r(i)||/||b|| 1.223032658839e-08 304 KSP preconditioned resid norm 5.666571783106e-04 true resid norm 2.275360696194e-01 ||r(i)||/||b|| 1.222997385287e-08 305 KSP preconditioned resid norm 5.651125168191e-04 true resid norm 2.275295863598e-01 ||r(i)||/||b|| 1.222962538023e-08 306 KSP preconditioned resid norm 5.635804187152e-04 true resid norm 2.275231742717e-01 ||r(i)||/||b|| 1.222928073303e-08 307 KSP preconditioned resid norm 5.620607146115e-04 true resid norm 2.275168211743e-01 ||r(i)||/||b|| 1.222893925655e-08 308 KSP preconditioned resid norm 5.605532383010e-04 true resid norm 2.275105454059e-01 ||r(i)||/||b|| 1.222860193648e-08 309 KSP preconditioned resid norm 5.590578266801e-04 true resid norm 2.275043341609e-01 ||r(i)||/||b|| 1.222826808451e-08 310 KSP preconditioned resid norm 5.575743196751e-04 true resid norm 2.274981921808e-01 ||r(i)||/||b|| 1.222793795551e-08 311 KSP preconditioned resid norm 5.561025601701e-04 true resid norm 2.274921161984e-01 ||r(i)||/||b|| 1.222761137386e-08 312 KSP preconditioned resid norm 5.546423939368e-04 true resid norm 2.274861025252e-01 ||r(i)||/||b|| 1.222728814129e-08 313 KSP preconditioned resid norm 5.531936695668e-04 true resid norm 2.274801473152e-01 ||r(i)||/||b|| 1.222696805111e-08 314 KSP preconditioned resid norm 5.517562384059e-04 true resid norm 2.274742621715e-01 ||r(i)||/||b|| 1.222665172696e-08 315 KSP preconditioned resid norm 5.503299544895e-04 true resid norm 2.274684288202e-01 ||r(i)||/||b|| 1.222633818663e-08 316 KSP preconditioned resid norm 5.489146744809e-04 true resid norm 2.274626652936e-01 ||r(i)||/||b|| 1.222602839935e-08 317 KSP preconditioned resid norm 5.475102576105e-04 true resid norm 2.274569603655e-01 ||r(i)||/||b|| 1.222572176172e-08 318 KSP preconditioned resid norm 5.461165656167e-04 true resid norm 2.274513056984e-01 ||r(i)||/||b|| 1.222541782560e-08 319 KSP preconditioned resid norm 5.447334626896e-04 true resid norm 2.274457171259e-01 ||r(i)||/||b|| 1.222511744204e-08 320 KSP preconditioned resid norm 5.433608154143e-04 true resid norm 2.274401805906e-01 ||r(i)||/||b|| 1.222481985545e-08 321 KSP preconditioned resid norm 5.419984927178e-04 true resid norm 2.274347014589e-01 ||r(i)||/||b|| 1.222452535429e-08 322 KSP preconditioned resid norm 5.406463658158e-04 true resid norm 2.274292714706e-01 ||r(i)||/||b|| 1.222423349456e-08 323 KSP preconditioned resid norm 5.393043081620e-04 true resid norm 2.274239051253e-01 ||r(i)||/||b|| 1.222394505563e-08 324 KSP preconditioned resid norm 5.379721953979e-04 true resid norm 2.274185887375e-01 ||r(i)||/||b|| 1.222365930188e-08 325 KSP preconditioned resid norm 5.366499053049e-04 true resid norm 2.274133248777e-01 ||r(i)||/||b|| 1.222337637150e-08 326 KSP preconditioned resid norm 5.353373177566e-04 true resid norm 2.274081127307e-01 ||r(i)||/||b|| 1.222309622065e-08 327 KSP preconditioned resid norm 5.340343146738e-04 true resid norm 2.274029468427e-01 ||r(i)||/||b|| 1.222281855621e-08 328 KSP preconditioned resid norm 5.327407799788e-04 true resid norm 2.273978353712e-01 ||r(i)||/||b|| 1.222254381664e-08 329 KSP preconditioned resid norm 5.314565995531e-04 true resid norm 2.273927809222e-01 ||r(i)||/||b|| 
1.222227214200e-08 330 KSP preconditioned resid norm 5.301816611941e-04 true resid norm 2.273877604496e-01 ||r(i)||/||b|| 1.222200229359e-08 331 KSP preconditioned resid norm 5.289158545748e-04 true resid norm 2.273827995564e-01 ||r(i)||/||b|| 1.222173564753e-08 332 KSP preconditioned resid norm 5.276590712029e-04 true resid norm 2.273778827831e-01 ||r(i)||/||b|| 1.222147137291e-08 333 KSP preconditioned resid norm 5.264112043825e-04 true resid norm 2.273730105232e-01 ||r(i)||/||b|| 1.222120949086e-08 334 KSP preconditioned resid norm 5.251721491753e-04 true resid norm 2.273681879072e-01 ||r(i)||/||b|| 1.222095027716e-08 335 KSP preconditioned resid norm 5.239418023642e-04 true resid norm 2.273634127611e-01 ||r(i)||/||b|| 1.222069361495e-08 336 KSP preconditioned resid norm 5.227200624168e-04 true resid norm 2.273586761157e-01 ||r(i)||/||b|| 1.222043902213e-08 337 KSP preconditioned resid norm 5.215068294502e-04 true resid norm 2.273539880103e-01 ||r(i)||/||b|| 1.222018703831e-08 338 KSP preconditioned resid norm 5.203020051970e-04 true resid norm 2.273493409278e-01 ||r(i)||/||b|| 1.221993725947e-08 339 KSP preconditioned resid norm 5.191054929715e-04 true resid norm 2.273447356659e-01 ||r(i)||/||b|| 1.221968972845e-08 340 KSP preconditioned resid norm 5.179171976373e-04 true resid norm 2.273401743693e-01 ||r(i)||/||b|| 1.221944456056e-08 341 KSP preconditioned resid norm 5.167370255756e-04 true resid norm 2.273356583138e-01 ||r(i)||/||b|| 1.221920182437e-08 342 KSP preconditioned resid norm 5.155648846541e-04 true resid norm 2.273311761791e-01 ||r(i)||/||b|| 1.221896091140e-08 343 KSP preconditioned resid norm 5.144006841968e-04 true resid norm 2.273267443148e-01 ||r(i)||/||b|| 1.221872270044e-08 344 KSP preconditioned resid norm 5.132443349545e-04 true resid norm 2.273223492524e-01 ||r(i)||/||b|| 1.221848646758e-08 345 KSP preconditioned resid norm 5.120957490761e-04 true resid norm 2.273179883497e-01 ||r(i)||/||b|| 1.221825207078e-08 346 KSP preconditioned resid norm 5.109548400808e-04 true resid norm 2.273136645087e-01 ||r(i)||/||b|| 1.221801966603e-08 347 KSP preconditioned resid norm 5.098215228303e-04 true resid norm 2.273093837127e-01 ||r(i)||/||b|| 1.221778957494e-08 348 KSP preconditioned resid norm 5.086957135023e-04 true resid norm 2.273051488454e-01 ||r(i)||/||b|| 1.221756195250e-08 349 KSP preconditioned resid norm 5.075773295647e-04 true resid norm 2.273009423403e-01 ||r(i)||/||b|| 1.221733585451e-08 350 KSP preconditioned resid norm 5.064662897500e-04 true resid norm 2.272967719436e-01 ||r(i)||/||b|| 1.221711169733e-08 351 KSP preconditioned resid norm 5.053625140303e-04 true resid norm 2.272926387381e-01 ||r(i)||/||b|| 1.221688953917e-08 352 KSP preconditioned resid norm 5.042659235933e-04 true resid norm 2.272885475640e-01 ||r(i)||/||b|| 1.221666964018e-08 353 KSP preconditioned resid norm 5.031764408188e-04 true resid norm 2.272844895049e-01 ||r(i)||/||b|| 1.221645152110e-08 354 KSP preconditioned resid norm 5.020939892553e-04 true resid norm 2.272804697266e-01 ||r(i)||/||b|| 1.221623545961e-08 355 KSP preconditioned resid norm 5.010184935978e-04 true resid norm 2.272764750102e-01 ||r(i)||/||b|| 1.221602074518e-08 356 KSP preconditioned resid norm 4.999498796654e-04 true resid norm 2.272725184153e-01 ||r(i)||/||b|| 1.221580807977e-08 357 KSP preconditioned resid norm 4.988880743802e-04 true resid norm 2.272686009393e-01 ||r(i)||/||b|| 1.221559751698e-08 358 KSP preconditioned resid norm 4.978330057460e-04 true resid norm 2.272647089737e-01 ||r(i)||/||b|| 
1.221538832536e-08 359 KSP preconditioned resid norm 4.967846028282e-04 true resid norm 2.272608549171e-01 ||r(i)||/||b|| 1.221518117135e-08 360 KSP preconditioned resid norm 4.957427957331e-04 true resid norm 2.272570371519e-01 ||r(i)||/||b|| 1.221497596797e-08 361 KSP preconditioned resid norm 4.947075155891e-04 true resid norm 2.272532368496e-01 ||r(i)||/||b|| 1.221477170323e-08 362 KSP preconditioned resid norm 4.936786945270e-04 true resid norm 2.272494806239e-01 ||r(i)||/||b|| 1.221456980758e-08 363 KSP preconditioned resid norm 4.926562656616e-04 true resid norm 2.272457630299e-01 ||r(i)||/||b|| 1.221436998837e-08 364 KSP preconditioned resid norm 4.916401630734e-04 true resid norm 2.272420559634e-01 ||r(i)||/||b|| 1.221417073501e-08 365 KSP preconditioned resid norm 4.906303217906e-04 true resid norm 2.272383875961e-01 ||r(i)||/||b|| 1.221397356172e-08 366 KSP preconditioned resid norm 4.896266777717e-04 true resid norm 2.272347532208e-01 ||r(i)||/||b|| 1.221377821548e-08 367 KSP preconditioned resid norm 4.886291678887e-04 true resid norm 2.272311469888e-01 ||r(i)||/||b|| 1.221358438194e-08 368 KSP preconditioned resid norm 4.876377299101e-04 true resid norm 2.272275679731e-01 ||r(i)||/||b|| 1.221339201126e-08 369 KSP preconditioned resid norm 4.866523024848e-04 true resid norm 2.272240236707e-01 ||r(i)||/||b|| 1.221320150640e-08 370 KSP preconditioned resid norm 4.856728251258e-04 true resid norm 2.272205041611e-01 ||r(i)||/||b|| 1.221301233415e-08 371 KSP preconditioned resid norm 4.846992381953e-04 true resid norm 2.272170183199e-01 ||r(i)||/||b|| 1.221282497156e-08 372 KSP preconditioned resid norm 4.837314828886e-04 true resid norm 2.272135473147e-01 ||r(i)||/||b|| 1.221263840641e-08 373 KSP preconditioned resid norm 4.827695012199e-04 true resid norm 2.272101115439e-01 ||r(i)||/||b|| 1.221245373508e-08 374 KSP preconditioned resid norm 4.818132360072e-04 true resid norm 2.272067081687e-01 ||r(i)||/||b|| 1.221227080501e-08 375 KSP preconditioned resid norm 4.808626308583e-04 true resid norm 2.272033271308e-01 ||r(i)||/||b|| 1.221208907556e-08 376 KSP preconditioned resid norm 4.799176301569e-04 true resid norm 2.271999749848e-01 ||r(i)||/||b|| 1.221190889904e-08 377 KSP preconditioned resid norm 4.789781790487e-04 true resid norm 2.271966450350e-01 ||r(i)||/||b|| 1.221172991555e-08 378 KSP preconditioned resid norm 4.780442234279e-04 true resid norm 2.271933397917e-01 ||r(i)||/||b|| 1.221155226003e-08 379 KSP preconditioned resid norm 4.771157099247e-04 true resid norm 2.271900724801e-01 ||r(i)||/||b|| 1.221137664332e-08 380 KSP preconditioned resid norm 4.761925858919e-04 true resid norm 2.271868170668e-01 ||r(i)||/||b|| 1.221120166614e-08 381 KSP preconditioned resid norm 4.752747993927e-04 true resid norm 2.271835904147e-01 ||r(i)||/||b|| 1.221102823486e-08 382 KSP preconditioned resid norm 4.743622991882e-04 true resid norm 2.271803942131e-01 ||r(i)||/||b|| 1.221085644029e-08 383 KSP preconditioned resid norm 4.734550347255e-04 true resid norm 2.271772113960e-01 ||r(i)||/||b|| 1.221068536513e-08 384 KSP preconditioned resid norm 4.725529561261e-04 true resid norm 2.271740607530e-01 ||r(i)||/||b|| 1.221051601932e-08 385 KSP preconditioned resid norm 4.716560141741e-04 true resid norm 2.271709379123e-01 ||r(i)||/||b|| 1.221034816786e-08 386 KSP preconditioned resid norm 4.707641603049e-04 true resid norm 2.271678371952e-01 ||r(i)||/||b|| 1.221018150554e-08 387 KSP preconditioned resid norm 4.698773465945e-04 true resid norm 2.271647550651e-01 ||r(i)||/||b|| 
1.221001584227e-08 388 KSP preconditioned resid norm 4.689955257484e-04 true resid norm 2.271616974404e-01 ||r(i)||/||b|| 1.220985149614e-08 389 KSP preconditioned resid norm 4.681186510911e-04 true resid norm 2.271586639247e-01 ||r(i)||/||b|| 1.220968844587e-08 390 KSP preconditioned resid norm 4.672466765557e-04 true resid norm 2.271556569050e-01 ||r(i)||/||b|| 1.220952681975e-08 391 KSP preconditioned resid norm 4.663795566737e-04 true resid norm 2.271526629300e-01 ||r(i)||/||b|| 1.220936589478e-08 392 KSP preconditioned resid norm 4.655172465652e-04 true resid norm 2.271496888861e-01 ||r(i)||/||b|| 1.220920604110e-08 393 KSP preconditioned resid norm 4.646597019289e-04 true resid norm 2.271467476964e-01 ||r(i)||/||b|| 1.220904795331e-08 394 KSP preconditioned resid norm 4.638068790326e-04 true resid norm 2.271438309499e-01 ||r(i)||/||b|| 1.220889117934e-08 395 KSP preconditioned resid norm 4.629587347041e-04 true resid norm 2.271409246445e-01 ||r(i)||/||b|| 1.220873496657e-08 396 KSP preconditioned resid norm 4.621152263217e-04 true resid norm 2.271380445215e-01 ||r(i)||/||b|| 1.220858016110e-08 397 KSP preconditioned resid norm 4.612763118054e-04 true resid norm 2.271351835881e-01 ||r(i)||/||b|| 1.220842638706e-08 398 KSP preconditioned resid norm 4.604419496079e-04 true resid norm 2.271323410508e-01 ||r(i)||/||b|| 1.220827360180e-08 399 KSP preconditioned resid norm 4.596120987061e-04 true resid norm 2.271295276622e-01 ||r(i)||/||b|| 1.220812238328e-08 400 KSP preconditioned resid norm 4.587867185927e-04 true resid norm 2.271267325128e-01 ||r(i)||/||b|| 1.220797214510e-08 401 KSP preconditioned resid norm 4.579657692675e-04 true resid norm 2.271239586573e-01 ||r(i)||/||b|| 1.220782305146e-08 402 KSP preconditioned resid norm 4.571492112299e-04 true resid norm 2.271212006983e-01 ||r(i)||/||b|| 1.220767481226e-08 403 KSP preconditioned resid norm 4.563370054704e-04 true resid norm 2.271184630789e-01 ||r(i)||/||b|| 1.220752766630e-08 404 KSP preconditioned resid norm 4.555291134628e-04 true resid norm 2.271157424499e-01 ||r(i)||/||b|| 1.220738143356e-08 405 KSP preconditioned resid norm 4.547254971569e-04 true resid norm 2.271130457614e-01 ||r(i)||/||b|| 1.220723648762e-08 406 KSP preconditioned resid norm 4.539261189705e-04 true resid norm 2.271103648407e-01 ||r(i)||/||b|| 1.220709238919e-08 407 KSP preconditioned resid norm 4.531309417826e-04 true resid norm 2.271076977760e-01 ||r(i)||/||b|| 1.220694903552e-08 408 KSP preconditioned resid norm 4.523399289253e-04 true resid norm 2.271050597086e-01 ||r(i)||/||b|| 1.220680724044e-08 409 KSP preconditioned resid norm 4.515530441776e-04 true resid norm 2.271024370999e-01 ||r(i)||/||b|| 1.220666627626e-08 410 KSP preconditioned resid norm 4.507702517580e-04 true resid norm 2.270998303189e-01 ||r(i)||/||b|| 1.220652616281e-08 411 KSP preconditioned resid norm 4.499915163175e-04 true resid norm 2.270972384154e-01 ||r(i)||/||b|| 1.220638684902e-08 412 KSP preconditioned resid norm 4.492168029333e-04 true resid norm 2.270946735014e-01 ||r(i)||/||b|| 1.220624898591e-08 413 KSP preconditioned resid norm 4.484460771019e-04 true resid norm 2.270921205245e-01 ||r(i)||/||b|| 1.220611176440e-08 414 KSP preconditioned resid norm 4.476793047329e-04 true resid norm 2.270895903425e-01 ||r(i)||/||b|| 1.220597576812e-08 415 KSP preconditioned resid norm 4.469164521424e-04 true resid norm 2.270870676456e-01 ||r(i)||/||b|| 1.220584017416e-08 416 KSP preconditioned resid norm 4.461574860470e-04 true resid norm 2.270845680213e-01 ||r(i)||/||b|| 
1.220570582034e-08 417 KSP preconditioned resid norm 4.454023735575e-04 true resid norm 2.270820869287e-01 ||r(i)||/||b|| 1.220557246259e-08 418 KSP preconditioned resid norm 4.446510821733e-04 true resid norm 2.270796200224e-01 ||r(i)||/||b|| 1.220543986735e-08 419 KSP preconditioned resid norm 4.439035797760e-04 true resid norm 2.270771634200e-01 ||r(i)||/||b|| 1.220530782594e-08 420 KSP preconditioned resid norm 4.431598346240e-04 true resid norm 2.270747400295e-01 ||r(i)||/||b|| 1.220517756967e-08 421 KSP preconditioned resid norm 4.424198153467e-04 true resid norm 2.270723202497e-01 ||r(i)||/||b|| 1.220504750746e-08 422 KSP preconditioned resid norm 4.416834909390e-04 true resid norm 2.270699179499e-01 ||r(i)||/||b|| 1.220491838479e-08 423 KSP preconditioned resid norm 4.409508307558e-04 true resid norm 2.270675366511e-01 ||r(i)||/||b|| 1.220479039092e-08 424 KSP preconditioned resid norm 4.402218045067e-04 true resid norm 2.270651645741e-01 ||r(i)||/||b|| 1.220466289272e-08 425 KSP preconditioned resid norm 4.394963822506e-04 true resid norm 2.270628099520e-01 ||r(i)||/||b|| 1.220453633271e-08 426 KSP preconditioned resid norm 4.387745343908e-04 true resid norm 2.270604733944e-01 ||r(i)||/||b|| 1.220441074366e-08 427 KSP preconditioned resid norm 4.380562316695e-04 true resid norm 2.270581537641e-01 ||r(i)||/||b|| 1.220428606445e-08 428 KSP preconditioned resid norm 4.373414451635e-04 true resid norm 2.270558434526e-01 ||r(i)||/||b|| 1.220416188612e-08 429 KSP preconditioned resid norm 4.366301462784e-04 true resid norm 2.270535516584e-01 ||r(i)||/||b|| 1.220403870309e-08 430 KSP preconditioned resid norm 4.359223067447e-04 true resid norm 2.270512761768e-01 ||r(i)||/||b|| 1.220391639685e-08 431 KSP preconditioned resid norm 4.352178986124e-04 true resid norm 2.270490152990e-01 ||r(i)||/||b|| 1.220379487556e-08 432 KSP preconditioned resid norm 4.345168942468e-04 true resid norm 2.270467633796e-01 ||r(i)||/||b|| 1.220367383578e-08 433 KSP preconditioned resid norm 4.338192663236e-04 true resid norm 2.270445319892e-01 ||r(i)||/||b|| 1.220355389943e-08 434 KSP preconditioned resid norm 4.331249878248e-04 true resid norm 2.270423115941e-01 ||r(i)||/||b|| 1.220343455407e-08 435 KSP preconditioned resid norm 4.324340320340e-04 true resid norm 2.270401105554e-01 ||r(i)||/||b|| 1.220331624911e-08 436 KSP preconditioned resid norm 4.317463725322e-04 true resid norm 2.270379186454e-01 ||r(i)||/||b|| 1.220319843482e-08 437 KSP preconditioned resid norm 4.310619831936e-04 true resid norm 2.270357391552e-01 ||r(i)||/||b|| 1.220308128808e-08 438 KSP preconditioned resid norm 4.303808381813e-04 true resid norm 2.270335770775e-01 ||r(i)||/||b|| 1.220296507725e-08 439 KSP preconditioned resid norm 4.297029119434e-04 true resid norm 2.270314323086e-01 ||r(i)||/||b|| 1.220284979677e-08 440 KSP preconditioned resid norm 4.290281792087e-04 true resid norm 2.270292968363e-01 ||r(i)||/||b|| 1.220273501598e-08 441 KSP preconditioned resid norm 4.283566149830e-04 true resid norm 2.270271716039e-01 ||r(i)||/||b|| 1.220262078558e-08 442 KSP preconditioned resid norm 4.276881945451e-04 true resid norm 2.270250607831e-01 ||r(i)||/||b|| 1.220250732980e-08 443 KSP preconditioned resid norm 4.270228934431e-04 true resid norm 2.270229646143e-01 ||r(i)||/||b|| 1.220239466155e-08 444 KSP preconditioned resid norm 4.263606874903e-04 true resid norm 2.270208861679e-01 ||r(i)||/||b|| 1.220228294588e-08 445 KSP preconditioned resid norm 4.257015527619e-04 true resid norm 2.270188171070e-01 ||r(i)||/||b|| 
1.220217173467e-08 446 KSP preconditioned resid norm 4.250454655911e-04 true resid norm 2.270167567033e-01 ||r(i)||/||b|| 1.220206098879e-08 447 KSP preconditioned resid norm 4.243924025659e-04 true resid norm 2.270147147863e-01 ||r(i)||/||b|| 1.220195123655e-08 448 KSP preconditioned resid norm 4.237423405250e-04 true resid norm 2.270126868721e-01 ||r(i)||/||b|| 1.220184223697e-08 449 KSP preconditioned resid norm 4.230952565548e-04 true resid norm 2.270106674291e-01 ||r(i)||/||b|| 1.220173369270e-08 450 KSP preconditioned resid norm 4.224511279860e-04 true resid norm 2.270086563071e-01 ||r(i)||/||b|| 1.220162559569e-08 451 KSP preconditioned resid norm 4.218099323900e-04 true resid norm 2.270066621367e-01 ||r(i)||/||b|| 1.220151840982e-08 452 KSP preconditioned resid norm 4.211716475759e-04 true resid norm 2.270046806121e-01 ||r(i)||/||b|| 1.220141190366e-08 453 KSP preconditioned resid norm 4.205362515871e-04 true resid norm 2.270027093782e-01 ||r(i)||/||b|| 1.220130595061e-08 454 KSP preconditioned resid norm 4.199037226981e-04 true resid norm 2.270007533393e-01 ||r(i)||/||b|| 1.220120081429e-08 455 KSP preconditioned resid norm 4.192740394116e-04 true resid norm 2.269987974986e-01 ||r(i)||/||b|| 1.220109568863e-08 456 KSP preconditioned resid norm 4.186471804550e-04 true resid norm 2.269968704692e-01 ||r(i)||/||b|| 1.220099211156e-08 457 KSP preconditioned resid norm 4.180231247782e-04 true resid norm 2.269949465846e-01 ||r(i)||/||b|| 1.220088870352e-08 458 KSP preconditioned resid norm 4.174018515494e-04 true resid norm 2.269930380515e-01 ||r(i)||/||b|| 1.220078612062e-08 459 KSP preconditioned resid norm 4.167833401536e-04 true resid norm 2.269911319289e-01 ||r(i)||/||b|| 1.220068366728e-08 460 KSP preconditioned resid norm 4.161675701884e-04 true resid norm 2.269892413764e-01 ||r(i)||/||b|| 1.220058205083e-08 461 KSP preconditioned resid norm 4.155545214621e-04 true resid norm 2.269873679167e-01 ||r(i)||/||b|| 1.220048135312e-08 462 KSP preconditioned resid norm 4.149441739906e-04 true resid norm 2.269854983148e-01 ||r(i)||/||b|| 1.220038086275e-08 463 KSP preconditioned resid norm 4.143365079946e-04 true resid norm 2.269836387108e-01 ||r(i)||/||b|| 1.220028090977e-08 464 KSP preconditioned resid norm 4.137315038968e-04 true resid norm 2.269817985566e-01 ||r(i)||/||b|| 1.220018200221e-08 465 KSP preconditioned resid norm 4.131291423197e-04 true resid norm 2.269799604871e-01 ||r(i)||/||b|| 1.220008320670e-08 466 KSP preconditioned resid norm 4.125294040826e-04 true resid norm 2.269781397259e-01 ||r(i)||/||b|| 1.219998534151e-08 467 KSP preconditioned resid norm 4.119322701991e-04 true resid norm 2.269763300365e-01 ||r(i)||/||b|| 1.219988807142e-08 468 KSP preconditioned resid norm 4.113377218747e-04 true resid norm 2.269745252023e-01 ||r(i)||/||b|| 1.219979106229e-08 469 KSP preconditioned resid norm 4.107457405041e-04 true resid norm 2.269727304101e-01 ||r(i)||/||b|| 1.219969459292e-08 470 KSP preconditioned resid norm 4.101563076692e-04 true resid norm 2.269709535702e-01 ||r(i)||/||b|| 1.219959908848e-08 471 KSP preconditioned resid norm 4.095694051360e-04 true resid norm 2.269691758088e-01 ||r(i)||/||b|| 1.219950353450e-08 472 KSP preconditioned resid norm 4.089850148528e-04 true resid norm 2.269674151244e-01 ||r(i)||/||b|| 1.219940889841e-08 473 KSP preconditioned resid norm 4.084031189479e-04 true resid norm 2.269656652530e-01 ||r(i)||/||b|| 1.219931484352e-08 474 KSP preconditioned resid norm 4.078236997266e-04 true resid norm 2.269639202302e-01 ||r(i)||/||b|| 
1.219922104923e-08 475 KSP preconditioned resid norm 4.072467396699e-04 true resid norm 2.269621861633e-01 ||r(i)||/||b|| 1.219912784382e-08 476 KSP preconditioned resid norm 4.066722214315e-04 true resid norm 2.269604712779e-01 ||r(i)||/||b|| 1.219903566941e-08 477 KSP preconditioned resid norm 4.061001278360e-04 true resid norm 2.269587553579e-01 ||r(i)||/||b|| 1.219894343938e-08 478 KSP preconditioned resid norm 4.055304418768e-04 true resid norm 2.269570522399e-01 ||r(i)||/||b|| 1.219885189746e-08 479 KSP preconditioned resid norm 4.049631467137e-04 true resid norm 2.269553629850e-01 ||r(i)||/||b|| 1.219876110068e-08 480 KSP preconditioned resid norm 4.043982256709e-04 true resid norm 2.269536819539e-01 ||r(i)||/||b|| 1.219867074593e-08 481 KSP preconditioned resid norm 4.038356622351e-04 true resid norm 2.269520051347e-01 ||r(i)||/||b|| 1.219858061755e-08 482 KSP preconditioned resid norm 4.032754400533e-04 true resid norm 2.269503391324e-01 ||r(i)||/||b|| 1.219849107059e-08 483 KSP preconditioned resid norm 4.027175429308e-04 true resid norm 2.269486835371e-01 ||r(i)||/||b|| 1.219840208300e-08 484 KSP preconditioned resid norm 4.021619548294e-04 true resid norm 2.269470378222e-01 ||r(i)||/||b|| 1.219831362647e-08 485 KSP preconditioned resid norm 4.016086598653e-04 true resid norm 2.269453968781e-01 ||r(i)||/||b|| 1.219822542637e-08 486 KSP preconditioned resid norm 4.010576423073e-04 true resid norm 2.269437660933e-01 ||r(i)||/||b|| 1.219813777233e-08 487 KSP preconditioned resid norm 4.005088865749e-04 true resid norm 2.269421551975e-01 ||r(i)||/||b|| 1.219805118732e-08 488 KSP preconditioned resid norm 3.999623772362e-04 true resid norm 2.269405444292e-01 ||r(i)||/||b|| 1.219796460916e-08 489 KSP preconditioned resid norm 3.994180990066e-04 true resid norm 2.269389343314e-01 ||r(i)||/||b|| 1.219787806704e-08 490 KSP preconditioned resid norm 3.988760367466e-04 true resid norm 2.269373508449e-01 ||r(i)||/||b|| 1.219779295527e-08 491 KSP preconditioned resid norm 3.983361754599e-04 true resid norm 2.269357662877e-01 ||r(i)||/||b|| 1.219770778594e-08 492 KSP preconditioned resid norm 3.977985002923e-04 true resid norm 2.269341873098e-01 ||r(i)||/||b|| 1.219762291650e-08 493 KSP preconditioned resid norm 3.972629965292e-04 true resid norm 2.269326165890e-01 ||r(i)||/||b|| 1.219753849088e-08 494 KSP preconditioned resid norm 3.967296495946e-04 true resid norm 2.269310627073e-01 ||r(i)||/||b|| 1.219745497035e-08 495 KSP preconditioned resid norm 3.961984450490e-04 true resid norm 2.269295113684e-01 ||r(i)||/||b|| 1.219737158650e-08 496 KSP preconditioned resid norm 3.956693685876e-04 true resid norm 2.269279722981e-01 ||r(i)||/||b|| 1.219728886208e-08 497 KSP preconditioned resid norm 3.951424060395e-04 true resid norm 2.269264344699e-01 ||r(i)||/||b|| 1.219720620442e-08 498 KSP preconditioned resid norm 3.946175433651e-04 true resid norm 2.269249136179e-01 ||r(i)||/||b|| 1.219712445923e-08 499 KSP preconditioned resid norm 3.940947666552e-04 true resid norm 2.269233870762e-01 ||r(i)||/||b|| 1.219704240822e-08 500 KSP preconditioned resid norm 3.935740621292e-04 true resid norm 2.269218859991e-01 ||r(i)||/||b|| 1.219696172592e-08 Linear solve did not converge due to DIVERGED_ITS iterations 500 KSP Object: 8 MPI processes type: gmres GMRES: restart=500, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=500, initial guess is zero tolerances: relative=1e-12, absolute=1e-50, divergence=10000 
left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: asm Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 Additive Schwarz: restriction/interpolation type - RESTRICT Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 1e-12 matrix ordering: nd factor fill ratio given 5, needed 3.70575 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=5630, cols=5630 package used to perform factorization: petsc total: nonzeros=877150, allocated nonzeros=877150 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1126 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=5630, cols=5630 total: nonzeros=236700, allocated nonzeros=236700 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1126 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=41000, cols=41000 total: nonzeros=1817800, allocated nonzeros=2555700 total number of mallocs used during MatSetValues calls =121180 using I-node (on process 0) routines: found 1025 nodes, limit used is 5 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 with 8 processors, by umhassa5 Mon Apr 28 14:55:39 2014 Using Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 Max Max/Min Avg Total Time (sec): 7.395e+01 1.04834 7.331e+01 Objects: 1.541e+03 1.00000 1.541e+03 Flops: 5.413e+09 1.03192 5.360e+09 4.288e+10 Flops/sec: 7.655e+07 1.07860 7.314e+07 5.851e+08 MPI Messages: 3.108e+03 1.95718 2.732e+03 2.185e+04 MPI Message Lengths: 3.360e+07 2.89490 7.101e+03 1.552e+08 MPI Reductions: 1.559e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 7.3305e+01 100.0% 4.2880e+10 100.0% 2.185e+04 100.0% 7.101e+03 100.0% 1.558e+03 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. 
len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage MatMult 1001 1.0 2.0220e+00 1.2 4.54e+08 1.0 1.4e+04 4.0e+03 0.0e+00 3 8 64 36 0 3 8 64 36 0 1779 MatSolve 501 1.0 3.0825e+00 1.2 1.00e+09 1.1 0.0e+00 0.0e+00 0.0e+00 4 18 0 0 0 4 18 0 0 0 2510 MatLUFactorSym 1 1.0 3.1987e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 1 1.0 9.0718e-02 1.6 8.21e+07 1.5 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 5825 MatAssemblyBegin 2 1.0 1.1914e+01190.5 0.00e+00 0.0 4.2e+01 1.2e+06 2.0e+00 14 0 0 33 0 14 0 0 33 0 0 MatAssemblyEnd 2 1.0 3.0220e+01 1.0 0.00e+00 0.0 5.6e+01 1.0e+03 9.0e+00 41 0 0 0 1 41 0 0 0 1 0 MatGetRowIJ 1 1.0 2.6202e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetSubMatrice 1 1.0 2.6700e-02 1.1 0.00e+00 0.0 1.4e+02 5.5e+04 7.0e+00 0 0 1 5 0 0 0 1 5 0 0 MatGetOrdering 1 1.0 1.0290e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatIncreaseOvrlp 1 1.0 3.5419e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 3 3.0 3.3307e-04 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecMDot 500 1.0 3.7091e+00 1.2 1.28e+09 1.0 0.0e+00 0.0e+00 5.0e+02 5 24 0 0 32 5 24 0 0 32 2769 VecNorm 1003 1.0 9.0194e-01 5.3 1.03e+07 1.0 0.0e+00 0.0e+00 1.0e+03 1 0 0 0 64 1 0 0 0 64 91 VecScale 501 1.0 4.4811e-03 1.1 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4584 VecCopy 502 1.0 2.6488e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 1505 1.0 3.8517e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 501 1.0 6.1393e-03 1.1 5.14e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6692 VecAYPX 501 1.0 1.7209e-02 1.2 2.57e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1194 VecMAXPY 1001 1.0 6.7261e+00 1.1 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 9 48 0 0 0 9 48 0 0 0 3060 VecAssemblyBegin 1 1.0 1.2943e-02 1.2 0.00e+00 0.0 2.2e+02 3.1e+04 3.0e+00 0 0 1 4 0 0 0 1 4 0 0 VecAssemblyEnd 1 1.0 1.7521e-03 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 2004 1.0 9.6959e-02 1.3 0.00e+00 0.0 2.1e+04 4.1e+03 0.0e+00 0 0 96 56 0 0 0 96 56 0 0 VecScatterEnd 2004 1.0 5.1422e-01 5.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 501 1.0 5.3980e-01 5.8 7.70e+06 1.0 0.0e+00 0.0e+00 5.0e+02 0 0 0 0 32 0 0 0 0 32 114 KSPGMRESOrthog 500 1.0 6.9229e+00 1.1 2.57e+09 1.0 0.0e+00 0.0e+00 5.0e+02 9 48 0 0 32 9 48 0 0 32 2967 KSPSetup 2 1.0 2.8701e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 1.5673e+01 1.0 5.41e+09 1.0 2.1e+04 4.4e+03 1.5e+03 21100 97 60 98 21100 97 60 98 2736 PCSetUp 2 1.0 1.5435e-01 1.4 8.21e+07 1.5 2.0e+02 4.0e+04 2.6e+01 0 1 1 5 2 0 1 1 5 2 
3423 PCSetUpOnBlocks 1 1.0 1.2381e-01 1.5 8.21e+07 1.5 0.0e+00 0.0e+00 7.0e+00 0 1 0 0 0 0 1 0 0 0 4268 PCApply 501 1.0 3.2335e+00 1.2 1.00e+09 1.1 7.0e+03 4.0e+03 0.0e+00 4 18 32 18 0 4 18 32 18 0 2393 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Matrix 5 5 14741588 0 Vector 1515 1515 64610424 0 Vector Scatter 3 3 3180 0 Index Set 12 12 413568 0 Krylov Solver 2 2 4048456 0 Preconditioner 2 2 1832 0 Viewer 2 1 720 0 ======================================================================================================================== Average time to get PetscTime(): 1.90735e-07 Average time for MPI_Barrier(): 8.2016e-06 Average time for zero size MPI_Send(): 1.05202e-05 #PETSc Option Table entries: -ksp_converged_reason -ksp_gmres_restart 500 -ksp_max_it 500 -ksp_monitor_true_residual -ksp_type gmres -ksp_view -log_summary -pc_type asm -sub_pc_type lu #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 Configure run at: Sat Dec 31 07:53:05 2011 Configure options: --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes ----------------------------------------- Libraries compiled on Sat Dec 31 07:53:05 2011 on mecfd02 Machine characteristics: Linux-2.6.18-238.19.1.el5-x86_64-with-redhat-5.7-Final Using PETSc directory: /home/mecfd/common/sw/petsc-3.2-p5-pgi Using PETSc arch: arch-linux2-c-opt ----------------------------------------- Using C compiler: /home/mecfd/common/openmpi-p/bin/mpicc -fPIC -O ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /home/mecfd/common/openmpi-p/bin/mpif90 -fPIC -O ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/include -I/home/mecfd/common/sw/petsc-3.2-p5-pgi/include -I/home/mecfd/common/sw/petsc-3.2-p5-pgi/include -I/home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/include -I/home/mecfd/common/openmpi-p/include -I/home/mecfd/common/sw/openmpi-1.4.4-pgi/include ----------------------------------------- Using C linker: /home/mecfd/common/openmpi-p/bin/mpicc Using Fortran linker: /home/mecfd/common/openmpi-p/bin/mpif90 Using libraries: -Wl,-rpath,/home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib -L/home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib -L/home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib -lsuperlu_dist_2.5 -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lHYPRE -Wl,-rpath,/home/mecfd/common/sw/openmpi-1.4.4-pgi/lib -Wl,-rpath,/local/linux-local/pgi/linux86-64/10.0/libso -Wl,-rpath,/local/linux-local/pgi/linux86-64/10.0/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpi_cxx -lstd -lC -lspooles -lscalapack -lblacs -lflapack -lfblas /local/linux-local/pgi/linux86-64/10.0/lib/pgi.ld -L/home/mecfd/common/sw/openmpi-1.4.4-pgi/lib 
-L/local/linux-local/pgi/linux86-64/10.0/libso -L/local/linux-local/pgi/linux86-64/10.0/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -lpthread -lnspgc -lpgc -lmpi_f90 -lmpi_f77 -lpgf90 -lpgf90_rpm1 -lpgf902 -lpgftnrtl -lpgf90rtl -lrt -lm -lmpi_cxx -lstd -lC -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -lpthread -lnspgc -lpgc -ldl ----------------------------------------- -- With Best Regards; Foad Quoting Barry Smith : > > Please run with the additional options -ksp_max_it 500 > -ksp_gmres_restart 500 -ksp_monitor_true_residual > -ksp_monitor_singular_value and send back all the output (that would > include the 500 residual norms as it tries to converge.) > > Barry > > On Apr 28, 2014, at 1:21 PM, Foad Hassaninejadfarahani > wrote: > >> Hello Again; >> >> I used -ksp_rtol 1.e-12 and it took way way longer to get the >> result for one iteration and it did not converge: >> >> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >> KSP Object: 8 MPI processes >> type: gmres >> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt >> Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-12, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 8 MPI processes >> type: asm >> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >> Additive Schwarz: restriction/interpolation type - RESTRICT >> Local solve is same for all blocks, in the following KSP and PC objects: >> KSP Object: (sub_) 1 MPI processes >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using NONE norm type for convergence test >> PC Object: (sub_) 1 MPI processes >> type: lu >> LU: out-of-place factorization >> tolerance for zero pivot 1e-12 >> matrix ordering: nd >> factor fill ratio given 5, needed 3.70575 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> package used to perform factorization: petsc >> total: nonzeros=877150, allocated nonzeros=877150 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=5630, cols=5630 >> total: nonzeros=236700, allocated nonzeros=236700 >> total number of mallocs used during MatSetValues calls =0 >> using I-node routines: found 1126 nodes, limit used is 5 >> linear system matrix = precond matrix: >> Matrix Object: 8 MPI processes >> type: mpiaij >> rows=41000, cols=41000 >> total: nonzeros=1817800, allocated nonzeros=2555700 >> total number of mallocs used during MatSetValues calls =121180 >> using I-node (on process 0) routines: found 1025 nodes, limit used is 5 >> >> >> Well, let me clear everything. I am solving the whole system (air >> and water) coupled at once. Although originally the system is not >> linear, but I linearized the equations, so I have some lagged >> terms. In addition the interface (between two phases) location is >> wrong at the beginning and should be corrected in each iteration >> after getting the solution. Therefore, I solve the whole domain, >> move the interface and again solve the whole domain. 
This should >> continue until the interface movement becomes from the order of >> 1E-12. >> >> My problem is after getting the converged solution. Restarting from >> the converged solution, if I use Superlu, it gives me back the >> converged solution and stops after one iteration. But, if I use any >> iterative solver, it does not give me back the converged solution >> and starts moving the interface cause the wrong solution ask for >> new interface location. This leads to oscillation for ever and for >> some cases divergence. >> >> -- >> With Best Regards; >> Foad >> >> >> Quoting Barry Smith : >> >>> >>> On Apr 28, 2014, at 12:59 PM, Barry Smith wrote: >>> >>>> >>>> First try a much tighter tolerance on the linear solver. Use >>>> -ksp_rtol 1.e-12 >>>> >>>> I don?t fully understand. Is the coupled system nonlinear? Are >>>> you solving a nonlinear system, how are you doing that since you >>>> seem to be only solving a single linear system? Does the linear >>>> system involve all unknowns in the fluid and air? >>>> >>>> Barry >>>> >>>> >>>> >>>> On Apr 28, 2014, at 11:19 AM, Foad Hassaninejadfarahani >>>> wrote: >>>> >>>>> Hello PETSc team; >>>>> >>>>> The PETSc setup in my code is working now. I have issues with >>>>> using the iterative solver instead of direct solver. >>>>> >>>>> I am solving a 2D, two-phase flow. Two fluids (air and water) >>>>> flow into a channel and there is interaction between two phases. >>>>> I am solving for the velocities in x and y directions, pressure >>>>> and two scalars. They are all coupled together. I am looking for >>>>> the steady-state solution. Since there is interface between the >>>>> phases which needs updating, there are many iterations to reach >>>>> the steady-state solution. "A" is a nine-banded non-symmetric >>>>> matrix and each node has five unknowns. I am storing the >>>>> non-zero coefficients and their locations in three separate >>>>> vectors. >>>>> >>>>> I started using the direct solver. Superlu works fine and gives >>>>> me good results compared to the previous works. However it is >>>>> not cheap and applicable for fine grids. But, the iterative >>>>> solver did not work and here is what I did: >>>>> >>>>> I got the converged solution by using Superlu. After that I >>>>> restarted from the converged solution and did one iteration >>>>> using -pc_type lu -pc_factor_mat_solver_package superlu_dist >>>>> -log_summary. Again, it gave me the same converged solution. 
>>>>> >>>>> After that I started from the converged solution once more and >>>>> this time I tried different combinations of iterative solvers >>>>> and preconditions like the followings: >>>>> -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type >>>>> lu ksp_monitor_true_residual -ksp_converged_reason -ksp_view >>>>> -log_summary >>>>> >>>>> and here is the report: >>>>> Linear solve converged due to CONVERGED_RTOL iterations 41 >>>>> KSP Object: 8 MPI processes >>>>> type: gmres >>>>> GMRES: restart=300, using Classical (unmodified) Gram-Schmidt >>>>> Orthogonalization with no iterative refinement >>>>> GMRES: happy breakdown tolerance 1e-30 >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using PRECONDITIONED norm type for convergence test >>>>> PC Object: 8 MPI processes >>>>> type: asm >>>>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>>>> Additive Schwarz: restriction/interpolation type - RESTRICT >>>>> Local solve is same for all blocks, in the following KSP and PC objects: >>>>> KSP Object: (sub_) 1 MPI processes >>>>> type: preonly >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object: (sub_) 1 MPI processes >>>>> type: lu >>>>> LU: out-of-place factorization >>>>> tolerance for zero pivot 1e-12 >>>>> matrix ordering: nd >>>>> factor fill ratio given 5, needed 3.70575 >>>>> Factored matrix follows: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> package used to perform factorization: petsc >>>>> total: nonzeros=877150, allocated nonzeros=877150 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> total: nonzeros=236700, allocated nonzeros=236700 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 8 MPI processes >>>>> type: mpiaij >>>>> rows=41000, cols=41000 >>>>> total: nonzeros=1817800, allocated nonzeros=2555700 >>>>> total number of mallocs used during MatSetValues calls =121180 >>>>> using I-node (on process 0) routines: found 1025 nodes, limit >>>>> used is 5 >>>>> >>>>> But, the results are far from the converged solution. 
For >>>>> example two reference nodes for the pressure are compared: >>>>> >>>>> Based on Superlu >>>>> Channel Inlet pressure (MIXTURE): 0.38890D-01 >>>>> Channel Inlet pressure (LIQUID): 0.38416D-01 >>>>> >>>>> Based on Gmres >>>>> Channel Inlet pressure (MIXTURE): -0.87214D+00 >>>>> Channel Inlet pressure (LIQUID): -0.87301D+00 >>>>> >>>>> >>>>> I also tried this: >>>>> -ksp_type gcr -pc_type asm -ksp_diagonal_scale >>>>> -ksp_diagonal_scale_fix -ksp_monitor_true_residual >>>>> -ksp_converged_reason -ksp_view -log_summary >>>>> >>>>> and here is the report: >>>>> 0 KSP unpreconditioned resid norm 2.248340888101e+05 true resid >>>>> norm 2.248340888101e+05 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 4.900010460179e+04 true resid >>>>> norm 4.900010460179e+04 ||r(i)||/||b|| 2.179389471637e-01 >>>>> 2 KSP unpreconditioned resid norm 4.267761572746e+04 true resid >>>>> norm 4.267761572746e+04 ||r(i)||/||b|| 1.898182608933e-01 >>>>> 3 KSP unpreconditioned resid norm 2.041242251471e+03 true resid >>>>> norm 2.041242251471e+03 ||r(i)||/||b|| 9.078882398457e-03 >>>>> 4 KSP unpreconditioned resid norm 1.852885420564e+03 true resid >>>>> norm 1.852885420564e+03 ||r(i)||/||b|| 8.241123178296e-03 >>>>> 5 KSP unpreconditioned resid norm 1.748965594395e+02 true resid >>>>> norm 1.748965594395e+02 ||r(i)||/||b|| 7.778916460804e-04 >>>>> 6 KSP unpreconditioned resid norm 5.664539353996e+01 true resid >>>>> norm 5.664539353996e+01 ||r(i)||/||b|| 2.519430831852e-04 >>>>> 7 KSP unpreconditioned resid norm 3.607535692806e+01 true resid >>>>> norm 3.607535692806e+01 ||r(i)||/||b|| 1.604532351788e-04 >>>>> 8 KSP unpreconditioned resid norm 1.041501303366e+01 true resid >>>>> norm 1.041501303366e+01 ||r(i)||/||b|| 4.632310468924e-05 >>>>> 9 KSP unpreconditioned resid norm 3.089920380322e+00 true resid >>>>> norm 3.089920380322e+00 ||r(i)||/||b|| 1.374311340720e-05 >>>>> 10 KSP unpreconditioned resid norm 1.456883209806e+00 true resid >>>>> norm 1.456883209806e+00 ||r(i)||/||b|| 6.479814593583e-06 >>>>> 11 KSP unpreconditioned resid norm 5.566902714391e-01 true resid >>>>> norm 5.566902714391e-01 ||r(i)||/||b|| 2.476004748147e-06 >>>>> 12 KSP unpreconditioned resid norm 2.403913756663e-01 true resid >>>>> norm 2.403913756663e-01 ||r(i)||/||b|| 1.069194520006e-06 >>>>> 13 KSP unpreconditioned resid norm 1.650435118839e-01 true resid >>>>> norm 1.650435118839e-01 ||r(i)||/||b|| 7.340680088032e-07 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 13 >>>>> KSP Object: 8 MPI processes >>>>> type: gcr >>>>> GCR: restart = 30 >>>>> GCR: restarts performed = 1 >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>>> right preconditioning >>>>> diagonally scaled system >>>>> using UNPRECONDITIONED norm type for convergence test >>>>> PC Object: 8 MPI processes >>>>> type: asm >>>>> Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 >>>>> Additive Schwarz: restriction/interpolation type - RESTRICT >>>>> Local solve is same for all blocks, in the following KSP and PC objects: >>>>> KSP Object: (sub_) 1 MPI processes >>>>> type: preonly >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object: (sub_) 1 MPI processes >>>>> type: ilu >>>>> ILU: out-of-place factorization >>>>> 0 levels of fill >>>>> tolerance for zero pivot 1e-12 >>>>> using 
diagonal shift to prevent zero pivot >>>>> matrix ordering: natural >>>>> factor fill ratio given 1, needed 1 >>>>> Factored matrix follows: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> package used to perform factorization: petsc >>>>> total: nonzeros=236700, allocated nonzeros=236700 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=5630, cols=5630 >>>>> total: nonzeros=236700, allocated nonzeros=236700 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node routines: found 1126 nodes, limit used is 5 >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 8 MPI processes >>>>> type: mpiaij >>>>> rows=41000, cols=41000 >>>>> total: nonzeros=1817800, allocated nonzeros=2555700 >>>>> total number of mallocs used during MatSetValues calls =121180 >>>>> using I-node (on process 0) routines: found 1025 nodes, limit >>>>> used is 5 >>>>> >>>>> Channel Inlet pressure (MIXTURE): -0.90733D+00 >>>>> Channel Inlet pressure (LIQUID): -0.10118D+01 >>>>> >>>>> >>>>> As you may see these are complete different results which are >>>>> not close to the converged solution. >>>>> >>>>> Since, I want to have fine grids I need to use iterative solver. >>>>> I wonder if I am missing something or using wrong >>>>> solver/precondition/option. I would appreciate if you could help >>>>> me (like always). >>>>> >>>>> -- >>>>> With Best Regards; >>>>> Foad >>>>> >>>>> >>>>> >>>>> >>>> >>> >>> >>> >> >> > > > From epscodes at gmail.com Mon Apr 28 15:23:26 2014 From: epscodes at gmail.com (Xiangdong) Date: Mon, 28 Apr 2014 16:23:26 -0400 Subject: [petsc-users] questions about the SNES Function Norm Message-ID: Hello everyone, When I run snes program, it outputs "SNES Function norm 1.23456789e+10". It seems that this norm is different from residue norm (even if solving F(x)=0) and also differ from norm of the Jacobian. What is the definition of this "SNES Function Norm"? Thank you. Best, Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... URL: From ztdepyahoo at 163.com Mon Apr 28 19:00:34 2014 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Tue, 29 Apr 2014 08:00:34 +0800 (CST) Subject: [petsc-users] How to set the value in Ax=b. In-Reply-To: References: Message-ID: <1222007.6ed.145aac87287.Coremail.ztdepyahoo@163.com> Dear group: I have assembled the coefficients of A and b. My computation alogrithm need to specify a component of x to be a constant value. for example x[5]=3.14; then how to resolve this problem. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Mon Apr 28 19:39:40 2014 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Tue, 29 Apr 2014 00:39:40 +0000 Subject: [petsc-users] How to set the value in Ax=b. In-Reply-To: <1222007.6ed.145aac87287.Coremail.ztdepyahoo@163.com> References: , <1222007.6ed.145aac87287.Coremail.ztdepyahoo@163.com> Message-ID: <4F4E0435-7AF3-4103-AB8E-B1BC182855B2@anl.gov> Zero out the sixth row, set A[5][5]= 1.0 and b[5]=3.14. On Apr 28, 2014, at 7:00 PM, "???" > wrote: Dear group: I have assembled the coefficients of A and b. My computation alogrithm need to specify a component of x to be a constant value. for example x[5]=3.14; then how to resolve this problem. 
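In code, the recipe suggested above (zero the row, put 1.0 on the diagonal, and set the corresponding right-hand side entry to the desired value) might look roughly like the following sketch; A and b are assumed to be the already assembled matrix and right-hand side, and the names are only illustrative:

   PetscInt       row  = 5;      /* index of the component to be fixed */
   PetscScalar    diag = 1.0;
   PetscErrorCode ierr;
   ierr = MatZeroRows(A, 1, &row, diag, NULL, NULL);CHKERRQ(ierr);   /* row 5 of A becomes the identity row */
   ierr = VecSetValue(b, row, 3.14, INSERT_VALUES);CHKERRQ(ierr);    /* so that x[5] = 3.14 after the solve */
   ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
   ierr = VecAssemblyEnd(b);CHKERRQ(ierr);

MatZeroRows() can also be given the solution and right-hand side vectors as its two optional final arguments, in which case it adjusts b itself; either way the matrix must already be assembled when it is called.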
Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 28 21:19:36 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 28 Apr 2014 21:19:36 -0500 Subject: [petsc-users] How to set the value in Ax=b. In-Reply-To: <4F4E0435-7AF3-4103-AB8E-B1BC182855B2@anl.gov> References: , <1222007.6ed.145aac87287.Coremail.ztdepyahoo@163.com> <4F4E0435-7AF3-4103-AB8E-B1BC182855B2@anl.gov> Message-ID: On Apr 28, 2014, at 7:39 PM, Abhyankar, Shrirang G. wrote: > Zero out the sixth row, set A[5][5]= 1.0 and b[5]=3.14. You can use, for example, MatZeroRows() to zero that row if you have already assembled the matrix. Barry > > On Apr 28, 2014, at 7:00 PM, "???" wrote: > >> Dear group: >> I have assembled the coefficients of A and b. My computation alogrithm need to specify a component of x to be a constant value. for example x[5]=3.14; >> then how to resolve this problem. >> Regards >> >> From bsmith at mcs.anl.gov Mon Apr 28 21:28:34 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 28 Apr 2014 21:28:34 -0500 Subject: [petsc-users] questions about the SNES Function Norm In-Reply-To: References: Message-ID: <039D1727-3A9C-4785-914C-0AFE27EA68FF@mcs.anl.gov> On Apr 28, 2014, at 3:23 PM, Xiangdong wrote: > Hello everyone, > > When I run snes program, ^^^^ what SNES program?? > it outputs "SNES Function norm 1.23456789e+10". It seems that this norm is different from residue norm (even if solving F(x)=0) Please send the full output where you see this. > and also differ from norm of the Jacobian. What is the definition of this "SNES Function Norm?? The SNES Function Norm as printed by PETSc is suppose to the 2-norm of F(x) - b (where b is usually zero) and this is also the same thing as the ?residue norm? Barry > > Thank you. > > Best, > Xiangdong From anush at bu.edu Tue Apr 29 00:07:48 2014 From: anush at bu.edu (Anush Krishnan) Date: Tue, 29 Apr 2014 01:07:48 -0400 Subject: [petsc-users] Assembling a matrix for a DMComposite vector Message-ID: Hi all, I created a DMComposite using two DMDAs (representing the x and y components of velocity in a 2-D staggered cartesian grid used for CFD simulations). DMCompositeGetISLocalToGlobalMappings gives me the global indices of the elements and ghost cells, which I can use to set up a matrix that operates on the vector created with the DMComposite. I obtain the correct global indices for all the interior points. But when I look at the global indices of the ghost cells, the ones outside the domain from the x-component DM return -1, but the ones outside the domain from the y-component DM return a positive value (which seems to be the largest global index of the interior points on Process 0). My question is: Why do they not return -1? Wouldn't that make matrix assembly easier (since MatSetValues ignores negative indices)? I have attached a code which demonstrates the above. On a related note: Is it possible to use MatSetValuesStencil for assembling a block diagonal matrix that operates on a vector created using the above DMComposite? Thanks, Anush -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex15.c Type: text/x-csrc Size: 4073 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: makefile Type: application/octet-stream Size: 2287 bytes Desc: not available URL: From popov at uni-mainz.de Tue Apr 29 04:14:12 2014 From: popov at uni-mainz.de (Anton Popov) Date: Tue, 29 Apr 2014 11:14:12 +0200 Subject: [petsc-users] Assembling a matrix for a DMComposite vector In-Reply-To: References: Message-ID: <535F6D64.5040700@uni-mainz.de> On 4/29/14 7:07 AM, Anush Krishnan wrote: > Hi all, > > I created a DMComposite using two DMDAs (representing the x and y > components of velocity in a 2-D staggered cartesian grid used for CFD > simulations). DMCompositeGetISLocalToGlobalMappings gives me the > global indices of the elements and ghost cells, which I can use to set > up a matrix that operates on the vector created with the DMComposite. > I obtain the correct global indices for all the interior points. But > when I look at the global indices of the ghost cells, the ones outside > the domain from the x-component DM return -1, but the ones outside the > domain from the y-component DM return a positive value (which seems to > be the largest global index of the interior points on Process 0). My > question is: Why do they not return -1? Wouldn't that make matrix > assembly easier (since MatSetValues ignores negative indices)? I have > attached a code which demonstrates the above. > > On a related note: Is it possible to use MatSetValuesStencil for > assembling a block diagonal matrix that operates on a vector created > using the above DMComposite? > > Thanks, > Anush You can do the whole thing much easier (to my opinion). Since you created two DMDA anyway, just do: - find first index on every processor using MPI_Scan - create two global vectors (no ghosts) - put proper global indicies to global vectors - create two local vectors (with ghosts) and set ALL entries to -1 (to have what you need in boundary ghosts) - call global-to-local scatter Done! The advantage is that you can access global indices (including ghosts) in every block using i-j-k indexing scheme. I personally find this way quite easy to implement with PETSc Anton -------------- next part -------------- An HTML attachment was scrubbed... URL: From anush at bu.edu Tue Apr 29 08:43:35 2014 From: anush at bu.edu (Anush Krishnan) Date: Tue, 29 Apr 2014 09:43:35 -0400 Subject: [petsc-users] Assembling a matrix for a DMComposite vector In-Reply-To: <535F6D64.5040700@uni-mainz.de> References: <535F6D64.5040700@uni-mainz.de> Message-ID: Hi Anton, You can do the whole thing much easier (to my opinion). > Since you created two DMDA anyway, just do: > > - find first index on every processor using MPI_Scan > - create two global vectors (no ghosts) > - put proper global indicies to global vectors > - create two local vectors (with ghosts) and set ALL entries to -1 (to > have what you need in boundary ghosts) > - call global-to-local scatter > > Done! > Thanks for the suggestion. That does seem easier to implement. I would still like the understand the behaviour of DMCompositeGetISLocalToGlobalMappings though. Anush > > The advantage is that you can access global indices (including ghosts) in > every block using i-j-k indexing scheme. > I personally find this way quite easy to implement with PETSc > > Anton > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnw20 at cam.ac.uk Tue Apr 29 08:57:10 2014 From: gnw20 at cam.ac.uk (Garth N. Wells) Date: Tue, 29 Apr 2014 15:57:10 +0200 Subject: [petsc-users] VecGhostGetLocalForm overhead? 
Message-ID: <706A12AC-68E2-4C76-9A24-4D4242C11193@cam.ac.uk> I'm using VecGhostGetLocalForm to test whether or not a vector has ghost values. Is there any overhead associated with calling VecGhostGetLocalForm that I should be concerned about? Garth From knepley at gmail.com Tue Apr 29 09:03:07 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Apr 2014 09:03:07 -0500 Subject: [petsc-users] VecGhostGetLocalForm overhead? In-Reply-To: <706A12AC-68E2-4C76-9A24-4D4242C11193@cam.ac.uk> References: <706A12AC-68E2-4C76-9A24-4D4242C11193@cam.ac.uk> Message-ID: On Tue, Apr 29, 2014 at 8:57 AM, Garth N. Wells wrote: > I'm using VecGhostGetLocalForm to test whether or not a vector has ghost > values. Is there any overhead associated with calling VecGhostGetLocalForm > that I should be concerned about? No, it just checks vector state (no communication or expensive operations). Thanks, Matt > > Garth -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From norihiro.w at gmail.com Tue Apr 29 09:19:41 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Tue, 29 Apr 2014 16:19:41 +0200 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: References: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> Message-ID: Hi Barry, Is it possible that -snes_mf_operator makes convergence of linear solves slower if unknowns are poorly scaled for multiphysics problems? I gave up on checking the Jacobian for the large problem because it takes too long. Instead I tested it with several smaller problems and noticed that the scaling of my unknowns makes a difference in the FD Jacobian. My unknowns are of two kinds: pressure (1e7) and temperature (1e3). I scaled pressure by 1e-5 and got the following different results from -snes_type test without scaling Norm of matrix ratio 3.81371e-05 difference 6.42349e+06 (user-defined state) with scaling Norm of matrix ratio 8.69182e-09 difference 1463.98 (user-defined state) which may suggest that the differencing parameter h is not properly set if the unknowns are poorly scaled.
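As an aside on the differencing parameter h mentioned above: the matrix-free operator that -snes_mf_operator builds internally chooses h from an estimate of the relative error in the function evaluations, and that estimate can be set explicitly if the default does not match the scaling of the problem. This is not something proposed in this thread, only a rough sketch of the equivalent explicit setup; snes is the application's SNES object, and B and FormPrecJacobian stand for whatever preconditioning matrix and assembly routine the application already has:

   Mat            J;    /* matrix-free Jacobian operator */
   PetscErrorCode ierr;
   ierr = MatCreateSNESMF(snes, &J);CHKERRQ(ierr);
   ierr = MatMFFDSetFunctionError(J, 1.0e-7);CHKERRQ(ierr);  /* illustrative value for the relative error in F */
   ierr = SNESSetJacobian(snes, J, B, FormPrecJacobian, NULL);CHKERRQ(ierr);

Scaling the unknowns, as tested above, attacks the same issue from the other side and is usually the cleaner fix.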
I also tested the scaling for the large problem with keeping fluid properties constant and got the following different convergence behaviours: without -snes_mf_operator 0 SNES Function norm 4.425457683773e+04 0 KSP Residual norm 44254.6 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 2 1 KSP Residual norm 0.000168321 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 2 2 KSP Residual norm 8.18977e-06 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 4.75882e-07 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 4 KSP Residual norm 4.06286e-08 Linear solve converged due to CONVERGED_RTOL iterations 4 1 SNES Function norm 2.229156139237e+05 with -snes_mf_operator 0 SNES Function norm 4.425457683883e+04 0 KSP Residual norm 44254.6 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 2 1 KSP Residual norm 5255.66 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 2 KSP Residual norm 1646.58 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 3 KSP Residual norm 1518.05 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 ... 42 KSP Residual norm 0.656962 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 43 KSP Residual norm 0.462202 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 with -snes_mf_operator and scaling of pressure by 1e-5 0 SNES Function norm 4.425457683773e+04 0 KSP Residual norm 44254.6 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 2 1 KSP Residual norm 1883.94 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 2 KSP Residual norm 893.88 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 ... 42 KSP Residual norm 6.66081e-08 Linear solve converged due to CONVERGED_ITS iterations 1 Linear solve converged due to CONVERGED_RTOL iterations 1 43 KSP Residual norm 2.17062e-08 Linear solve converged due to CONVERGED_RTOL iterations 43 1 SNES Function norm 2.200867439822e+05 On Mon, Apr 28, 2014 at 5:59 PM, Barry Smith wrote: > > It will take a very long time > > On Apr 28, 2014, at 9:14 AM, Norihiro Watanabe > wrote: > > > I cannot surely say my Jacobian for this particular problem is correct, > as I have not checked it. For a smaller problem, I've already checked its > correctness using -snes_type test or -snes_compare_explicit (but linear > solve and nonlinear solve with FD one need a few more iterations than with > my Jacobian). To make it sure, now I started -snes_type test for the > problem and will update you once it finished. By the way, I'm waiting the > calculation for more than three hours now. Is it usual for a large problem > (>1e6 dof) or is there something wrong? > > > > > > > > > > > > On Mon, Apr 28, 2014 at 6:34 AM, Barry Smith wrote: > > > > I have run your code. 
I changed to use -snes_type newtonls and also > -snes_mf_operator there is something wrong with your Jacobian: > > > > Without -snes_mf_operator > > 0 SNES Function norm 1.821611413735e+03 > > 0 KSP Residual norm 1821.61 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 1 KSP Residual norm 0.000167024 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 2 KSP Residual norm 7.66595e-06 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 3 KSP Residual norm 4.4581e-07 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 4 KSP Residual norm 3.77537e-08 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 5 KSP Residual norm 2.20453e-09 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 6 KSP Residual norm 1.711e-10 > > Linear solve converged due to CONVERGED_RTOL iterations 6 > > > > with -snes_mf_operator > > > > 0 SNES Function norm 1.821611413735e+03 > > 0 KSP Residual norm 1821.61 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 1 KSP Residual norm 1796.39 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 2 KSP Residual norm 1786.2 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 3 KSP Residual norm 1741.11 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 4 KSP Residual norm 1733.92 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 5 KSP Residual norm 1726.57 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 6 KSP Residual norm 1725.35 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 7 KSP Residual norm 1723.89 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 8 KSP Residual norm 1715.41 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 9 KSP Residual norm 1713.72 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 10 KSP Residual norm 1702.84 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > > ? > > > > This means your Jacobian is wrong. Your first order of business is to > fix your Jacobian. I noticed in previous emails your discussion with Jed > about switching to MatGetLocalSubMatrix() and using -snes_type test YOU > NEED TO DO THIS. You will get no where with an incorrect Jacobian. You need > to fix your Jacobian before you do anything else! No amount of other > options or methods will help you with a wrong Jacobian! Once you have a > correct Jacobian if you still have convergence problems let us know and we > can make further suggestions. 
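For reference, the Jacobian check referred to here (-snes_type test) replaces the nonlinear solve with a comparison between the hand-coded Jacobian and a finite-difference one, so it is normally run on a small version of the problem, e.g.

   mpiexec -n 1 ./myapp -snes_type test

where the executable name is a placeholder. Another quick cross-check on a small case is to run with -snes_fd, which discards the hand-coded Jacobian entirely and builds a dense finite-difference one; if that converges where the hand-coded version does not, the Jacobian is the culprit. Both checks are only practical for small problem sizes.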
> > > > Barry > > > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe > wrote: > > > > > Hi, > > > > > > In my simulation, nonlinear solve with the trust regtion method got > stagnent after linear solve (see output below). Is it possible that the > method goes to inifite loop? Is there any parameter to avoid this situation? > > > > > > 0 SNES Function norm 1.828728087153e+03 > > > 0 KSP Residual norm 91.2735 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > > 1 KSP Residual norm 3.42223 > > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > > > > Thank you in advance, > > > Nori > > > > > > > > > > -- > > Norihiro Watanabe > > -- Norihiro Watanabe -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 29 10:02:45 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 29 Apr 2014 10:02:45 -0500 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: References: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> Message-ID: <16CF9360-425E-40D4-A56A-F5A04A272DE5@mcs.anl.gov> On Apr 29, 2014, at 9:19 AM, Norihiro Watanabe wrote: > Hi Barry, > > Is it possible that -snes_mf_operator makes convergence of linear solves slower if unknowns are poorly scaled for multiphysics problems? I gave up to check Jacobian for the large problem because it takes too long time. Instead I tested it with several different small size problems and noticed scaling of my unknowns makes a difference in FD Jacobian. My unknowns are two kinds: pressure (1e7) and temperature (1e3). This is not really a huge difference in scaling. In fact it seems pretty minor to me. > I scaled pressure by 1e-5 and got the following different results from -snes_type test Having a good scaling is always a good idea. > > without scaling > Norm of matrix ratio 3.81371e-05 difference 6.42349e+06 (user-defined state) > > with scaling > Norm of matrix ratio 8.69182e-09 difference 1463.98 (user-defined state) > > which may suggest that a differentiate parameter h is not properly set if unknowns are poorly scaled. I also tested the scaling for the large problem with keeping fluid properties constant and got the following different convergence behaviours: > > without -snes_mf_operator > 0 SNES Function norm 4.425457683773e+04 > 0 KSP Residual norm 44254.6 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 1 KSP Residual norm 0.000168321 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 2 KSP Residual norm 8.18977e-06 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 3 KSP Residual norm 4.75882e-07 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 4 KSP Residual norm 4.06286e-08 > Linear solve converged due to CONVERGED_RTOL iterations 4 > 1 SNES Function norm 2.229156139237e+05 I don?t like this. Your SNES function norm has increased. I would avoid using a line search of basic and want to see some real decrease in the residual norm. Have you thought about doing some grid sequencing or some other continuation method to solve the nonlinear system? 
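On the grid sequencing idea: it is supported directly when the SNES has a DM attached and the residual/Jacobian callbacks are registered through that DM, so that the problem can be rebuilt on refined grids; the coarse solution is then interpolated and used as the initial guess on the next finer grid. A minimal sketch (snes is the application's SNES and da is assumed to be a DMDA describing the coarsest grid; the names are illustrative):

   PetscErrorCode ierr;
   ierr = SNESSetDM(snes, da);CHKERRQ(ierr);
   ierr = SNESSetGridSequence(snes, 2);CHKERRQ(ierr);   /* refine twice, i.e. solve on three grids */
   ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);       /* the same thing via -snes_grid_sequence 2 */

For a code that is not built on a DM, the analogous approach is hand-rolled continuation in a physical or mesh parameter.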
You could also try running with quad precision (./configure ?with-precision=__float128 and gnu compilers) In my experience most failures of Newton do not come from ?difficult physics? but rather from some error in the model (making it crazy), or the function or Jacobian evaluation. Barry > > with -snes_mf_operator > 0 SNES Function norm 4.425457683883e+04 > 0 KSP Residual norm 44254.6 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 1 KSP Residual norm 5255.66 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 2 KSP Residual norm 1646.58 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 3 KSP Residual norm 1518.05 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > ... > 42 KSP Residual norm 0.656962 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 43 KSP Residual norm 0.462202 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > > with -snes_mf_operator and scaling of pressure by 1e-5 > 0 SNES Function norm 4.425457683773e+04 > 0 KSP Residual norm 44254.6 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 1 KSP Residual norm 1883.94 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 2 KSP Residual norm 893.88 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > ... > 42 KSP Residual norm 6.66081e-08 > Linear solve converged due to CONVERGED_ITS iterations 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 43 KSP Residual norm 2.17062e-08 > Linear solve converged due to CONVERGED_RTOL iterations 43 > 1 SNES Function norm 2.200867439822e+05 > > > > > > > > > On Mon, Apr 28, 2014 at 5:59 PM, Barry Smith wrote: > > It will take a very long time > > On Apr 28, 2014, at 9:14 AM, Norihiro Watanabe wrote: > > > I cannot surely say my Jacobian for this particular problem is correct, as I have not checked it. For a smaller problem, I've already checked its correctness using -snes_type test or -snes_compare_explicit (but linear solve and nonlinear solve with FD one need a few more iterations than with my Jacobian). To make it sure, now I started -snes_type test for the problem and will update you once it finished. By the way, I'm waiting the calculation for more than three hours now. Is it usual for a large problem (>1e6 dof) or is there something wrong? > > > > > > > > > > > > On Mon, Apr 28, 2014 at 6:34 AM, Barry Smith wrote: > > > > I have run your code. 
I changed to use -snes_type newtonls and also -snes_mf_operator there is something wrong with your Jacobian: > > > > Without -snes_mf_operator > > 0 SNES Function norm 1.821611413735e+03 > > 0 KSP Residual norm 1821.61 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 1 KSP Residual norm 0.000167024 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 2 KSP Residual norm 7.66595e-06 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 3 KSP Residual norm 4.4581e-07 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 4 KSP Residual norm 3.77537e-08 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 5 KSP Residual norm 2.20453e-09 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 6 KSP Residual norm 1.711e-10 > > Linear solve converged due to CONVERGED_RTOL iterations 6 > > > > with -snes_mf_operator > > > > 0 SNES Function norm 1.821611413735e+03 > > 0 KSP Residual norm 1821.61 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 1 KSP Residual norm 1796.39 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 2 KSP Residual norm 1786.2 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 3 KSP Residual norm 1741.11 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 4 KSP Residual norm 1733.92 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 5 KSP Residual norm 1726.57 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 6 KSP Residual norm 1725.35 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 7 KSP Residual norm 1723.89 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 8 KSP Residual norm 1715.41 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 9 KSP Residual norm 1713.72 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 10 KSP Residual norm 1702.84 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > > ? > > > > This means your Jacobian is wrong. Your first order of business is to fix your Jacobian. I noticed in previous emails your discussion with Jed about switching to MatGetLocalSubMatrix() and using -snes_type test YOU NEED TO DO THIS. You will get no where with an incorrect Jacobian. You need to fix your Jacobian before you do anything else! No amount of other options or methods will help you with a wrong Jacobian! Once you have a correct Jacobian if you still have convergence problems let us know and we can make further suggestions. 
> > > > Barry > > > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe wrote: > > > > > Hi, > > > > > > In my simulation, nonlinear solve with the trust regtion method got stagnent after linear solve (see output below). Is it possible that the method goes to inifite loop? Is there any parameter to avoid this situation? > > > > > > 0 SNES Function norm 1.828728087153e+03 > > > 0 KSP Residual norm 91.2735 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > > 1 KSP Residual norm 3.42223 > > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > > > > Thank you in advance, > > > Nori > > > > > > > > > > -- > > Norihiro Watanabe > > > > > -- > Norihiro Watanabe From norihiro.w at gmail.com Tue Apr 29 10:45:13 2014 From: norihiro.w at gmail.com (Norihiro Watanabe) Date: Tue, 29 Apr 2014 17:45:13 +0200 Subject: [petsc-users] infinite loop with NEWTONTR? In-Reply-To: <16CF9360-425E-40D4-A56A-F5A04A272DE5@mcs.anl.gov> References: <6B7E61ED-503A-44AE-9DCF-E74F4A8A62D5@mcs.anl.gov> <16CF9360-425E-40D4-A56A-F5A04A272DE5@mcs.anl.gov> Message-ID: Barry, thank you for the tips. Besides the trust region method, I have also tested line search methods. -snes_linesearch_type cp worked robustly. Other line search types didn't converge, except for basic. I'll spend some more time to check if my Jacobian is wrong or if -snes_mf_operator has some problem for my problem. On Tue, Apr 29, 2014 at 5:02 PM, Barry Smith wrote: > > On Apr 29, 2014, at 9:19 AM, Norihiro Watanabe > wrote: > > > Hi Barry, > > > > Is it possible that -snes_mf_operator makes convergence of linear solves > slower if unknowns are poorly scaled for multiphysics problems? I gave up > to check Jacobian for the large problem because it takes too long time. > Instead I tested it with several different small size problems and noticed > scaling of my unknowns makes a difference in FD Jacobian. My unknowns are > two kinds: pressure (1e7) and temperature (1e3). > > This is not really a huge difference in scaling. In fact it seems > pretty minor to me. > > > I scaled pressure by 1e-5 and got the following different results from > -snes_type test > > Having a good scaling is always a good idea. > > > > without scaling > > Norm of matrix ratio 3.81371e-05 difference 6.42349e+06 (user-defined > state) > > > > with scaling > > Norm of matrix ratio 8.69182e-09 difference 1463.98 (user-defined state) > > > > which may suggest that a differentiate parameter h is not properly set > if unknowns are poorly scaled. 
I also tested the scaling for the large > problem with keeping fluid properties constant and got the following > different convergence behaviours: > > > > without -snes_mf_operator > > 0 SNES Function norm 4.425457683773e+04 > > 0 KSP Residual norm 44254.6 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 2 > > 1 KSP Residual norm 0.000168321 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 2 > > 2 KSP Residual norm 8.18977e-06 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 2 > > 3 KSP Residual norm 4.75882e-07 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 4 KSP Residual norm 4.06286e-08 > > Linear solve converged due to CONVERGED_RTOL iterations 4 > > 1 SNES Function norm 2.229156139237e+05 > > I don?t like this. Your SNES function norm has increased. I would avoid > using a line search of basic and want to see some real decrease in the > residual norm. > > Have you thought about doing some grid sequencing or some other > continuation method to solve the nonlinear system? > > You could also try running with quad precision (./configure > ?with-precision=__float128 and gnu compilers) > > In my experience most failures of Newton do not come from ?difficult > physics? but rather from some error in the model (making it crazy), or the > function or Jacobian evaluation. > > Barry > > > > > with -snes_mf_operator > > 0 SNES Function norm 4.425457683883e+04 > > 0 KSP Residual norm 44254.6 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 2 > > 1 KSP Residual norm 5255.66 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 2 KSP Residual norm 1646.58 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 3 KSP Residual norm 1518.05 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > ... > > 42 KSP Residual norm 0.656962 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 43 KSP Residual norm 0.462202 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > > with -snes_mf_operator and scaling of pressure by 1e-5 > > 0 SNES Function norm 4.425457683773e+04 > > 0 KSP Residual norm 44254.6 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 2 > > 1 KSP Residual norm 1883.94 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 2 KSP Residual norm 893.88 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > ... 
> > 42 KSP Residual norm 6.66081e-08 > > Linear solve converged due to CONVERGED_ITS iterations 1 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > 43 KSP Residual norm 2.17062e-08 > > Linear solve converged due to CONVERGED_RTOL iterations 43 > > 1 SNES Function norm 2.200867439822e+05 > > > > > > > > > > > > > > > > > > On Mon, Apr 28, 2014 at 5:59 PM, Barry Smith wrote: > > > > It will take a very long time > > > > On Apr 28, 2014, at 9:14 AM, Norihiro Watanabe > wrote: > > > > > I cannot surely say my Jacobian for this particular problem is > correct, as I have not checked it. For a smaller problem, I've already > checked its correctness using -snes_type test or -snes_compare_explicit > (but linear solve and nonlinear solve with FD one need a few more > iterations than with my Jacobian). To make it sure, now I started > -snes_type test for the problem and will update you once it finished. By > the way, I'm waiting the calculation for more than three hours now. Is it > usual for a large problem (>1e6 dof) or is there something wrong? > > > > > > > > > > > > > > > > > > On Mon, Apr 28, 2014 at 6:34 AM, Barry Smith > wrote: > > > > > > I have run your code. I changed to use -snes_type newtonls and also > -snes_mf_operator there is something wrong with your Jacobian: > > > > > > Without -snes_mf_operator > > > 0 SNES Function norm 1.821611413735e+03 > > > 0 KSP Residual norm 1821.61 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 1 KSP Residual norm 0.000167024 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 2 KSP Residual norm 7.66595e-06 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 3 KSP Residual norm 4.4581e-07 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 4 KSP Residual norm 3.77537e-08 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 5 KSP Residual norm 2.20453e-09 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 6 KSP Residual norm 1.711e-10 > > > Linear solve converged due to CONVERGED_RTOL iterations 6 > > > > > > with -snes_mf_operator > > > > > > 0 SNES Function norm 1.821611413735e+03 > > > 0 KSP Residual norm 1821.61 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 1 KSP Residual norm 1796.39 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 2 KSP Residual norm 1786.2 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 3 KSP Residual norm 1741.11 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 4 KSP Residual norm 1733.92 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 5 KSP Residual norm 1726.57 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 6 KSP Residual norm 
1725.35 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 7 KSP Residual norm 1723.89 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 8 KSP Residual norm 1715.41 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 9 KSP Residual norm 1713.72 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > 10 KSP Residual norm 1702.84 > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > > > > ? > > > > > > This means your Jacobian is wrong. Your first order of business is > to fix your Jacobian. I noticed in previous emails your discussion with Jed > about switching to MatGetLocalSubMatrix() and using -snes_type test YOU > NEED TO DO THIS. You will get no where with an incorrect Jacobian. You need > to fix your Jacobian before you do anything else! No amount of other > options or methods will help you with a wrong Jacobian! Once you have a > correct Jacobian if you still have convergence problems let us know and we > can make further suggestions. > > > > > > Barry > > > > > > On Apr 25, 2014, at 7:31 AM, Norihiro Watanabe > wrote: > > > > > > > Hi, > > > > > > > > In my simulation, nonlinear solve with the trust regtion method got > stagnent after linear solve (see output below). Is it possible that the > method goes to inifite loop? Is there any parameter to avoid this situation? > > > > > > > > 0 SNES Function norm 1.828728087153e+03 > > > > 0 KSP Residual norm 91.2735 > > > > Linear solve converged due to CONVERGED_ITS iterations 1 > > > > Linear solve converged due to CONVERGED_RTOL iterations 3 > > > > 1 KSP Residual norm 3.42223 > > > > Linear solve converged due to CONVERGED_STEP_LENGTH iterations 1 > > > > > > > > > > > > Thank you in advance, > > > > Nori > > > > > > > > > > > > > > > -- > > > Norihiro Watanabe > > > > > > > > > > -- > > Norihiro Watanabe > > -- Norihiro Watanabe -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Tue Apr 29 14:09:00 2014 From: epscodes at gmail.com (Xiangdong) Date: Tue, 29 Apr 2014 15:09:00 -0400 Subject: [petsc-users] questions about the SNES Function Norm In-Reply-To: <039D1727-3A9C-4785-914C-0AFE27EA68FF@mcs.anl.gov> References: <039D1727-3A9C-4785-914C-0AFE27EA68FF@mcs.anl.gov> Message-ID: It turns out to a be a bug in my FormFunctionLocal(DMDALocalInfo *info,PetscScalar **x,PetscScalar **f,AppCtx *user). I forgot to initialize the array f. Zero the array f solved the problem and gave consistent result. Just curious, why does not petsc initialize the array f to zero by default inside petsc when passing the f array to FormFunctionLocal? I have another quick question about the array x passed to FormFunctionLocal. If I want to know the which x is evaluated, how can I output x in a vector format? Currently, I created a global vector vecx and a local vector vecx_local, get the array of vecx_local_array, copy the x to vecx_local_array, scatter to global vecx and output vecx. Is there a quick way to restore the array x to a vector and output? Thank you. Best, Xiangdong On Mon, Apr 28, 2014 at 10:28 PM, Barry Smith wrote: > > On Apr 28, 2014, at 3:23 PM, Xiangdong wrote: > > > Hello everyone, > > > > When I run snes program, > > ^^^^ what SNES program?? 
> > > it outputs "SNES Function norm 1.23456789e+10". It seems that this norm > is different from residue norm (even if solving F(x)=0) > > Please send the full output where you see this. > > > and also differ from norm of the Jacobian. What is the definition of > this "SNES Function Norm?? > > The SNES Function Norm as printed by PETSc is suppose to the 2-norm of > F(x) - b (where b is usually zero) and this is also the same thing as the > ?residue norm? > > Barry > > > > > Thank you. > > > > Best, > > Xiangdong > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 29 14:12:33 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Apr 2014 14:12:33 -0500 Subject: [petsc-users] questions about the SNES Function Norm In-Reply-To: References: <039D1727-3A9C-4785-914C-0AFE27EA68FF@mcs.anl.gov> Message-ID: On Tue, Apr 29, 2014 at 2:09 PM, Xiangdong wrote: > It turns out to a be a bug in my FormFunctionLocal(DMDALocalInfo > *info,PetscScalar **x,PetscScalar **f,AppCtx *user). I forgot to initialize > the array f. Zero the array f solved the problem and gave consistent result. > > Just curious, why does not petsc initialize the array f to zero by default > inside petsc when passing the f array to FormFunctionLocal? > If you directly set entires, you might not want us to spend the time writing those zeros. > I have another quick question about the array x passed to > FormFunctionLocal. If I want to know the which x is evaluated, how can I > output x in a vector format? Currently, I created a global vector vecx and > a local vector vecx_local, get the array of vecx_local_array, copy the x to > vecx_local_array, scatter to global vecx and output vecx. Is there a quick > way to restore the array x to a vector and output? > I cannot think of a better way than that. Matt > Thank you. > > Best, > Xiangdong > > > > On Mon, Apr 28, 2014 at 10:28 PM, Barry Smith wrote: > >> >> On Apr 28, 2014, at 3:23 PM, Xiangdong wrote: >> >> > Hello everyone, >> > >> > When I run snes program, >> >> ^^^^ what SNES program?? >> >> > it outputs "SNES Function norm 1.23456789e+10". It seems that this norm >> is different from residue norm (even if solving F(x)=0) >> >> Please send the full output where you see this. >> >> > and also differ from norm of the Jacobian. What is the definition of >> this "SNES Function Norm?? >> >> The SNES Function Norm as printed by PETSc is suppose to the 2-norm of >> F(x) - b (where b is usually zero) and this is also the same thing as the >> ?residue norm? >> >> Barry >> >> > >> > Thank you. >> > >> > Best, >> > Xiangdong >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From song.gao2 at mail.mcgill.ca Tue Apr 29 14:57:35 2014 From: song.gao2 at mail.mcgill.ca (Song Gao) Date: Tue, 29 Apr 2014 15:57:35 -0400 Subject: [petsc-users] Is it possible to use -snes_ksp_ew in the framework of KSP? In-Reply-To: References: Message-ID: Dear Petsc users, We are working on a matrix-free solver for NS equations. We first developed the solver in the SNES framework. But because we want to take control of the Newton iterations and reuse some of our existing code, we stop using SNES and begin developing the matrix-free solver in KSP framework. But we have an issue when using MFFD and VecSetValuesBlocked together. 
My code looks like call MatMFFDSetFunction(pet_mat_mf, flowsol_ng_mf, ctx, ierpetsc) call MatMFFDSetBase(pet_mat_mf, pet_solu_snes, PETSC_NULL_OBJECT, ierpetsc) The vec pet_solu_snes are set to blocksize 5. flowsol_ng_mf looks like: subroutine flowsol_ng_mf ( ctxx, pet_solu_snesa, pet_rhs_snesa, ierrpet ) ............... call VecGetBlockSize(pet_rhs_snesa,i,ierpetsc) write(*,*) "pet_rhs_snesa",i ............. call VecSetValuesBlocked ( pet_rhs_snesa, ndperl, pirow2, rhsloc, ADD_VALUES, ierpetsc ) .......... end I'm worried about the block size of pet_rhs_snesa is not properly initialized. So I checked the blocksize in the function evaluation. The output with single CPU looks like: 0 KSP preconditioned resid norm 7.818306841541e+00 true resid norm 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 pet_rhs_snesa 1 pet_rhs_snesa 1 pet_rhs_snesa 5 1 KSP preconditioned resid norm 1.739193723080e+00 true resid norm 2.564460053330e-03 ||r(i)||/||b|| 2.665958848545e-01 pet_rhs_snesa 1 pet_rhs_snesa 5 2 KSP preconditioned resid norm 3.816590845074e-01 true resid norm 8.222646506071e-04 ||r(i)||/||b|| 8.548090730777e-02 pet_rhs_snesa 1 pet_rhs_snesa 5 I think the block size is not initialized correctly for the first call, which leads to the misuse of VecSetValuesBlocked. One way to circumvent is to create a temporary vec with correct blocksize, assemble it with VecSetValuesBlocked and then copy back. But Is there other solutions? Thank you very much. Song On Sat, Mar 8, 2014 at 8:51 PM, Matthew Knepley wrote: > On Sat, Mar 8, 2014 at 7:45 PM, Song Gao wrote: > >> Thank you all. >> >> Most of the difficulties come from the current structure of the program. >> Its size and "age" make any radical modification a challenge. The large >> uses of global variables and deprecated "goto" statements call for a >> revision, which however is unlikely to occur, against our better >> judgement... >> >> That being said, the current tools provided byPETSc are sufficient, >> however not necessarily convenient for our purposes. As a wishful thinking >> we can say that the implementation of PETSc SNES features into existing >> codes would be easier if the outer Newton-like loop could be managed by the >> original code, out of the SNES context. On the other hand, we realize that >> this might be in contrast with the requirements of PETSc itself. >> > > I guess I wanted to know specifically why you cannot just dump all of the > other things in the legacy code into SNESSetUpdate() > or SNESSetConvergenceTest(). Was there something about the way we > structured the hooks that made this impossible? > > Thanks, > > Matt > > >> The application of the Eistenstat-Walker method together with a >> Matrix-Free approach are of key importance. Thus, as kindly suggested by >> Jed Brown, implementing our own version of EW scheme might turn out to be >> the only way. We would be happy to provide more details if you think that >> they might be helpful for the future developments of PETSc. >> >> >> On Fri, Mar 7, 2014 at 2:39 PM, Matthew Knepley wrote: >> >>> On Fri, Mar 7, 2014 at 1:21 PM, Song Gao wrote: >>> >>>> Hello, >>>> >>>> We are working on a legacy codes which solves the NS equations. The >>>> codes take control of newton iterations itself and use KSP to solve the >>>> linear system. >>>> >>>> We modified the codes, used SNES to control the newton iteration and >>>> changed the code to matrix free fashion. 
But the legacy codes did a lot of >>>> other things between two newton iterations (such as output solution, update >>>> variables....). I know we could use linesearchpostcheck but it is >>>> difficulty to do that correctly. Therefore, we decide to go back to the KSP >>>> framework but still use matrix free. >>>> >>> >>> What makes it difficult? >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> When using SNES, we always use the runtime option -snes_ksp_ew, we >>>> observe that for some test cases, the residual stalls without -snes_ksp_ew, >>>> but converges with-snes_ksp_ew. So I'm thinking if it is possible to >>>> use -snes_ksp_ew in KSP? Thanks in advance. >>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 29 15:06:57 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 29 Apr 2014 14:06:57 -0600 Subject: [petsc-users] Is it possible to use -snes_ksp_ew in the framework of KSP? In-Reply-To: References: Message-ID: <87fvkvrjby.fsf@jedbrown.org> Song Gao writes: > Dear Petsc users, > > We are working on a matrix-free solver for NS equations. We first developed > the solver in the SNES framework. But because we want to take control of > the Newton iterations and reuse some of our existing code, we stop using > SNES and begin developing the matrix-free solver in KSP framework. > > But we have an issue when using MFFD and VecSetValuesBlocked together. > My code looks like > > call MatMFFDSetFunction(pet_mat_mf, flowsol_ng_mf, ctx, ierpetsc) > call MatMFFDSetBase(pet_mat_mf, pet_solu_snes, PETSC_NULL_OBJECT, ierpetsc) > > The vec pet_solu_snes are set to blocksize 5. Did you set the block size for the vector passed to KSP? MatMFFD does not create its own vector (and the base vector would be the wrong size for a non-square matrix anyway). > flowsol_ng_mf looks like: > subroutine flowsol_ng_mf ( ctxx, pet_solu_snesa, pet_rhs_snesa, ierrpet ) > ............... > call VecGetBlockSize(pet_rhs_snesa,i,ierpetsc) > write(*,*) "pet_rhs_snesa",i > ............. > call VecSetValuesBlocked ( pet_rhs_snesa, ndperl, pirow2, rhsloc, > ADD_VALUES, ierpetsc ) > .......... > end > > I'm worried about the block size of pet_rhs_snesa is not properly > initialized. So I checked the blocksize in the function evaluation. > > The output with single CPU looks like: > > 0 KSP preconditioned resid norm 7.818306841541e+00 true resid norm > 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 > pet_rhs_snesa 1 > pet_rhs_snesa 1 > pet_rhs_snesa 5 > 1 KSP preconditioned resid norm 1.739193723080e+00 true resid norm > 2.564460053330e-03 ||r(i)||/||b|| 2.665958848545e-01 > pet_rhs_snesa 1 > pet_rhs_snesa 5 > 2 KSP preconditioned resid norm 3.816590845074e-01 true resid norm > 8.222646506071e-04 ||r(i)||/||b|| 8.548090730777e-02 > pet_rhs_snesa 1 > pet_rhs_snesa 5 > > I think the block size is not initialized correctly for the first call, > which leads to the misuse of VecSetValuesBlocked. > > One way to circumvent is to create a temporary vec with correct blocksize, > assemble it with VecSetValuesBlocked and then copy back. 
But Is there other > solutions? Thank you very much. > > > > Song > > > > On Sat, Mar 8, 2014 at 8:51 PM, Matthew Knepley wrote: > >> On Sat, Mar 8, 2014 at 7:45 PM, Song Gao wrote: >> >>> Thank you all. >>> >>> Most of the difficulties come from the current structure of the program. >>> Its size and "age" make any radical modification a challenge. The large >>> uses of global variables and deprecated "goto" statements call for a >>> revision, which however is unlikely to occur, against our better >>> judgement... >>> >>> That being said, the current tools provided byPETSc are sufficient, >>> however not necessarily convenient for our purposes. As a wishful thinking >>> we can say that the implementation of PETSc SNES features into existing >>> codes would be easier if the outer Newton-like loop could be managed by the >>> original code, out of the SNES context. On the other hand, we realize that >>> this might be in contrast with the requirements of PETSc itself. >>> >> >> I guess I wanted to know specifically why you cannot just dump all of the >> other things in the legacy code into SNESSetUpdate() >> or SNESSetConvergenceTest(). Was there something about the way we >> structured the hooks that made this impossible? >> >> Thanks, >> >> Matt >> >> >>> The application of the Eistenstat-Walker method together with a >>> Matrix-Free approach are of key importance. Thus, as kindly suggested by >>> Jed Brown, implementing our own version of EW scheme might turn out to be >>> the only way. We would be happy to provide more details if you think that >>> they might be helpful for the future developments of PETSc. >>> >>> >>> On Fri, Mar 7, 2014 at 2:39 PM, Matthew Knepley wrote: >>> >>>> On Fri, Mar 7, 2014 at 1:21 PM, Song Gao wrote: >>>> >>>>> Hello, >>>>> >>>>> We are working on a legacy codes which solves the NS equations. The >>>>> codes take control of newton iterations itself and use KSP to solve the >>>>> linear system. >>>>> >>>>> We modified the codes, used SNES to control the newton iteration and >>>>> changed the code to matrix free fashion. But the legacy codes did a lot of >>>>> other things between two newton iterations (such as output solution, update >>>>> variables....). I know we could use linesearchpostcheck but it is >>>>> difficulty to do that correctly. Therefore, we decide to go back to the KSP >>>>> framework but still use matrix free. >>>>> >>>> >>>> What makes it difficult? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> When using SNES, we always use the runtime option -snes_ksp_ew, we >>>>> observe that for some test cases, the residual stalls without -snes_ksp_ew, >>>>> but converges with-snes_ksp_ew. So I'm thinking if it is possible to >>>>> use -snes_ksp_ew in KSP? Thanks in advance. >>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From song.gao2 at mail.mcgill.ca Tue Apr 29 15:27:37 2014 From: song.gao2 at mail.mcgill.ca (Song Gao) Date: Tue, 29 Apr 2014 16:27:37 -0400 Subject: [petsc-users] Is it possible to use -snes_ksp_ew in the framework of KSP? In-Reply-To: <87fvkvrjby.fsf@jedbrown.org> References: <87fvkvrjby.fsf@jedbrown.org> Message-ID: Thanks for your quick answer. Yeah, the Vec I passed to KSPSolve is in the wrong blocksize. Only the rhs Vec is blocked, the solution Vec is not blocked. Add VecSetBlockSize before KSPSolve solved the problem. Thanks you very much. On Tue, Apr 29, 2014 at 4:06 PM, Jed Brown wrote: > Song Gao writes: > > > Dear Petsc users, > > > > We are working on a matrix-free solver for NS equations. We first > developed > > the solver in the SNES framework. But because we want to take control of > > the Newton iterations and reuse some of our existing code, we stop using > > SNES and begin developing the matrix-free solver in KSP framework. > > > > But we have an issue when using MFFD and VecSetValuesBlocked together. > > My code looks like > > > > call MatMFFDSetFunction(pet_mat_mf, flowsol_ng_mf, ctx, ierpetsc) > > call MatMFFDSetBase(pet_mat_mf, pet_solu_snes, PETSC_NULL_OBJECT, > ierpetsc) > > > > The vec pet_solu_snes are set to blocksize 5. > > Did you set the block size for the vector passed to KSP? MatMFFD does > not create its own vector (and the base vector would be the wrong size > for a non-square matrix anyway). > > > flowsol_ng_mf looks like: > > subroutine flowsol_ng_mf ( ctxx, pet_solu_snesa, pet_rhs_snesa, ierrpet > ) > > ............... > > call VecGetBlockSize(pet_rhs_snesa,i,ierpetsc) > > write(*,*) "pet_rhs_snesa",i > > ............. > > call VecSetValuesBlocked ( pet_rhs_snesa, ndperl, pirow2, rhsloc, > > ADD_VALUES, ierpetsc ) > > .......... > > end > > > > I'm worried about the block size of pet_rhs_snesa is not properly > > initialized. So I checked the blocksize in the function evaluation. > > > > The output with single CPU looks like: > > > > 0 KSP preconditioned resid norm 7.818306841541e+00 true resid norm > > 9.619278462343e-03 ||r(i)||/||b|| 1.000000000000e+00 > > pet_rhs_snesa 1 > > pet_rhs_snesa 1 > > pet_rhs_snesa 5 > > 1 KSP preconditioned resid norm 1.739193723080e+00 true resid norm > > 2.564460053330e-03 ||r(i)||/||b|| 2.665958848545e-01 > > pet_rhs_snesa 1 > > pet_rhs_snesa 5 > > 2 KSP preconditioned resid norm 3.816590845074e-01 true resid norm > > 8.222646506071e-04 ||r(i)||/||b|| 8.548090730777e-02 > > pet_rhs_snesa 1 > > pet_rhs_snesa 5 > > > > I think the block size is not initialized correctly for the first call, > > which leads to the misuse of VecSetValuesBlocked. > > > > One way to circumvent is to create a temporary vec with correct > blocksize, > > assemble it with VecSetValuesBlocked and then copy back. But Is there > other > > solutions? Thank you very much. > > > > > > > > Song > > > > > > > > On Sat, Mar 8, 2014 at 8:51 PM, Matthew Knepley > wrote: > > > >> On Sat, Mar 8, 2014 at 7:45 PM, Song Gao > wrote: > >> > >>> Thank you all. > >>> > >>> Most of the difficulties come from the current structure of the > program. > >>> Its size and "age" make any radical modification a challenge. The large > >>> uses of global variables and deprecated "goto" statements call for a > >>> revision, which however is unlikely to occur, against our better > >>> judgement... 
> >>> > >>> That being said, the current tools provided byPETSc are sufficient, > >>> however not necessarily convenient for our purposes. As a wishful > thinking > >>> we can say that the implementation of PETSc SNES features into existing > >>> codes would be easier if the outer Newton-like loop could be managed > by the > >>> original code, out of the SNES context. On the other hand, we realize > that > >>> this might be in contrast with the requirements of PETSc itself. > >>> > >> > >> I guess I wanted to know specifically why you cannot just dump all of > the > >> other things in the legacy code into SNESSetUpdate() > >> or SNESSetConvergenceTest(). Was there something about the way we > >> structured the hooks that made this impossible? > >> > >> Thanks, > >> > >> Matt > >> > >> > >>> The application of the Eistenstat-Walker method together with a > >>> Matrix-Free approach are of key importance. Thus, as kindly suggested > by > >>> Jed Brown, implementing our own version of EW scheme might turn out to > be > >>> the only way. We would be happy to provide more details if you think > that > >>> they might be helpful for the future developments of PETSc. > >>> > >>> > >>> On Fri, Mar 7, 2014 at 2:39 PM, Matthew Knepley >wrote: > >>> > >>>> On Fri, Mar 7, 2014 at 1:21 PM, Song Gao >wrote: > >>>> > >>>>> Hello, > >>>>> > >>>>> We are working on a legacy codes which solves the NS equations. The > >>>>> codes take control of newton iterations itself and use KSP to solve > the > >>>>> linear system. > >>>>> > >>>>> We modified the codes, used SNES to control the newton iteration and > >>>>> changed the code to matrix free fashion. But the legacy codes did a > lot of > >>>>> other things between two newton iterations (such as output solution, > update > >>>>> variables....). I know we could use linesearchpostcheck but it is > >>>>> difficulty to do that correctly. Therefore, we decide to go back to > the KSP > >>>>> framework but still use matrix free. > >>>>> > >>>> > >>>> What makes it difficult? > >>>> > >>>> Thanks, > >>>> > >>>> Matt > >>>> > >>>> > >>>>> When using SNES, we always use the runtime option -snes_ksp_ew, we > >>>>> observe that for some test cases, the residual stalls without > -snes_ksp_ew, > >>>>> but converges with-snes_ksp_ew. So I'm thinking if it is possible to > >>>>> use -snes_ksp_ew in KSP? Thanks in advance. > >>>>> > >>>>> > >>>>> > >>>> > >>>> > >>>> -- > >>>> What most experimenters take for granted before they begin their > >>>> experiments is infinitely more interesting than any results to which > their > >>>> experiments lead. > >>>> -- Norbert Wiener > >>>> > >>> > >>> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which > their > >> experiments lead. > >> -- Norbert Wiener > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Apr 30 02:17:07 2014 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 30 Apr 2014 15:17:07 +0800 Subject: [petsc-users] KSP breakdown in specific cluster (update) In-Reply-To: <877g6f6yh1.fsf@jedbrown.org> References: <5357854E.10502@gmail.com> <53578E1F.4000709@gmail.com> <535864F9.5010505@gmail.com> <53586C7F.8010909@gmail.com> <877g6f6yh1.fsf@jedbrown.org> Message-ID: <5360A373.9030909@gmail.com> Hi, I switched to the GNU compiler v4.8.2 from intel and everything worked fine. So guess I'll have to use GNU from now on. 
Thank you Yours sincerely, TAY wee-beng On 24/4/2014 2:19 PM, Jed Brown wrote: > TAY wee-beng writes: > >> On 24/4/2014 9:41 AM, Barry Smith wrote: >>> The numbers like >>> >>> sum = 1.9762625833649862e-323 >>> >>> rho = 1.9762625833649862e-323 >>> >>> beta = 1.600807474747106e-316 >>> omega = 1.6910452843641213e-315 >>> d2 = 1.5718032521948665e-316 >>> >>> are nonsense. They would generally indicate that something is wrong, but unfortunately don?t point to exactly what is wrong. >> Hi, >> >> In that case, how do I troubleshoot? Any suggestions? > You have memory corruption. Use Valgrind, perhaps compiler checks like > -fmudflap, and location watchpoints in the debugger. From jsd1 at rice.edu Wed Apr 30 02:37:48 2014 From: jsd1 at rice.edu (Justin Dong) Date: Wed, 30 Apr 2014 02:37:48 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution Message-ID: Hi, I'm trying to solve a linear system in parallel using SuperLU but for some reason, it's not giving me the correct solution. I'm testing on a small example so I can compare the sequential and parallel cases manually. I'm absolutely sure that my routine for generating the matrix and right-hand side in parallel is working correctly. Running with 1 process and PCLU gives the correct solution. Running with 2 processes and using SUPERLU_DIST does not give the correct solution (I tried with 1 process too but according to the superlu documentation, I would need SUPERLU for sequential?). This is the code for solving the system: /* solve the system */ KSPCreate(PETSC_COMM_WORLD, &ksp); KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); KSPSetType(ksp,KSPPREONLY); KSPGetPC(ksp, &pc); KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); KSPSolve(ksp, bglobal, bglobal); Sincerely, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd1 at rice.edu Wed Apr 30 03:02:35 2014 From: jsd1 at rice.edu (Justin Dong) Date: Wed, 30 Apr 2014 03:02:35 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: I actually was able to solve my own problem...for some reason, I need to do PCSetType(pc, PCLU); PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); instead of the ordering I initially had, though I'm not really clear on what the issue was. However, there seems to be some loss of accuracy as I increase the number of processes. Is this expected, or can I force a lower tolerance somehow? I am able to compare the solutions to a reference solution, and the error increases as I increase the processes. 
This is the solution in sequential: superlu_1process = [ -6.8035811950925553e-06 1.6324030474375778e-04 5.4145340579614926e-02 1.6640521936281516e-04 -1.7669374392923965e-04 -2.8099208957838207e-04 5.3958133511222223e-02 -5.4077899123806263e-02 -5.3972905090366369e-02 -1.9485020474821160e-04 5.4239813043824400e-02 4.4883984259948430e-04]; superlu_2process = [ -6.8035811950509821e-06 1.6324030474371623e-04 5.4145340579605655e-02 1.6640521936281687e-04 -1.7669374392923807e-04 -2.8099208957839834e-04 5.3958133511212911e-02 -5.4077899123796964e-02 -5.3972905090357078e-02 -1.9485020474824480e-04 5.4239813043815172e-02 4.4883984259953320e-04]; superlu_4process= [ -6.8035811952565206e-06 1.6324030474386164e-04 5.4145340579691455e-02 1.6640521936278326e-04 -1.7669374392921441e-04 -2.8099208957829171e-04 5.3958133511299078e-02 -5.4077899123883062e-02 -5.3972905090443085e-02 -1.9485020474806352e-04 5.4239813043900860e-02 4.4883984259921287e-04]; This is some finite element solution and I can compute the error of the solution against an exact solution in the functional L2 norm: error with 1 process: 1.71340e-02 (accepted value) error with 2 processes: 2.65018e-02 error with 3 processes: 3.00164e-02 error with 4 processes: 3.14544e-02 Is there a way to remedy this? On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: > Hi, > > I'm trying to solve a linear system in parallel using SuperLU but for some > reason, it's not giving me the correct solution. I'm testing on a small > example so I can compare the sequential and parallel cases manually. I'm > absolutely sure that my routine for generating the matrix and right-hand > side in parallel is working correctly. > > Running with 1 process and PCLU gives the correct solution. Running with 2 > processes and using SUPERLU_DIST does not give the correct solution (I > tried with 1 process too but according to the superlu documentation, I > would need SUPERLU for sequential?). This is the code for solving the > system: > > /* solve the system */ > KSPCreate(PETSC_COMM_WORLD, &ksp); > KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); > KSPSetType(ksp,KSPPREONLY); > > KSPGetPC(ksp, &pc); > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > KSPSolve(ksp, bglobal, bglobal); > > Sincerely, > Justin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 30 05:32:00 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Apr 2014 05:32:00 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: > I actually was able to solve my own problem...for some reason, I need to > do > > PCSetType(pc, PCLU); > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > 1) Before you do SetType(PCLU) the preconditioner has no type, so FactorSetMatSolverPackage() has no effect 2) There is a larger issue here. Never ever ever ever code in this way. Hardcoding a solver is crazy. The solver you use should depend on the equation, discretization, flow regime, and architecture. Recompiling for all those is out of the question. 
You should just use KSPCreate() KSPSetOperators() KSPSetFromOptions() KSPSolve() and then -pc_type lu -pc_factor_mat_solver_package superlu_dist > > instead of the ordering I initially had, though I'm not really clear on > what the issue was. However, there seems to be some loss of accuracy as I > increase the number of processes. Is this expected, or can I force a lower > tolerance somehow? I am able to compare the solutions to a reference > solution, and the error increases as I increase the processes. This is the > solution in sequential: > Yes, this is unavoidable. However, just turn on the Krylov solver -ksp_type gmres -ksp_rtol 1.0e-10 and you can get whatever residual tolerance you want. To get a specific error, you would need a posteriori error estimation, which you could include in a custom convergence criterion. Thanks, Matt > superlu_1process = [ > -6.8035811950925553e-06 > 1.6324030474375778e-04 > 5.4145340579614926e-02 > 1.6640521936281516e-04 > -1.7669374392923965e-04 > -2.8099208957838207e-04 > 5.3958133511222223e-02 > -5.4077899123806263e-02 > -5.3972905090366369e-02 > -1.9485020474821160e-04 > 5.4239813043824400e-02 > 4.4883984259948430e-04]; > > superlu_2process = [ > -6.8035811950509821e-06 > 1.6324030474371623e-04 > 5.4145340579605655e-02 > 1.6640521936281687e-04 > -1.7669374392923807e-04 > -2.8099208957839834e-04 > 5.3958133511212911e-02 > -5.4077899123796964e-02 > -5.3972905090357078e-02 > -1.9485020474824480e-04 > 5.4239813043815172e-02 > 4.4883984259953320e-04]; > > superlu_4process= [ > -6.8035811952565206e-06 > 1.6324030474386164e-04 > 5.4145340579691455e-02 > 1.6640521936278326e-04 > -1.7669374392921441e-04 > -2.8099208957829171e-04 > 5.3958133511299078e-02 > -5.4077899123883062e-02 > -5.3972905090443085e-02 > -1.9485020474806352e-04 > 5.4239813043900860e-02 > 4.4883984259921287e-04]; > > This is some finite element solution and I can compute the error of the > solution against an exact solution in the functional L2 norm: > > error with 1 process: 1.71340e-02 (accepted value) > error with 2 processes: 2.65018e-02 > error with 3 processes: 3.00164e-02 > error with 4 processes: 3.14544e-02 > > > Is there a way to remedy this? > > > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: > >> Hi, >> >> I'm trying to solve a linear system in parallel using SuperLU but for >> some reason, it's not giving me the correct solution. I'm testing on a >> small example so I can compare the sequential and parallel cases manually. >> I'm absolutely sure that my routine for generating the matrix and >> right-hand side in parallel is working correctly. >> >> Running with 1 process and PCLU gives the correct solution. Running with >> 2 processes and using SUPERLU_DIST does not give the correct solution (I >> tried with 1 process too but according to the superlu documentation, I >> would need SUPERLU for sequential?). This is the code for solving the >> system: >> >> /* solve the system */ >> KSPCreate(PETSC_COMM_WORLD, &ksp); >> KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); >> KSPSetType(ksp,KSPPREONLY); >> >> KSPGetPC(ksp, &pc); >> >> KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, >> PETSC_DEFAULT); >> PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >> >> KSPSolve(ksp, bglobal, bglobal); >> >> Sincerely, >> Justin >> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd1 at rice.edu Wed Apr 30 06:19:50 2014 From: jsd1 at rice.edu (Justin Dong) Date: Wed, 30 Apr 2014 06:19:50 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: Thanks. If I turn on the Krylov solver, the issue still seems to persist though. mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu -pc_factor_mat_solver_package superlu_dist I'm testing on a very small system now (24 ndofs), but if I go larger (around 20000 ndofs) then it gets worse. For the small system, I exported the matrices to matlab to make sure they were being assembled correct in parallel, and I'm certain that that they are. On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley wrote: > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: > >> I actually was able to solve my own problem...for some reason, I need to >> do >> >> PCSetType(pc, PCLU); >> PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >> KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, >> PETSC_DEFAULT); >> > > 1) Before you do SetType(PCLU) the preconditioner has no type, so > FactorSetMatSolverPackage() has no effect > > 2) There is a larger issue here. Never ever ever ever code in this way. > Hardcoding a solver is crazy. The solver you > use should depend on the equation, discretization, flow regime, and > architecture. Recompiling for all those is > out of the question. You should just use > > KSPCreate() > KSPSetOperators() > KSPSetFromOptions() > KSPSolve() > > and then > > -pc_type lu -pc_factor_mat_solver_package superlu_dist > > >> >> instead of the ordering I initially had, though I'm not really clear on >> what the issue was. However, there seems to be some loss of accuracy as I >> increase the number of processes. Is this expected, or can I force a lower >> tolerance somehow? I am able to compare the solutions to a reference >> solution, and the error increases as I increase the processes. This is the >> solution in sequential: >> > > Yes, this is unavoidable. However, just turn on the Krylov solver > > -ksp_type gmres -ksp_rtol 1.0e-10 > > and you can get whatever residual tolerance you want. To get a specific > error, you would need > a posteriori error estimation, which you could include in a custom > convergence criterion. 
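As an aside on the setup advice quoted above, here is the options-driven pattern written out end to end. It is only a sketch: the wrapper name is made up, A, b, and x are assumed to be an already assembled matrix and already created vectors, and the MatStructure argument follows the petsc-3.4-era KSPSetOperators() calling sequence used elsewhere in this thread.

#include <petscksp.h>

/* Sketch: solve A x = b with the solver chosen entirely from the options
   database, e.g.
     -ksp_type gmres -ksp_rtol 1.0e-10 -pc_type lu -pc_factor_mat_solver_package superlu_dist
   so the method can be changed at run time without recompiling. */
PetscErrorCode SolveWithOptions(Mat A, Vec b, Vec x)
{
  PetscErrorCode ierr;
  KSP            ksp;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* type, tolerances, and PC all come from the command line */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}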
> > Thanks, > > Matt > > >> superlu_1process = [ >> -6.8035811950925553e-06 >> 1.6324030474375778e-04 >> 5.4145340579614926e-02 >> 1.6640521936281516e-04 >> -1.7669374392923965e-04 >> -2.8099208957838207e-04 >> 5.3958133511222223e-02 >> -5.4077899123806263e-02 >> -5.3972905090366369e-02 >> -1.9485020474821160e-04 >> 5.4239813043824400e-02 >> 4.4883984259948430e-04]; >> >> superlu_2process = [ >> -6.8035811950509821e-06 >> 1.6324030474371623e-04 >> 5.4145340579605655e-02 >> 1.6640521936281687e-04 >> -1.7669374392923807e-04 >> -2.8099208957839834e-04 >> 5.3958133511212911e-02 >> -5.4077899123796964e-02 >> -5.3972905090357078e-02 >> -1.9485020474824480e-04 >> 5.4239813043815172e-02 >> 4.4883984259953320e-04]; >> >> superlu_4process= [ >> -6.8035811952565206e-06 >> 1.6324030474386164e-04 >> 5.4145340579691455e-02 >> 1.6640521936278326e-04 >> -1.7669374392921441e-04 >> -2.8099208957829171e-04 >> 5.3958133511299078e-02 >> -5.4077899123883062e-02 >> -5.3972905090443085e-02 >> -1.9485020474806352e-04 >> 5.4239813043900860e-02 >> 4.4883984259921287e-04]; >> >> This is some finite element solution and I can compute the error of the >> solution against an exact solution in the functional L2 norm: >> >> error with 1 process: 1.71340e-02 (accepted value) >> error with 2 processes: 2.65018e-02 >> error with 3 processes: 3.00164e-02 >> error with 4 processes: 3.14544e-02 >> >> >> Is there a way to remedy this? >> >> >> On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: >> >>> Hi, >>> >>> I'm trying to solve a linear system in parallel using SuperLU but for >>> some reason, it's not giving me the correct solution. I'm testing on a >>> small example so I can compare the sequential and parallel cases manually. >>> I'm absolutely sure that my routine for generating the matrix and >>> right-hand side in parallel is working correctly. >>> >>> Running with 1 process and PCLU gives the correct solution. Running with >>> 2 processes and using SUPERLU_DIST does not give the correct solution (I >>> tried with 1 process too but according to the superlu documentation, I >>> would need SUPERLU for sequential?). This is the code for solving the >>> system: >>> >>> /* solve the system */ >>> KSPCreate(PETSC_COMM_WORLD, &ksp); >>> KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); >>> KSPSetType(ksp,KSPPREONLY); >>> >>> KSPGetPC(ksp, &pc); >>> >>> KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, >>> PETSC_DEFAULT); >>> PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >>> >>> KSPSolve(ksp, bglobal, bglobal); >>> >>> Sincerely, >>> Justin >>> >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 30 06:46:20 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Apr 2014 06:46:20 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: > Thanks. If I turn on the Krylov solver, the issue still seems to persist > though. > > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu > -pc_factor_mat_solver_package superlu_dist > > I'm testing on a very small system now (24 ndofs), but if I go larger > (around 20000 ndofs) then it gets worse. 
> > For the small system, I exported the matrices to matlab to make sure they > were being assembled correct in parallel, and I'm certain that that they > are. > For convergence questions, always run using -ksp_monitor -ksp_view so that we can see exactly what you run. Thanks, Matt > > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley wrote: > >> On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: >> >>> I actually was able to solve my own problem...for some reason, I need to >>> do >>> >>> PCSetType(pc, PCLU); >>> PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >>> KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, >>> PETSC_DEFAULT); >>> >> >> 1) Before you do SetType(PCLU) the preconditioner has no type, so >> FactorSetMatSolverPackage() has no effect >> >> 2) There is a larger issue here. Never ever ever ever code in this way. >> Hardcoding a solver is crazy. The solver you >> use should depend on the equation, discretization, flow regime, and >> architecture. Recompiling for all those is >> out of the question. You should just use >> >> KSPCreate() >> KSPSetOperators() >> KSPSetFromOptions() >> KSPSolve() >> >> and then >> >> -pc_type lu -pc_factor_mat_solver_package superlu_dist >> >> >>> >>> instead of the ordering I initially had, though I'm not really clear on >>> what the issue was. However, there seems to be some loss of accuracy as I >>> increase the number of processes. Is this expected, or can I force a lower >>> tolerance somehow? I am able to compare the solutions to a reference >>> solution, and the error increases as I increase the processes. This is the >>> solution in sequential: >>> >> >> Yes, this is unavoidable. However, just turn on the Krylov solver >> >> -ksp_type gmres -ksp_rtol 1.0e-10 >> >> and you can get whatever residual tolerance you want. To get a specific >> error, you would need >> a posteriori error estimation, which you could include in a custom >> convergence criterion. >> >> Thanks, >> >> Matt >> >> >>> superlu_1process = [ >>> -6.8035811950925553e-06 >>> 1.6324030474375778e-04 >>> 5.4145340579614926e-02 >>> 1.6640521936281516e-04 >>> -1.7669374392923965e-04 >>> -2.8099208957838207e-04 >>> 5.3958133511222223e-02 >>> -5.4077899123806263e-02 >>> -5.3972905090366369e-02 >>> -1.9485020474821160e-04 >>> 5.4239813043824400e-02 >>> 4.4883984259948430e-04]; >>> >>> superlu_2process = [ >>> -6.8035811950509821e-06 >>> 1.6324030474371623e-04 >>> 5.4145340579605655e-02 >>> 1.6640521936281687e-04 >>> -1.7669374392923807e-04 >>> -2.8099208957839834e-04 >>> 5.3958133511212911e-02 >>> -5.4077899123796964e-02 >>> -5.3972905090357078e-02 >>> -1.9485020474824480e-04 >>> 5.4239813043815172e-02 >>> 4.4883984259953320e-04]; >>> >>> superlu_4process= [ >>> -6.8035811952565206e-06 >>> 1.6324030474386164e-04 >>> 5.4145340579691455e-02 >>> 1.6640521936278326e-04 >>> -1.7669374392921441e-04 >>> -2.8099208957829171e-04 >>> 5.3958133511299078e-02 >>> -5.4077899123883062e-02 >>> -5.3972905090443085e-02 >>> -1.9485020474806352e-04 >>> 5.4239813043900860e-02 >>> 4.4883984259921287e-04]; >>> >>> This is some finite element solution and I can compute the error of the >>> solution against an exact solution in the functional L2 norm: >>> >>> error with 1 process: 1.71340e-02 (accepted value) >>> error with 2 processes: 2.65018e-02 >>> error with 3 processes: 3.00164e-02 >>> error with 4 processes: 3.14544e-02 >>> >>> >>> Is there a way to remedy this? 
>>> >>> >>> On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: >>> >>>> Hi, >>>> >>>> I'm trying to solve a linear system in parallel using SuperLU but for >>>> some reason, it's not giving me the correct solution. I'm testing on a >>>> small example so I can compare the sequential and parallel cases manually. >>>> I'm absolutely sure that my routine for generating the matrix and >>>> right-hand side in parallel is working correctly. >>>> >>>> Running with 1 process and PCLU gives the correct solution. Running >>>> with 2 processes and using SUPERLU_DIST does not give the correct solution >>>> (I tried with 1 process too but according to the superlu documentation, I >>>> would need SUPERLU for sequential?). This is the code for solving the >>>> system: >>>> >>>> /* solve the system */ >>>> KSPCreate(PETSC_COMM_WORLD, &ksp); >>>> KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); >>>> KSPSetType(ksp,KSPPREONLY); >>>> >>>> KSPGetPC(ksp, &pc); >>>> >>>> KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, >>>> PETSC_DEFAULT); >>>> PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >>>> >>>> KSPSolve(ksp, bglobal, bglobal); >>>> >>>> Sincerely, >>>> Justin >>>> >>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 30 07:57:54 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 30 Apr 2014 07:57:54 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: On Apr 30, 2014, at 6:46 AM, Matthew Knepley wrote: > On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: > Thanks. If I turn on the Krylov solver, the issue still seems to persist though. > > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu -pc_factor_mat_solver_package superlu_dist > > I'm testing on a very small system now (24 ndofs), but if I go larger (around 20000 ndofs) then it gets worse. > > For the small system, I exported the matrices to matlab to make sure they were being assembled correct in parallel, and I'm certain that that they are. > > For convergence questions, always run using -ksp_monitor -ksp_view so that we can see exactly what you run. Also run with -ksp_pc_side right > > Thanks, > > Matt > > > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley wrote: > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: > I actually was able to solve my own problem...for some reason, I need to do > > PCSetType(pc, PCLU); > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > > 1) Before you do SetType(PCLU) the preconditioner has no type, so FactorSetMatSolverPackage() has no effect > > 2) There is a larger issue here. Never ever ever ever code in this way. Hardcoding a solver is crazy. The solver you > use should depend on the equation, discretization, flow regime, and architecture. Recompiling for all those is > out of the question. 
You should just use > > KSPCreate() > KSPSetOperators() > KSPSetFromOptions() > KSPSolve() > > and then > > -pc_type lu -pc_factor_mat_solver_package superlu_dist > > > instead of the ordering I initially had, though I'm not really clear on what the issue was. However, there seems to be some loss of accuracy as I increase the number of processes. Is this expected, or can I force a lower tolerance somehow? I am able to compare the solutions to a reference solution, and the error increases as I increase the processes. This is the solution in sequential: > > Yes, this is unavoidable. However, just turn on the Krylov solver > > -ksp_type gmres -ksp_rtol 1.0e-10 > > and you can get whatever residual tolerance you want. To get a specific error, you would need > a posteriori error estimation, which you could include in a custom convergence criterion. > > Thanks, > > Matt > > superlu_1process = [ > -6.8035811950925553e-06 > 1.6324030474375778e-04 > 5.4145340579614926e-02 > 1.6640521936281516e-04 > -1.7669374392923965e-04 > -2.8099208957838207e-04 > 5.3958133511222223e-02 > -5.4077899123806263e-02 > -5.3972905090366369e-02 > -1.9485020474821160e-04 > 5.4239813043824400e-02 > 4.4883984259948430e-04]; > > superlu_2process = [ > -6.8035811950509821e-06 > 1.6324030474371623e-04 > 5.4145340579605655e-02 > 1.6640521936281687e-04 > -1.7669374392923807e-04 > -2.8099208957839834e-04 > 5.3958133511212911e-02 > -5.4077899123796964e-02 > -5.3972905090357078e-02 > -1.9485020474824480e-04 > 5.4239813043815172e-02 > 4.4883984259953320e-04]; > > superlu_4process= [ > -6.8035811952565206e-06 > 1.6324030474386164e-04 > 5.4145340579691455e-02 > 1.6640521936278326e-04 > -1.7669374392921441e-04 > -2.8099208957829171e-04 > 5.3958133511299078e-02 > -5.4077899123883062e-02 > -5.3972905090443085e-02 > -1.9485020474806352e-04 > 5.4239813043900860e-02 > 4.4883984259921287e-04]; > > This is some finite element solution and I can compute the error of the solution against an exact solution in the functional L2 norm: > > error with 1 process: 1.71340e-02 (accepted value) > error with 2 processes: 2.65018e-02 > error with 3 processes: 3.00164e-02 > error with 4 processes: 3.14544e-02 > > > Is there a way to remedy this? > > > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: > Hi, > > I'm trying to solve a linear system in parallel using SuperLU but for some reason, it's not giving me the correct solution. I'm testing on a small example so I can compare the sequential and parallel cases manually. I'm absolutely sure that my routine for generating the matrix and right-hand side in parallel is working correctly. > > Running with 1 process and PCLU gives the correct solution. Running with 2 processes and using SUPERLU_DIST does not give the correct solution (I tried with 1 process too but according to the superlu documentation, I would need SUPERLU for sequential?). This is the code for solving the system: > > /* solve the system */ > KSPCreate(PETSC_COMM_WORLD, &ksp); > KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); > KSPSetType(ksp,KSPPREONLY); > > KSPGetPC(ksp, &pc); > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > KSPSolve(ksp, bglobal, bglobal); > > Sincerely, > Justin > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From epscodes at gmail.com Wed Apr 30 08:15:36 2014 From: epscodes at gmail.com (Xiangdong) Date: Wed, 30 Apr 2014 09:15:36 -0400 Subject: [petsc-users] Test Jacobian In-Reply-To: <7C7267E8-801E-49A4-902E-0E58B51077C0@mcs.anl.gov> References: <7C7267E8-801E-49A4-902E-0E58B51077C0@mcs.anl.gov> Message-ID: I echo the error message Que reported: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: SNESTest aborts after Jacobian test! [0]PETSC ERROR: ------------------------------------------------------------------------ The snes_fd, snes_fd_color and my own Jacobian work fine separately, but -snes_type test complains the same error message Que mentioned before. Thanks. Xiangdong On Tue, Apr 15, 2014 at 12:31 PM, Barry Smith wrote: > > On Apr 15, 2014, at 11:09 AM, Peter Brune wrote: > > > Perhaps there might be a clue somewhere else in the message: > > > > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > > > > All SNESTest does is compute the difference between the > finite-difference Jacobian and the user Jacobian (congratulations! yours > isn't very different!) and then aborts with the error that you have > experienced here. > > > > It may finally be time to excise SNESTest entirely as we have way better > alternatives now. Anyone mind if I do that? I'll make sure to update the > docs appropriately. > > Peter, > > This has not been done because there are two sets of alternatives and > both have problems. We need to merge/fix up the other testing > infrastructures before removing the -snes_type test > > Barry > > > > > - Peter > > > > > > On Tue, Apr 15, 2014 at 11:01 AM, Que Cat wrote: > > Hello Petsc-users, > > > > I tested the hand-coded Jacobian, and received the follow error: > > > > Testing hand-coded Jacobian, if the ratio is > > O(1.e-8), the hand-coded Jacobian is probably correct. > > Run with -snes_test_display to show difference > > of hand-coded and finite difference Jacobian. > > Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (user-defined > state) > > Norm of matrix ratio 1.15746e-08 difference 2.2254e-06 (constant state > -1.0) > > Norm of matrix ratio 7.23832e-10 difference 1.39168e-07 (constant state > 1.0) > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Object is in wrong state! > > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > > > What does "Object is in wrong state!" mean? What are the possible > sources of this error? Thank you for your time. > > > > Que > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
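An aside on the Jacobian-testing exchange above: besides -snes_type test, a hand-coded Jacobian can be sanity-checked by comparing its action on a random direction with a finite-difference directional derivative of the residual. The sketch below only illustrates that check; the function name and the step size h are illustrative choices, not PETSc API.

#include <petscsnes.h>

/* Sketch: compare J(x) v against (F(x + h v) - F(x))/h for a random v.
   A small relative difference suggests the hand-coded Jacobian is
   consistent with the residual function. */
PetscErrorCode CheckJacobianAction(SNES snes, Mat J, Vec x)
{
  PetscErrorCode ierr;
  Vec            v, Jv, Fx, Fd, xhv;
  PetscReal      h = 1.0e-7, normJv, normdiff;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(x, &v);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &Jv);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &Fx);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &Fd);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &xhv);CHKERRQ(ierr);

  ierr = VecSetRandom(v, NULL);CHKERRQ(ierr);
  ierr = MatMult(J, v, Jv);CHKERRQ(ierr);                 /* Jv = J(x) v              */
  ierr = SNESComputeFunction(snes, x, Fx);CHKERRQ(ierr);  /* Fx = F(x)                */
  ierr = VecWAXPY(xhv, h, v, x);CHKERRQ(ierr);            /* xhv = x + h v            */
  ierr = SNESComputeFunction(snes, xhv, Fd);CHKERRQ(ierr);
  ierr = VecAXPY(Fd, -1.0, Fx);CHKERRQ(ierr);             /* Fd = F(x+hv) - F(x)      */
  ierr = VecScale(Fd, 1.0/h);CHKERRQ(ierr);               /* finite-difference J v    */
  ierr = VecAXPY(Fd, -1.0, Jv);CHKERRQ(ierr);             /* Fd = FD(v) - J v         */
  ierr = VecNorm(Jv, NORM_2, &normJv);CHKERRQ(ierr);
  ierr = VecNorm(Fd, NORM_2, &normdiff);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "||FD(v) - J v|| / ||J v|| = %g\n",
                     (double)(normdiff/normJv));CHKERRQ(ierr);

  ierr = VecDestroy(&v);CHKERRQ(ierr);
  ierr = VecDestroy(&Jv);CHKERRQ(ierr);
  ierr = VecDestroy(&Fx);CHKERRQ(ierr);
  ierr = VecDestroy(&Fd);CHKERRQ(ierr);
  ierr = VecDestroy(&xhv);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The check is most informative when x is a representative state (for example the current iterate), since both the analytic Jacobian and the finite-difference approximation are evaluated there.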
URL: From jsd1 at rice.edu Wed Apr 30 08:17:09 2014 From: jsd1 at rice.edu (Justin Dong) Date: Wed, 30 Apr 2014 08:17:09 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: Here are the results of one example where the solution is incorrect: 0 KSP unpreconditioned resid norm 7.267616711036e-05 true resid norm 7.267616711036e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 4.081398605668e-16 true resid norm 4.017029301117e-16 ||r(i)||/||b|| 5.527299334618e-12 2 KSP unpreconditioned resid norm 4.378737248697e-21 true resid norm 4.545226736905e-16 ||r(i)||/||b|| 6.254081520291e-12 KSP Object: 4 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-13, absolute=1e-50, divergence=10000 right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 4 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 0, needed 0 Factored matrix follows: Matrix Object: 4 MPI processes type: mpiaij rows=1536, cols=1536 package used to perform factorization: superlu_dist total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 SuperLU_DIST run parameters: Process grid nprow 2 x npcol 2 Equilibrate matrix TRUE Matrix input mode 1 Replace tiny pivots TRUE Use iterative refinement FALSE Processors in row 2 col partition 2 Row permutation LargeDiag Column permutation METIS_AT_PLUS_A Parallel symbolic factorization FALSE Repeated factorization SamePattern_SameRowPerm linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=1536, cols=1536 total: nonzeros=17856, allocated nonzeros=64512 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 128 nodes, limit used is 5 On Wed, Apr 30, 2014 at 7:57 AM, Barry Smith wrote: > > On Apr 30, 2014, at 6:46 AM, Matthew Knepley wrote: > > > On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: > > Thanks. If I turn on the Krylov solver, the issue still seems to persist > though. > > > > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu > -pc_factor_mat_solver_package superlu_dist > > > > I'm testing on a very small system now (24 ndofs), but if I go larger > (around 20000 ndofs) then it gets worse. > > > > For the small system, I exported the matrices to matlab to make sure > they were being assembled correct in parallel, and I'm certain that that > they are. > > > > For convergence questions, always run using -ksp_monitor -ksp_view so > that we can see exactly what you run. > > Also run with -ksp_pc_side right > > > > > > Thanks, > > > > Matt > > > > > > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley > wrote: > > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: > > I actually was able to solve my own problem...for some reason, I need to > do > > > > PCSetType(pc, PCLU); > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, > PETSC_DEFAULT); > > > > 1) Before you do SetType(PCLU) the preconditioner has no type, so > FactorSetMatSolverPackage() has no effect > > > > 2) There is a larger issue here. Never ever ever ever code in this way. > Hardcoding a solver is crazy. 
The solver you > > use should depend on the equation, discretization, flow regime, and > architecture. Recompiling for all those is > > out of the question. You should just use > > > > KSPCreate() > > KSPSetOperators() > > KSPSetFromOptions() > > KSPSolve() > > > > and then > > > > -pc_type lu -pc_factor_mat_solver_package superlu_dist > > > > > > instead of the ordering I initially had, though I'm not really clear on > what the issue was. However, there seems to be some loss of accuracy as I > increase the number of processes. Is this expected, or can I force a lower > tolerance somehow? I am able to compare the solutions to a reference > solution, and the error increases as I increase the processes. This is the > solution in sequential: > > > > Yes, this is unavoidable. However, just turn on the Krylov solver > > > > -ksp_type gmres -ksp_rtol 1.0e-10 > > > > and you can get whatever residual tolerance you want. To get a specific > error, you would need > > a posteriori error estimation, which you could include in a custom > convergence criterion. > > > > Thanks, > > > > Matt > > > > superlu_1process = [ > > -6.8035811950925553e-06 > > 1.6324030474375778e-04 > > 5.4145340579614926e-02 > > 1.6640521936281516e-04 > > -1.7669374392923965e-04 > > -2.8099208957838207e-04 > > 5.3958133511222223e-02 > > -5.4077899123806263e-02 > > -5.3972905090366369e-02 > > -1.9485020474821160e-04 > > 5.4239813043824400e-02 > > 4.4883984259948430e-04]; > > > > superlu_2process = [ > > -6.8035811950509821e-06 > > 1.6324030474371623e-04 > > 5.4145340579605655e-02 > > 1.6640521936281687e-04 > > -1.7669374392923807e-04 > > -2.8099208957839834e-04 > > 5.3958133511212911e-02 > > -5.4077899123796964e-02 > > -5.3972905090357078e-02 > > -1.9485020474824480e-04 > > 5.4239813043815172e-02 > > 4.4883984259953320e-04]; > > > > superlu_4process= [ > > -6.8035811952565206e-06 > > 1.6324030474386164e-04 > > 5.4145340579691455e-02 > > 1.6640521936278326e-04 > > -1.7669374392921441e-04 > > -2.8099208957829171e-04 > > 5.3958133511299078e-02 > > -5.4077899123883062e-02 > > -5.3972905090443085e-02 > > -1.9485020474806352e-04 > > 5.4239813043900860e-02 > > 4.4883984259921287e-04]; > > > > This is some finite element solution and I can compute the error of the > solution against an exact solution in the functional L2 norm: > > > > error with 1 process: 1.71340e-02 (accepted value) > > error with 2 processes: 2.65018e-02 > > error with 3 processes: 3.00164e-02 > > error with 4 processes: 3.14544e-02 > > > > > > Is there a way to remedy this? > > > > > > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: > > Hi, > > > > I'm trying to solve a linear system in parallel using SuperLU but for > some reason, it's not giving me the correct solution. I'm testing on a > small example so I can compare the sequential and parallel cases manually. > I'm absolutely sure that my routine for generating the matrix and > right-hand side in parallel is working correctly. > > > > Running with 1 process and PCLU gives the correct solution. Running with > 2 processes and using SUPERLU_DIST does not give the correct solution (I > tried with 1 process too but according to the superlu documentation, I > would need SUPERLU for sequential?). 
This is the code for solving the > system: > > > > /* solve the system */ > > KSPCreate(PETSC_COMM_WORLD, &ksp); > > KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); > > KSPSetType(ksp,KSPPREONLY); > > > > KSPGetPC(ksp, &pc); > > > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, > PETSC_DEFAULT); > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > > > KSPSolve(ksp, bglobal, bglobal); > > > > Sincerely, > > Justin > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 30 08:42:36 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Apr 2014 08:42:36 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: On Wed, Apr 30, 2014 at 8:17 AM, Justin Dong wrote: > Here are the results of one example where the solution is incorrect: > I am skeptical of your conclusion. The entry in "true residual norm", 4.545226736905e-16, is generated using just MatMult() and VecAXPY(). It is the definition of "correct". What do you get when you replicate these steps with the solution that is returned? Matt > 0 KSP unpreconditioned resid norm 7.267616711036e-05 true resid norm > 7.267616711036e-05 ||r(i)||/||b|| 1.000000000000e+00 > > 1 KSP unpreconditioned resid norm 4.081398605668e-16 true resid norm > 4.017029301117e-16 ||r(i)||/||b|| 5.527299334618e-12 > > 2 KSP unpreconditioned resid norm 4.378737248697e-21 true resid norm > 4.545226736905e-16 ||r(i)||/||b|| 6.254081520291e-12 > > KSP Object: 4 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-13, absolute=1e-50, divergence=10000 > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: 4 MPI processes > > type: lu > > LU: out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: natural > > factor fill ratio given 0, needed 0 > > Factored matrix follows: > > Matrix Object: 4 MPI processes > > type: mpiaij > > rows=1536, cols=1536 > > package used to perform factorization: superlu_dist > > total: nonzeros=0, allocated nonzeros=0 > > total number of mallocs used during MatSetValues calls =0 > > SuperLU_DIST run parameters: > > Process grid nprow 2 x npcol 2 > > Equilibrate matrix TRUE > > Matrix input mode 1 > > Replace tiny pivots TRUE > > Use iterative refinement FALSE > > Processors in row 2 col partition 2 > > Row permutation LargeDiag > > Column permutation METIS_AT_PLUS_A > > Parallel symbolic factorization FALSE > > Repeated factorization SamePattern_SameRowPerm > > linear system matrix = precond matrix: > > Matrix Object: 4 MPI processes > > type: mpiaij > > rows=1536, cols=1536 > > total: nonzeros=17856, allocated nonzeros=64512 > > total number of mallocs used during MatSetValues calls =0 > > using I-node (on process 0) routines: found 128 nodes, limit used is > 5 > 
> > On Wed, Apr 30, 2014 at 7:57 AM, Barry Smith wrote: > >> >> On Apr 30, 2014, at 6:46 AM, Matthew Knepley wrote: >> >> > On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: >> > Thanks. If I turn on the Krylov solver, the issue still seems to >> persist though. >> > >> > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu >> -pc_factor_mat_solver_package superlu_dist >> > >> > I'm testing on a very small system now (24 ndofs), but if I go larger >> (around 20000 ndofs) then it gets worse. >> > >> > For the small system, I exported the matrices to matlab to make sure >> they were being assembled correct in parallel, and I'm certain that that >> they are. >> > >> > For convergence questions, always run using -ksp_monitor -ksp_view so >> that we can see exactly what you run. >> >> Also run with -ksp_pc_side right >> >> >> > >> > Thanks, >> > >> > Matt >> > >> > >> > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley >> wrote: >> > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: >> > I actually was able to solve my own problem...for some reason, I need >> to do >> > >> > PCSetType(pc, PCLU); >> > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >> > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, >> PETSC_DEFAULT); >> > >> > 1) Before you do SetType(PCLU) the preconditioner has no type, so >> FactorSetMatSolverPackage() has no effect >> > >> > 2) There is a larger issue here. Never ever ever ever code in this way. >> Hardcoding a solver is crazy. The solver you >> > use should depend on the equation, discretization, flow regime, >> and architecture. Recompiling for all those is >> > out of the question. You should just use >> > >> > KSPCreate() >> > KSPSetOperators() >> > KSPSetFromOptions() >> > KSPSolve() >> > >> > and then >> > >> > -pc_type lu -pc_factor_mat_solver_package superlu_dist >> > >> > >> > instead of the ordering I initially had, though I'm not really clear on >> what the issue was. However, there seems to be some loss of accuracy as I >> increase the number of processes. Is this expected, or can I force a lower >> tolerance somehow? I am able to compare the solutions to a reference >> solution, and the error increases as I increase the processes. This is the >> solution in sequential: >> > >> > Yes, this is unavoidable. However, just turn on the Krylov solver >> > >> > -ksp_type gmres -ksp_rtol 1.0e-10 >> > >> > and you can get whatever residual tolerance you want. To get a specific >> error, you would need >> > a posteriori error estimation, which you could include in a custom >> convergence criterion. 
>> > >> > Thanks, >> > >> > Matt >> > >> > superlu_1process = [ >> > -6.8035811950925553e-06 >> > 1.6324030474375778e-04 >> > 5.4145340579614926e-02 >> > 1.6640521936281516e-04 >> > -1.7669374392923965e-04 >> > -2.8099208957838207e-04 >> > 5.3958133511222223e-02 >> > -5.4077899123806263e-02 >> > -5.3972905090366369e-02 >> > -1.9485020474821160e-04 >> > 5.4239813043824400e-02 >> > 4.4883984259948430e-04]; >> > >> > superlu_2process = [ >> > -6.8035811950509821e-06 >> > 1.6324030474371623e-04 >> > 5.4145340579605655e-02 >> > 1.6640521936281687e-04 >> > -1.7669374392923807e-04 >> > -2.8099208957839834e-04 >> > 5.3958133511212911e-02 >> > -5.4077899123796964e-02 >> > -5.3972905090357078e-02 >> > -1.9485020474824480e-04 >> > 5.4239813043815172e-02 >> > 4.4883984259953320e-04]; >> > >> > superlu_4process= [ >> > -6.8035811952565206e-06 >> > 1.6324030474386164e-04 >> > 5.4145340579691455e-02 >> > 1.6640521936278326e-04 >> > -1.7669374392921441e-04 >> > -2.8099208957829171e-04 >> > 5.3958133511299078e-02 >> > -5.4077899123883062e-02 >> > -5.3972905090443085e-02 >> > -1.9485020474806352e-04 >> > 5.4239813043900860e-02 >> > 4.4883984259921287e-04]; >> > >> > This is some finite element solution and I can compute the error of the >> solution against an exact solution in the functional L2 norm: >> > >> > error with 1 process: 1.71340e-02 (accepted value) >> > error with 2 processes: 2.65018e-02 >> > error with 3 processes: 3.00164e-02 >> > error with 4 processes: 3.14544e-02 >> > >> > >> > Is there a way to remedy this? >> > >> > >> > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: >> > Hi, >> > >> > I'm trying to solve a linear system in parallel using SuperLU but for >> some reason, it's not giving me the correct solution. I'm testing on a >> small example so I can compare the sequential and parallel cases manually. >> I'm absolutely sure that my routine for generating the matrix and >> right-hand side in parallel is working correctly. >> > >> > Running with 1 process and PCLU gives the correct solution. Running >> with 2 processes and using SUPERLU_DIST does not give the correct solution >> (I tried with 1 process too but according to the superlu documentation, I >> would need SUPERLU for sequential?). This is the code for solving the >> system: >> > >> > /* solve the system */ >> > KSPCreate(PETSC_COMM_WORLD, &ksp); >> > KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); >> > KSPSetType(ksp,KSPPREONLY); >> > >> > KSPGetPC(ksp, &pc); >> > >> > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, >> PETSC_DEFAULT); >> > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >> > >> > KSPSolve(ksp, bglobal, bglobal); >> > >> > Sincerely, >> > Justin >> > >> > >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
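An aside on the check suggested in the message above: the "true resid norm" column can be reproduced from the returned solution using only MatMult() and VecAXPY(). A minimal sketch follows; the function name is illustrative, and A, b, x stand for the assembled operator, the right-hand side, and the solution returned by KSPSolve().

#include <petscksp.h>

/* Sketch: recompute the unpreconditioned residual norm ||A x - b||, the same
   quantity (up to sign) as the ||b - A x|| reported by the true-residual monitor. */
PetscErrorCode CheckTrueResidual(Mat A, Vec b, Vec x)
{
  PetscErrorCode ierr;
  Vec            r;
  PetscReal      normr, normb;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(b, &r);CHKERRQ(ierr);
  ierr = MatMult(A, x, r);CHKERRQ(ierr);      /* r = A x      */
  ierr = VecAXPY(r, -1.0, b);CHKERRQ(ierr);   /* r = A x - b  */
  ierr = VecNorm(r, NORM_2, &normr);CHKERRQ(ierr);
  ierr = VecNorm(b, NORM_2, &normb);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "||Ax - b|| = %g   ||Ax - b||/||b|| = %g\n",
                     (double)normr, (double)(normr/normb));CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

If this norm stays tiny for every process count while the L2 error against the exact solution still grows, the discrepancy lies in the assembled system or in the postprocessing rather than in the linear solve itself.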
URL: From jed at jedbrown.org Wed Apr 30 09:23:52 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 30 Apr 2014 08:23:52 -0600 Subject: [petsc-users] Test Jacobian In-Reply-To: References: <7C7267E8-801E-49A4-902E-0E58B51077C0@mcs.anl.gov> Message-ID: <87d2fyq4jr.fsf@jedbrown.org> Xiangdong writes: > I echo the error message Que reported: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > The snes_fd, snes_fd_color and my own Jacobian work fine separately, but > -snes_type test complains the same error message Que mentioned before. I'm afraid we'll reply with the same response. As indicated in the message, this is how SNESTest works. What do you want to have happen? -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From brune at mcs.anl.gov Wed Apr 30 10:07:59 2014 From: brune at mcs.anl.gov (Peter Brune) Date: Wed, 30 Apr 2014 10:07:59 -0500 Subject: [petsc-users] Test Jacobian In-Reply-To: <87d2fyq4jr.fsf@jedbrown.org> References: <7C7267E8-801E-49A4-902E-0E58B51077C0@mcs.anl.gov> <87d2fyq4jr.fsf@jedbrown.org> Message-ID: I've updated the manpage for SNESTEST in next and master to reflect the exact behavior of SNESTEST. Hopefully users will actually check the docs and this will clear up some of the confusion. - Peter On Wed, Apr 30, 2014 at 9:23 AM, Jed Brown wrote: > Xiangdong writes: > > > I echo the error message Que reported: > > > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: Object is in wrong state! > > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > > > The snes_fd, snes_fd_color and my own Jacobian work fine separately, but > > -snes_type test complains the same error message Que mentioned before. > > I'm afraid we'll reply with the same response. As indicated in the > message, this is how SNESTest works. What do you want to have happen? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.ndengue at gmail.com Wed Apr 30 10:10:19 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Wed, 30 Apr 2014 10:10:19 -0500 Subject: [petsc-users] Questions on convergence (SLEPC) Message-ID: <5361125B.40201@gmail.com> Dear all, I have few questions on achieving convergence with SLEPC. I am doing some comparison on how SLEPC performs compare to a LAPACK installation on my system (an 8 processors icore7 with 3.4 GHz running Ubuntu). 1/ It appears that a calculation requesting the LAPACK eigensolver runs faster using my libraries than when done with SLEPC selecting the 'lapack' method. I guess most of the time is spent when assembling the matrix? However if the time seems reasonable for a matrix of size less than 2000*2000, for one with 4000*4000 and above, the computation time seems more than ten times slower with SLEPC and the 'lapack' method!!! 2/ I was however expecting that running an iterative calculation such as 'krylovschur', 'lanczos' or 'arnoldi' the time would be shorter but that is not the case. 
Inserting the Shift-and-Invert spectral transform, i could converge faster for small matrices but it takes more time using these iteratives methods than using the Lapack library on my system, when the size allows; even when requesting only few eigenstates (less than 50). regarding the 2 previous comments I would like to know if there are some rules on how to ensure a fast convergence of a diagonalisation with SLEPC? 3/ About the diagonalisation on many processors, after we assign values to the matrix, does SLEPC automatically distribute the calculation among the requested processes or shall we need to insert commands on the code to enforce it? Sincerely, -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Apr 30 10:19:03 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 30 Apr 2014 17:19:03 +0200 Subject: [petsc-users] Questions on convergence (SLEPC) In-Reply-To: <5361125B.40201@gmail.com> References: <5361125B.40201@gmail.com> Message-ID: <59C4C50F-1F9D-4CA7-B36C-4359528871B2@dsic.upv.es> El 30/04/2014, a las 17:10, Steve Ndengue escribi?: > Dear all, > > I have few questions on achieving convergence with SLEPC. > I am doing some comparison on how SLEPC performs compare to a LAPACK installation on my system (an 8 processors icore7 with 3.4 GHz running Ubuntu). > > 1/ It appears that a calculation requesting the LAPACK eigensolver runs faster using my libraries than when done with SLEPC selecting the 'lapack' method. I guess most of the time is spent when assembling the matrix? However if the time seems reasonable for a matrix of size less than 2000*2000, for one with 4000*4000 and above, the computation time seems more than ten times slower with SLEPC and the 'lapack' method!!! Once again, do not use SLEPc's 'lapack' method, it is just for debugging purposes. > > 2/ I was however expecting that running an iterative calculation such as 'krylovschur', 'lanczos' or 'arnoldi' the time would be shorter but that is not the case. Inserting the Shift-and-Invert spectral transform, i could converge faster for small matrices but it takes more time using these iteratives methods than using the Lapack library on my system, when the size allows; even when requesting only few eigenstates (less than 50). Is your matrix sparse? > > regarding the 2 previous comments I would like to know if there are some rules on how to ensure a fast convergence of a diagonalisation with SLEPC? > > 3/ About the diagonalisation on many processors, after we assign values to the matrix, does SLEPC automatically distribute the calculation among the requested processes or shall we need to insert commands on the code to enforce it? Read the manual, and have a look at examples that work in parallel (most of them). > > Sincerely, > > > > -- > Steve > From steve.ndengue at gmail.com Wed Apr 30 10:25:44 2014 From: steve.ndengue at gmail.com (Steve Ndengue) Date: Wed, 30 Apr 2014 10:25:44 -0500 Subject: [petsc-users] Questions on convergence (SLEPC) In-Reply-To: <59C4C50F-1F9D-4CA7-B36C-4359528871B2@dsic.upv.es> References: <5361125B.40201@gmail.com> <59C4C50F-1F9D-4CA7-B36C-4359528871B2@dsic.upv.es> Message-ID: <536115F8.8090803@gmail.com> Yes, the matrix is sparse. On 04/30/2014 10:19 AM, Jose E. Roman wrote: > El 30/04/2014, a las 17:10, Steve Ndengue escribi?: > >> Dear all, >> >> I have few questions on achieving convergence with SLEPC. 
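An aside on the SLEPc exchange in this thread: for a sparse matrix, the usual route is the default Krylov-Schur solver combined with a shift-and-invert spectral transform and a modest number of requested eigenpairs; the work is distributed automatically according to the parallel layout of the Mat, so nothing beyond creating it as a parallel (MPIAIJ) matrix is required. The sketch below assumes a standard Hermitian problem, and the target sigma and nev = 50 are placeholder values.

#include <slepceps.h>

/* Sketch: eigenvalues of a sparse Hermitian matrix A closest to a target sigma,
   computed with Krylov-Schur plus shift-and-invert.  A is assumed to be an
   already assembled (MPI)AIJ matrix; use EPS_NHEP for non-Hermitian problems. */
PetscErrorCode SolveNearTarget(Mat A)
{
  PetscErrorCode ierr;
  EPS            eps;
  ST             st;
  PetscReal      sigma = 0.0;   /* placeholder target */

  PetscFunctionBeginUser;
  ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);              /* standard problem A x = k x    */
  ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
  ierr = EPSSetTarget(eps, sigma);CHKERRQ(ierr);
  ierr = EPSSetDimensions(eps, 50, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = EPSGetST(eps, &st);CHKERRQ(ierr);
  ierr = STSetType(st, STSINVERT);CHKERRQ(ierr);                   /* shift-and-invert around sigma */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);                     /* allow -eps_* run-time overrides */
  ierr = EPSSolve(eps);CHKERRQ(ierr);
  /* eigenpairs would then be retrieved with EPSGetConverged()/EPSGetEigenpair() */
  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}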
>> I am doing some comparison on how SLEPC performs compare to a LAPACK installation on my system (an 8 processors icore7 with 3.4 GHz running Ubuntu). >> >> 1/ It appears that a calculation requesting the LAPACK eigensolver runs faster using my libraries than when done with SLEPC selecting the 'lapack' method. I guess most of the time is spent when assembling the matrix? However if the time seems reasonable for a matrix of size less than 2000*2000, for one with 4000*4000 and above, the computation time seems more than ten times slower with SLEPC and the 'lapack' method!!! > Once again, do not use SLEPc's 'lapack' method, it is just for debugging purposes. > >> 2/ I was however expecting that running an iterative calculation such as 'krylovschur', 'lanczos' or 'arnoldi' the time would be shorter but that is not the case. Inserting the Shift-and-Invert spectral transform, i could converge faster for small matrices but it takes more time using these iteratives methods than using the Lapack library on my system, when the size allows; even when requesting only few eigenstates (less than 50). > Is your matrix sparse? > > >> regarding the 2 previous comments I would like to know if there are some rules on how to ensure a fast convergence of a diagonalisation with SLEPC? >> >> 3/ About the diagonalisation on many processors, after we assign values to the matrix, does SLEPC automatically distribute the calculation among the requested processes or shall we need to insert commands on the code to enforce it? > Read the manual, and have a look at examples that work in parallel (most of them). > >> Sincerely, >> >> >> >> -- >> Steve >> -- Steve A. Ndengu? --- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 30 13:19:28 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 30 Apr 2014 13:19:28 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: Message-ID: <27731913-7032-44A5-B058-EDE4A583502D@mcs.anl.gov> Please send the same thing on one process. 
On Apr 30, 2014, at 8:17 AM, Justin Dong wrote: > Here are the results of one example where the solution is incorrect: > > 0 KSP unpreconditioned resid norm 7.267616711036e-05 true resid norm 7.267616711036e-05 ||r(i)||/||b|| 1.000000000000e+00 > > 1 KSP unpreconditioned resid norm 4.081398605668e-16 true resid norm 4.017029301117e-16 ||r(i)||/||b|| 5.527299334618e-12 > > > 2 KSP unpreconditioned resid norm 4.378737248697e-21 true resid norm 4.545226736905e-16 ||r(i)||/||b|| 6.254081520291e-12 > > KSP Object: 4 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-13, absolute=1e-50, divergence=10000 > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: 4 MPI processes > > type: lu > > LU: out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: natural > > factor fill ratio given 0, needed 0 > > Factored matrix follows: > > Matrix Object: 4 MPI processes > > type: mpiaij > > rows=1536, cols=1536 > > package used to perform factorization: superlu_dist > > total: nonzeros=0, allocated nonzeros=0 > > total number of mallocs used during MatSetValues calls =0 > > SuperLU_DIST run parameters: > > Process grid nprow 2 x npcol 2 > > Equilibrate matrix TRUE > > Matrix input mode 1 > > Replace tiny pivots TRUE > > Use iterative refinement FALSE > > Processors in row 2 col partition 2 > > Row permutation LargeDiag > > Column permutation METIS_AT_PLUS_A > > Parallel symbolic factorization FALSE > > Repeated factorization SamePattern_SameRowPerm > > linear system matrix = precond matrix: > > Matrix Object: 4 MPI processes > > type: mpiaij > > rows=1536, cols=1536 > > total: nonzeros=17856, allocated nonzeros=64512 > > total number of mallocs used during MatSetValues calls =0 > > using I-node (on process 0) routines: found 128 nodes, limit used is 5 > > > > On Wed, Apr 30, 2014 at 7:57 AM, Barry Smith wrote: > > On Apr 30, 2014, at 6:46 AM, Matthew Knepley wrote: > > > On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: > > Thanks. If I turn on the Krylov solver, the issue still seems to persist though. > > > > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu -pc_factor_mat_solver_package superlu_dist > > > > I'm testing on a very small system now (24 ndofs), but if I go larger (around 20000 ndofs) then it gets worse. > > > > For the small system, I exported the matrices to matlab to make sure they were being assembled correct in parallel, and I'm certain that that they are. > > > > For convergence questions, always run using -ksp_monitor -ksp_view so that we can see exactly what you run. > > Also run with -ksp_pc_side right > > > > > > Thanks, > > > > Matt > > > > > > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley wrote: > > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: > > I actually was able to solve my own problem...for some reason, I need to do > > > > PCSetType(pc, PCLU); > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > > > > 1) Before you do SetType(PCLU) the preconditioner has no type, so FactorSetMatSolverPackage() has no effect > > > > 2) There is a larger issue here. Never ever ever ever code in this way. Hardcoding a solver is crazy. 
The solver you > > use should depend on the equation, discretization, flow regime, and architecture. Recompiling for all those is > > out of the question. You should just use > > > > KSPCreate() > > KSPSetOperators() > > KSPSetFromOptions() > > KSPSolve() > > > > and then > > > > -pc_type lu -pc_factor_mat_solver_package superlu_dist > > > > > > instead of the ordering I initially had, though I'm not really clear on what the issue was. However, there seems to be some loss of accuracy as I increase the number of processes. Is this expected, or can I force a lower tolerance somehow? I am able to compare the solutions to a reference solution, and the error increases as I increase the processes. This is the solution in sequential: > > > > Yes, this is unavoidable. However, just turn on the Krylov solver > > > > -ksp_type gmres -ksp_rtol 1.0e-10 > > > > and you can get whatever residual tolerance you want. To get a specific error, you would need > > a posteriori error estimation, which you could include in a custom convergence criterion. > > > > Thanks, > > > > Matt > > > > superlu_1process = [ > > -6.8035811950925553e-06 > > 1.6324030474375778e-04 > > 5.4145340579614926e-02 > > 1.6640521936281516e-04 > > -1.7669374392923965e-04 > > -2.8099208957838207e-04 > > 5.3958133511222223e-02 > > -5.4077899123806263e-02 > > -5.3972905090366369e-02 > > -1.9485020474821160e-04 > > 5.4239813043824400e-02 > > 4.4883984259948430e-04]; > > > > superlu_2process = [ > > -6.8035811950509821e-06 > > 1.6324030474371623e-04 > > 5.4145340579605655e-02 > > 1.6640521936281687e-04 > > -1.7669374392923807e-04 > > -2.8099208957839834e-04 > > 5.3958133511212911e-02 > > -5.4077899123796964e-02 > > -5.3972905090357078e-02 > > -1.9485020474824480e-04 > > 5.4239813043815172e-02 > > 4.4883984259953320e-04]; > > > > superlu_4process= [ > > -6.8035811952565206e-06 > > 1.6324030474386164e-04 > > 5.4145340579691455e-02 > > 1.6640521936278326e-04 > > -1.7669374392921441e-04 > > -2.8099208957829171e-04 > > 5.3958133511299078e-02 > > -5.4077899123883062e-02 > > -5.3972905090443085e-02 > > -1.9485020474806352e-04 > > 5.4239813043900860e-02 > > 4.4883984259921287e-04]; > > > > This is some finite element solution and I can compute the error of the solution against an exact solution in the functional L2 norm: > > > > error with 1 process: 1.71340e-02 (accepted value) > > error with 2 processes: 2.65018e-02 > > error with 3 processes: 3.00164e-02 > > error with 4 processes: 3.14544e-02 > > > > > > Is there a way to remedy this? > > > > > > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: > > Hi, > > > > I'm trying to solve a linear system in parallel using SuperLU but for some reason, it's not giving me the correct solution. I'm testing on a small example so I can compare the sequential and parallel cases manually. I'm absolutely sure that my routine for generating the matrix and right-hand side in parallel is working correctly. > > > > Running with 1 process and PCLU gives the correct solution. Running with 2 processes and using SUPERLU_DIST does not give the correct solution (I tried with 1 process too but according to the superlu documentation, I would need SUPERLU for sequential?). 
This is the code for solving the system: > > > > /* solve the system */ > > KSPCreate(PETSC_COMM_WORLD, &ksp); > > KSPSetOperators(ksp, Aglobal, Aglobal, DIFFERENT_NONZERO_PATTERN); > > KSPSetType(ksp,KSPPREONLY); > > > > KSPGetPC(ksp, &pc); > > > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT); > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > > > KSPSolve(ksp, bglobal, bglobal); > > > > Sincerely, > > Justin > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > From jroman at dsic.upv.es Wed Apr 30 13:30:40 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 30 Apr 2014 20:30:40 +0200 Subject: [petsc-users] Questions on convergence (SLEPC) In-Reply-To: <536115F8.8090803@gmail.com> References: <5361125B.40201@gmail.com> <59C4C50F-1F9D-4CA7-B36C-4359528871B2@dsic.upv.es> <536115F8.8090803@gmail.com> Message-ID: El 30/04/2014, a las 17:25, Steve Ndengue escribi?: > Yes, the matrix is sparse. How sparse? How are you running the solver? Where is the time spent (log_summary)? Jose > > > On 04/30/2014 10:19 AM, Jose E. Roman wrote: >> El 30/04/2014, a las 17:10, Steve Ndengue escribi?: >> >> >>> Dear all, >>> >>> I have few questions on achieving convergence with SLEPC. >>> I am doing some comparison on how SLEPC performs compare to a LAPACK installation on my system (an 8 processors icore7 with 3.4 GHz running Ubuntu). >>> >>> 1/ It appears that a calculation requesting the LAPACK eigensolver runs faster using my libraries than when done with SLEPC selecting the 'lapack' method. I guess most of the time is spent when assembling the matrix? However if the time seems reasonable for a matrix of size less than 2000*2000, for one with 4000*4000 and above, the computation time seems more than ten times slower with SLEPC and the 'lapack' method!!! >>> >> Once again, do not use SLEPc's 'lapack' method, it is just for debugging purposes. >> >> >>> 2/ I was however expecting that running an iterative calculation such as 'krylovschur', 'lanczos' or 'arnoldi' the time would be shorter but that is not the case. Inserting the Shift-and-Invert spectral transform, i could converge faster for small matrices but it takes more time using these iteratives methods than using the Lapack library on my system, when the size allows; even when requesting only few eigenstates (less than 50). >>> >> Is your matrix sparse? >> >> >> >>> regarding the 2 previous comments I would like to know if there are some rules on how to ensure a fast convergence of a diagonalisation with SLEPC? >>> >>> 3/ About the diagonalisation on many processors, after we assign values to the matrix, does SLEPC automatically distribute the calculation among the requested processes or shall we need to insert commands on the code to enforce it? >>> >> Read the manual, and have a look at examples that work in parallel (most of them). >> >> >>> Sincerely, >>> >>> >>> >>> -- >>> Steve >>> >>> > > > -- > Steve A. Ndengu? 
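For reference, a minimal sketch of the shift-and-invert setup being discussed, in C with error checking omitted; A is assumed to be an assembled (and properly preallocated) sparse MATAIJ matrix, and the target and dimensions below are only illustrative:

   EPS eps;
   ST  st;
   KSP ksp;
   PC  pc;

   EPSCreate(PETSC_COMM_WORLD, &eps);
   EPSSetOperators(eps, A, NULL);
   EPSSetProblemType(eps, EPS_HEP);                          /* symmetric/Hermitian problem */
   EPSSetDimensions(eps, 50, PETSC_DEFAULT, PETSC_DEFAULT);  /* request ~50 eigenpairs */
   EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
   EPSSetTarget(eps, 0.0);                                   /* eigenvalues closest to the target */

   EPSGetST(eps, &st);
   STSetType(st, STSINVERT);                                 /* shift-and-invert */
   STGetKSP(st, &ksp);
   KSPSetType(ksp, KSPPREONLY);                              /* factor (A - sigma*I) once */
   KSPGetPC(ksp, &pc);
   PCSetType(pc, PCLU);

   EPSSetFromOptions(eps);   /* honours -eps_type krylovschur, -eps_nev, -st_type sinvert, ... */
   EPSSolve(eps);

The eigenpairs are then retrieved with EPSGetConverged() and EPSGetEigenpair(), and -log_summary shows where the time actually goes.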
> --- > From federico.marini at unimi.it Wed Apr 30 14:01:28 2014 From: federico.marini at unimi.it (Federico Marini) Date: Wed, 30 Apr 2014 14:01:28 -0500 Subject: [petsc-users] Convergence criterion for composite preconditioner In-Reply-To: <76b0b2562553e5.536133e0@unimi.it> References: <76b0fcd5254cd4.53612f5f@unimi.it> <7740c157252335.536131f5@unimi.it> <76a0b740253e1a.53613232@unimi.it> <7760e502256d52.53613270@unimi.it> <76b0f4e5254ae6.536132ad@unimi.it> <77d0dc13252c37.536132eb@unimi.it> <76d0941f2564c4.53613328@unimi.it> <7770f87325437d.53613365@unimi.it> <76d0e0b8257576.536133a3@unimi.it> <76b0b2562553e5.536133e0@unimi.it> Message-ID: <77a08f0d253643.53610238@unimi.it> ?Dear PETSc users, I implemented PCCOMPOSITE preconditioner of type PC_COMPOSITE_ADDITIVE. It has two components: ?- pc1: PCASM 1-level additive Schwarz preconditioning ?- pc2: PCSHELL 2-level additive Schwarz preconditioning so basically the whole preconditioner is a 2-level additive Schwarz preconditioner, where the second level is hand-made in a MATSHELL matrix structure. For PCASM, I use defalut options. I run tests with PCSHELL activated and deactivated. I use the default stopping criterion with this setting: ierr=KSPSetTolerances(ksp,1e-7,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); In the end I get these convergence results with 256 MPI tasks (1054729 total dofs) using KSPCG ?PSASM only: ?... 134 KSP preconditioned resid norm 4.680038240771e-06 true resid norm 5.417972183036e-07 ||r(i)||/||b|| 2.402197263846e-06 135 KSP preconditioned resid norm 3.681603361690e-06 true resid norm 4.091445069661e-07 ||r(i)||/||b|| 1.814047363013e-06 136 KSP preconditioned resid norm 2.938575711894e-06 true resid norm 3.280865690135e-07 ||r(i)||/||b|| 1.454656155040e-06 ?- convergence detected ?PCASM + PCSHELL ... ?22 KSP preconditioned resid norm 6.610753500258e-05 true resid norm 4.725684013113e-06 ||r(i)||/||b|| 2.095253504927e-05 ?23 KSP preconditioned resid norm 1.173818907551e-04 true resid norm 2.984650385207e-06 ||r(i)||/||b|| 1.323321483881e-05 ?24 KSP preconditioned resid norm 4.409443889428e-05 true resid norm 1.959757043931e-06 ||r(i)||/||b|| 8.689086709369e-06 ?- convergence detected The preconditioner, in both (1- and 2-level) versions, works fine, but the relative residual norm stopping criterion is not achieved. Observing the residual norms, I don't think the next iteration would achieve the requested 10^-7 tolerance. Can anyone explain me why? Thank you in advance, Federico Marini **************** Il 5 x mille alla nostra Universit? ? un investimento sui giovani, sui loro migliori progetti. Sostiene la libera ricerca. Alimenta le loro speranze nel futuro. Investi il tuo 5 x mille sui giovani. Universit? degli Studi di Milano codice fiscale 80012650158 http://www.unimi.it/13084.htm?utm_source=firmaMail&utm_medium=email&utm_content=linkFirmaEmail&utm_campaign=5xmille -------------- next part -------------- An HTML attachment was scrubbed... URL: From D.Lathouwers at tudelft.nl Wed Apr 30 14:53:46 2014 From: D.Lathouwers at tudelft.nl (Danny Lathouwers - TNW) Date: Wed, 30 Apr 2014 19:53:46 +0000 Subject: [petsc-users] singular matrix solve using MAT_SHIFT_POSITIVE_DEFINITE option Message-ID: <4E6B33F4128CED4DB307BA83146E9A64258E28C4@SRV362.tudelft.net> Dear users, I encountered a strange problem. I have a singular matrix P (Poisson, Neumann boundary conditions, N=4). The rhs b sums to 0. 
If I hand-fill the matrix with the right entries (non-zeroes only) things work with KSPCG and ICC preconditioning and using the MAT_SHIFT_POSITIVE_DEFINITE option. Convergence in 2 iterations to (a) correct solution. So far for the debugging problem. My real problem computes P from D * M * D^T. If I do this I get the same matrix (on std out I do not see the difference to all digits). The system P * x = b now does NOT converge. More strange is that is if I remove the zeroes from D then things do work again. Either things are overly sensitive or I am misusing petsc. It does work when using e.g. the AMG preconditioner (again it is a correct but different solution). So system really seems OK. Should I also use the Null space commands as I have seen in some of the examples as well? But, I recall from many years ago when using MICCG (alpha) preconditioning that no such tricks were needed for CG with Poisson-Neumann. I am supposing the MAT_SHIFT_POSITIVE_DEFINITE option does something similar as MICCG. For clarity I have included the code (unfortunately this is the smallest I could get it; it's quite straightforward though). By setting the value of option to 1 in main.f90 the code use P = D * M * D^T otherwise it will use the hand-filled matrix. The code prints the matrix P and solution etc. Anyone any hints on this? What other preconditioners (serial) are suitable for this problem besides ICC/AMG? Thanks very much. Danny Lathouwers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc.f90 Type: application/octet-stream Size: 5507 bytes Desc: petsc.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: f90_kind.f90 Type: application/octet-stream Size: 694 bytes Desc: f90_kind.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: main.f90 Type: application/octet-stream Size: 10611 bytes Desc: main.f90 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Makefile Type: application/octet-stream Size: 770 bytes Desc: Makefile URL: From jroman at dsic.upv.es Wed Apr 30 16:10:27 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 30 Apr 2014 23:10:27 +0200 Subject: [petsc-users] Questions on convergence (SLEPC) In-Reply-To: <536149A0.2090901@gmail.com> References: <5361125B.40201@gmail.com> <59C4C50F-1F9D-4CA7-B36C-4359528871B2@dsic.upv.es> <536115F8.8090803@gmail.com> <536149A0.2090901@gmail.com> Message-ID: El 30/04/2014, a las 21:06, Steve Ndengue escribi?: > I am presently doing a run with a 2000*2000 matrix and it took about 20 mins (obtained from code timing routines and not PETSC - PETSC informations are in the log). > I am expecting to be able to do calculations for matrices up to 100000*100000 with multiprocessors and more if the resources allow it with. > > The matrix has zero entries but they are generally less than 50% from the total number of matrix elements; so I would guess it is not exactly sparse. > The log file is joined to this message. > > Sincerely, Please reply to the list. You should not use a debug build, see the warning notice in the output. The eigensolve takes 6.3 seconds, and most of the time is in factoring the matrix. The matrix seems to be almost full. You need to do preallocation: http://www.mcs.anl.gov/petsc/documentation/faq.html#efficient-assembly SLEPc is appropriate for sparse matrices. 
If the matrices are not sparse then the methods are likely not appropriate. Jose > > > On 04/30/2014 01:30 PM, Jose E. Roman wrote: >> El 30/04/2014, a las 17:25, Steve Ndengue escribi?: >> >> >>> Yes, the matrix is sparse. >>> >> How sparse? >> How are you running the solver? >> Where is the time spent (log_summary)? >> >> Jose >> >> >> >>> >>> On 04/30/2014 10:19 AM, Jose E. Roman wrote: >>> >>>> El 30/04/2014, a las 17:10, Steve Ndengue escribi?: >>>> >>>> >>>> >>>>> Dear all, >>>>> >>>>> I have few questions on achieving convergence with SLEPC. >>>>> I am doing some comparison on how SLEPC performs compare to a LAPACK installation on my system (an 8 processors icore7 with 3.4 GHz running Ubuntu). >>>>> >>>>> 1/ It appears that a calculation requesting the LAPACK eigensolver runs faster using my libraries than when done with SLEPC selecting the 'lapack' method. I guess most of the time is spent when assembling the matrix? However if the time seems reasonable for a matrix of size less than 2000*2000, for one with 4000*4000 and above, the computation time seems more than ten times slower with SLEPC and the 'lapack' method!!! >>>>> >>>>> >>>> Once again, do not use SLEPc's 'lapack' method, it is just for debugging purposes. >>>> >>>> >>>> >>>>> 2/ I was however expecting that running an iterative calculation such as 'krylovschur', 'lanczos' or 'arnoldi' the time would be shorter but that is not the case. Inserting the Shift-and-Invert spectral transform, i could converge faster for small matrices but it takes more time using these iteratives methods than using the Lapack library on my system, when the size allows; even when requesting only few eigenstates (less than 50). >>>>> >>>>> >>>> Is your matrix sparse? >>>> >>>> >>>> >>>> >>>>> regarding the 2 previous comments I would like to know if there are some rules on how to ensure a fast convergence of a diagonalisation with SLEPC? >>>>> >>>>> 3/ About the diagonalisation on many processors, after we assign values to the matrix, does SLEPC automatically distribute the calculation among the requested processes or shall we need to insert commands on the code to enforce it? >>>>> >>>>> >>>> Read the manual, and have a look at examples that work in parallel (most of them). >>>> >>>> >>>> >>>>> Sincerely, >>>>> >>>>> >>>>> >>>>> -- >>>>> Steve >>>>> >>>>> >>>>> >>> >>> -- >>> Steve A. Ndengu? >>> --- >>> >>> > > > -- > Steve A. Ndengu? > --- > Postdoctoral Fellow > Department of Chemistry > Missouri University of Science and Technology > ---- > > From knepley at gmail.com Wed Apr 30 17:08:31 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Apr 2014 17:08:31 -0500 Subject: [petsc-users] singular matrix solve using MAT_SHIFT_POSITIVE_DEFINITE option In-Reply-To: <4E6B33F4128CED4DB307BA83146E9A64258E28C4@SRV362.tudelft.net> References: <4E6B33F4128CED4DB307BA83146E9A64258E28C4@SRV362.tudelft.net> Message-ID: On Wed, Apr 30, 2014 at 2:53 PM, Danny Lathouwers - TNW < D.Lathouwers at tudelft.nl> wrote: > Dear users, > > > > I encountered a strange problem. I have a singular matrix P (Poisson, > Neumann boundary conditions, N=4). The rhs b sums to 0. > > If I hand-fill the matrix with the right entries (non-zeroes only) things > work with KSPCG and ICC preconditioning and using the > MAT_SHIFT_POSITIVE_DEFINITE option. Convergence in 2 iterations to (a) > correct solution. So far for the debugging problem. > That option changes the preconditioner matrix to (alpha I + P). 
I don't know of a theoretical reason that this should be a good preconditioner, but perhaps it exists. Certainly ICC is exquisitely sensitive (you can easily write down matrices where an epsilon change destroys convergence). Yes, you should use null space, and here it is really easy -ksp_constant_null_space Its possible that this fixes your convergence, if the ICC perturbation was introducing components in the null space to your solution. Matt > My real problem computes P from D * M * D^T. If I do this I get the same > matrix (on std out I do not see the difference to all digits). > > The system P * x = b now does NOT converge. > > More strange is that is if I remove the zeroes from D then things do work > again. > > Either things are overly sensitive or I am misusing petsc. > > It does work when using e.g. the AMG preconditioner (again it is a correct > but different solution). So system really seems OK. > > > > Should I also use the Null space commands as I have seen in some of the > examples as well? > > But, I recall from many years ago when using MICCG (alpha) preconditioning > that no such tricks were needed for CG with Poisson-Neumann. I am supposing > the MAT_SHIFT_POSITIVE_DEFINITE option does something similar as MICCG. > > > > For clarity I have included the code (unfortunately this is the smallest I > could get it; it?s quite straightforward though). > > By setting the value of option to 1 in main.f90 the code use P = D * M * > D^T otherwise it will use the hand-filled matrix. > > The code prints the matrix P and solution etc. > > > > Anyone any hints on this? > > What other preconditioners (serial) are suitable for this problem besides > ICC/AMG? > > > > Thanks very much. > > Danny Lathouwers > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsd1 at rice.edu Wed Apr 30 17:28:05 2014 From: jsd1 at rice.edu (Justin Dong) Date: Wed, 30 Apr 2014 17:28:05 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: <27731913-7032-44A5-B058-EDE4A583502D@mcs.anl.gov> References: <27731913-7032-44A5-B058-EDE4A583502D@mcs.anl.gov> Message-ID: Problem solved. It as user error on my part. The parallel solve was working correctly but when I was computing the functional errors, I needed to extract an array from the solution vector. Not all of the processes had finished assembling yet, so I think that caused some problems with the array. I'm noticing though that superlu_dist is taking longer than just using PCLU in sequential. Using the time function in Mac terminal: 34.59 real 8.12 user 7.76 sys 34.59 real 8.74 user 7.87 sys 34.60 real 8.06 user 7.80 sys 34.59 real 8.84 user 7.77 sys In sequential: 17.22 real 16.79 user 0.23 sys Is this at all expected? My code is around 2x faster in parallel (on a dual core machine). I tried -pc_type redundant -redundant_pc_type lu but that didn't speed up the parallel case. On Wed, Apr 30, 2014 at 1:19 PM, Barry Smith wrote: > > Please send the same thing on one process. 
> > > On Apr 30, 2014, at 8:17 AM, Justin Dong wrote: > > > Here are the results of one example where the solution is incorrect: > > > > 0 KSP unpreconditioned resid norm 7.267616711036e-05 true resid norm > 7.267616711036e-05 ||r(i)||/||b|| 1.000000000000e+00 > > > > 1 KSP unpreconditioned resid norm 4.081398605668e-16 true resid norm > 4.017029301117e-16 ||r(i)||/||b|| 5.527299334618e-12 > > > > > > 2 KSP unpreconditioned resid norm 4.378737248697e-21 true resid norm > 4.545226736905e-16 ||r(i)||/||b|| 6.254081520291e-12 > > > > KSP Object: 4 MPI processes > > > > type: gmres > > > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > > > GMRES: happy breakdown tolerance 1e-30 > > > > maximum iterations=10000, initial guess is zero > > > > tolerances: relative=1e-13, absolute=1e-50, divergence=10000 > > > > right preconditioning > > > > using UNPRECONDITIONED norm type for convergence test > > > > PC Object: 4 MPI processes > > > > type: lu > > > > LU: out-of-place factorization > > > > tolerance for zero pivot 2.22045e-14 > > > > matrix ordering: natural > > > > factor fill ratio given 0, needed 0 > > > > Factored matrix follows: > > > > Matrix Object: 4 MPI processes > > > > type: mpiaij > > > > rows=1536, cols=1536 > > > > package used to perform factorization: superlu_dist > > > > total: nonzeros=0, allocated nonzeros=0 > > > > total number of mallocs used during MatSetValues calls =0 > > > > SuperLU_DIST run parameters: > > > > Process grid nprow 2 x npcol 2 > > > > Equilibrate matrix TRUE > > > > Matrix input mode 1 > > > > Replace tiny pivots TRUE > > > > Use iterative refinement FALSE > > > > Processors in row 2 col partition 2 > > > > Row permutation LargeDiag > > > > Column permutation METIS_AT_PLUS_A > > > > Parallel symbolic factorization FALSE > > > > Repeated factorization SamePattern_SameRowPerm > > > > linear system matrix = precond matrix: > > > > Matrix Object: 4 MPI processes > > > > type: mpiaij > > > > rows=1536, cols=1536 > > > > total: nonzeros=17856, allocated nonzeros=64512 > > > > total number of mallocs used during MatSetValues calls =0 > > > > using I-node (on process 0) routines: found 128 nodes, limit used > is 5 > > > > > > > > On Wed, Apr 30, 2014 at 7:57 AM, Barry Smith wrote: > > > > On Apr 30, 2014, at 6:46 AM, Matthew Knepley wrote: > > > > > On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: > > > Thanks. If I turn on the Krylov solver, the issue still seems to > persist though. > > > > > > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu > -pc_factor_mat_solver_package superlu_dist > > > > > > I'm testing on a very small system now (24 ndofs), but if I go larger > (around 20000 ndofs) then it gets worse. > > > > > > For the small system, I exported the matrices to matlab to make sure > they were being assembled correct in parallel, and I'm certain that that > they are. > > > > > > For convergence questions, always run using -ksp_monitor -ksp_view so > that we can see exactly what you run. 
> > > > Also run with -ksp_pc_side right > > > > > > > > > > Thanks, > > > > > > Matt > > > > > > > > > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley > wrote: > > > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: > > > I actually was able to solve my own problem...for some reason, I need > to do > > > > > > PCSetType(pc, PCLU); > > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, > PETSC_DEFAULT); > > > > > > 1) Before you do SetType(PCLU) the preconditioner has no type, so > FactorSetMatSolverPackage() has no effect > > > > > > 2) There is a larger issue here. Never ever ever ever code in this > way. Hardcoding a solver is crazy. The solver you > > > use should depend on the equation, discretization, flow regime, > and architecture. Recompiling for all those is > > > out of the question. You should just use > > > > > > KSPCreate() > > > KSPSetOperators() > > > KSPSetFromOptions() > > > KSPSolve() > > > > > > and then > > > > > > -pc_type lu -pc_factor_mat_solver_package superlu_dist > > > > > > > > > instead of the ordering I initially had, though I'm not really clear > on what the issue was. However, there seems to be some loss of accuracy as > I increase the number of processes. Is this expected, or can I force a > lower tolerance somehow? I am able to compare the solutions to a reference > solution, and the error increases as I increase the processes. This is the > solution in sequential: > > > > > > Yes, this is unavoidable. However, just turn on the Krylov solver > > > > > > -ksp_type gmres -ksp_rtol 1.0e-10 > > > > > > and you can get whatever residual tolerance you want. To get a > specific error, you would need > > > a posteriori error estimation, which you could include in a custom > convergence criterion. > > > > > > Thanks, > > > > > > Matt > > > > > > superlu_1process = [ > > > -6.8035811950925553e-06 > > > 1.6324030474375778e-04 > > > 5.4145340579614926e-02 > > > 1.6640521936281516e-04 > > > -1.7669374392923965e-04 > > > -2.8099208957838207e-04 > > > 5.3958133511222223e-02 > > > -5.4077899123806263e-02 > > > -5.3972905090366369e-02 > > > -1.9485020474821160e-04 > > > 5.4239813043824400e-02 > > > 4.4883984259948430e-04]; > > > > > > superlu_2process = [ > > > -6.8035811950509821e-06 > > > 1.6324030474371623e-04 > > > 5.4145340579605655e-02 > > > 1.6640521936281687e-04 > > > -1.7669374392923807e-04 > > > -2.8099208957839834e-04 > > > 5.3958133511212911e-02 > > > -5.4077899123796964e-02 > > > -5.3972905090357078e-02 > > > -1.9485020474824480e-04 > > > 5.4239813043815172e-02 > > > 4.4883984259953320e-04]; > > > > > > superlu_4process= [ > > > -6.8035811952565206e-06 > > > 1.6324030474386164e-04 > > > 5.4145340579691455e-02 > > > 1.6640521936278326e-04 > > > -1.7669374392921441e-04 > > > -2.8099208957829171e-04 > > > 5.3958133511299078e-02 > > > -5.4077899123883062e-02 > > > -5.3972905090443085e-02 > > > -1.9485020474806352e-04 > > > 5.4239813043900860e-02 > > > 4.4883984259921287e-04]; > > > > > > This is some finite element solution and I can compute the error of > the solution against an exact solution in the functional L2 norm: > > > > > > error with 1 process: 1.71340e-02 (accepted value) > > > error with 2 processes: 2.65018e-02 > > > error with 3 processes: 3.00164e-02 > > > error with 4 processes: 3.14544e-02 > > > > > > > > > Is there a way to remedy this? 
> > > > > > > > > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: > > > Hi, > > > > > > I'm trying to solve a linear system in parallel using SuperLU but for > some reason, it's not giving me the correct solution. I'm testing on a > small example so I can compare the sequential and parallel cases manually. > I'm absolutely sure that my routine for generating the matrix and > right-hand side in parallel is working correctly. > > > > > > Running with 1 process and PCLU gives the correct solution. Running > with 2 processes and using SUPERLU_DIST does not give the correct solution > (I tried with 1 process too but according to the superlu documentation, I > would need SUPERLU for sequential?). This is the code for solving the > system: > > > > > > /* solve the system */ > > > KSPCreate(PETSC_COMM_WORLD, &ksp); > > > KSPSetOperators(ksp, Aglobal, Aglobal, > DIFFERENT_NONZERO_PATTERN); > > > KSPSetType(ksp,KSPPREONLY); > > > > > > KSPGetPC(ksp, &pc); > > > > > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, > PETSC_DEFAULT); > > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); > > > > > > KSPSolve(ksp, bglobal, bglobal); > > > > > > Sincerely, > > > Justin > > > > > > > > > > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > > -- Norbert Wiener > > > > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > > -- Norbert Wiener > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 30 19:26:34 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Apr 2014 19:26:34 -0500 Subject: [petsc-users] MATSOLVERSUPERLU_DIST not giving the correct solution In-Reply-To: References: <27731913-7032-44A5-B058-EDE4A583502D@mcs.anl.gov> Message-ID: On Wed, Apr 30, 2014 at 5:28 PM, Justin Dong wrote: > Problem solved. It as user error on my part. The parallel solve was > working correctly but when I was computing the functional errors, I needed > to extract an array from the solution vector. Not all of the processes had > finished assembling yet, so I think that caused some problems with the > array. > > I'm noticing though that superlu_dist is taking longer than just using > PCLU in sequential. Using the time function in Mac terminal: > > 34.59 real 8.12 user 7.76 sys > > 34.59 real 8.74 user 7.87 sys > > 34.60 real 8.06 user 7.80 sys > > 34.59 real 8.84 user 7.77 sys > > > In sequential: > > 17.22 real 16.79 user 0.23 sys > > > Is this at all expected? My code is around 2x faster in parallel (on a > dual core machine). I tried -pc_type redundant -redundant_pc_type lu but > that didn't speed up the parallel case. > Its definitely possible, depends on your machine: http://www.mcs.anl.gov/petsc/documentation/faq.html#computers Your assembly is mostly compute-bound so it scales nicely. Matt > > On Wed, Apr 30, 2014 at 1:19 PM, Barry Smith wrote: > >> >> Please send the same thing on one process. 
>> >> >> On Apr 30, 2014, at 8:17 AM, Justin Dong wrote: >> >> > Here are the results of one example where the solution is incorrect: >> > >> > 0 KSP unpreconditioned resid norm 7.267616711036e-05 true resid norm >> 7.267616711036e-05 ||r(i)||/||b|| 1.000000000000e+00 >> > >> > 1 KSP unpreconditioned resid norm 4.081398605668e-16 true resid norm >> 4.017029301117e-16 ||r(i)||/||b|| 5.527299334618e-12 >> > >> > >> > 2 KSP unpreconditioned resid norm 4.378737248697e-21 true resid norm >> 4.545226736905e-16 ||r(i)||/||b|| 6.254081520291e-12 >> > >> > KSP Object: 4 MPI processes >> > >> > type: gmres >> > >> > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >> Orthogonalization with no iterative refinement >> > >> > GMRES: happy breakdown tolerance 1e-30 >> > >> > maximum iterations=10000, initial guess is zero >> > >> > tolerances: relative=1e-13, absolute=1e-50, divergence=10000 >> > >> > right preconditioning >> > >> > using UNPRECONDITIONED norm type for convergence test >> > >> > PC Object: 4 MPI processes >> > >> > type: lu >> > >> > LU: out-of-place factorization >> > >> > tolerance for zero pivot 2.22045e-14 >> > >> > matrix ordering: natural >> > >> > factor fill ratio given 0, needed 0 >> > >> > Factored matrix follows: >> > >> > Matrix Object: 4 MPI processes >> > >> > type: mpiaij >> > >> > rows=1536, cols=1536 >> > >> > package used to perform factorization: superlu_dist >> > >> > total: nonzeros=0, allocated nonzeros=0 >> > >> > total number of mallocs used during MatSetValues calls =0 >> > >> > SuperLU_DIST run parameters: >> > >> > Process grid nprow 2 x npcol 2 >> > >> > Equilibrate matrix TRUE >> > >> > Matrix input mode 1 >> > >> > Replace tiny pivots TRUE >> > >> > Use iterative refinement FALSE >> > >> > Processors in row 2 col partition 2 >> > >> > Row permutation LargeDiag >> > >> > Column permutation METIS_AT_PLUS_A >> > >> > Parallel symbolic factorization FALSE >> > >> > Repeated factorization SamePattern_SameRowPerm >> > >> > linear system matrix = precond matrix: >> > >> > Matrix Object: 4 MPI processes >> > >> > type: mpiaij >> > >> > rows=1536, cols=1536 >> > >> > total: nonzeros=17856, allocated nonzeros=64512 >> > >> > total number of mallocs used during MatSetValues calls =0 >> > >> > using I-node (on process 0) routines: found 128 nodes, limit used >> is 5 >> > >> > >> > >> > On Wed, Apr 30, 2014 at 7:57 AM, Barry Smith >> wrote: >> > >> > On Apr 30, 2014, at 6:46 AM, Matthew Knepley wrote: >> > >> > > On Wed, Apr 30, 2014 at 6:19 AM, Justin Dong wrote: >> > > Thanks. If I turn on the Krylov solver, the issue still seems to >> persist though. >> > > >> > > mpiexec -n 4 -ksp_type gmres -ksp_rtol 1.0e-13 -pc_type lu >> -pc_factor_mat_solver_package superlu_dist >> > > >> > > I'm testing on a very small system now (24 ndofs), but if I go larger >> (around 20000 ndofs) then it gets worse. >> > > >> > > For the small system, I exported the matrices to matlab to make sure >> they were being assembled correct in parallel, and I'm certain that that >> they are. >> > > >> > > For convergence questions, always run using -ksp_monitor -ksp_view so >> that we can see exactly what you run. 
>> > >> > Also run with -ksp_pc_side right >> > >> > >> > > >> > > Thanks, >> > > >> > > Matt >> > > >> > > >> > > On Wed, Apr 30, 2014 at 5:32 AM, Matthew Knepley >> wrote: >> > > On Wed, Apr 30, 2014 at 3:02 AM, Justin Dong wrote: >> > > I actually was able to solve my own problem...for some reason, I need >> to do >> > > >> > > PCSetType(pc, PCLU); >> > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >> > > KSPSetTolerances(ksp, 1.e-15, PETSC_DEFAULT, PETSC_DEFAULT, >> PETSC_DEFAULT); >> > > >> > > 1) Before you do SetType(PCLU) the preconditioner has no type, so >> FactorSetMatSolverPackage() has no effect >> > > >> > > 2) There is a larger issue here. Never ever ever ever code in this >> way. Hardcoding a solver is crazy. The solver you >> > > use should depend on the equation, discretization, flow regime, >> and architecture. Recompiling for all those is >> > > out of the question. You should just use >> > > >> > > KSPCreate() >> > > KSPSetOperators() >> > > KSPSetFromOptions() >> > > KSPSolve() >> > > >> > > and then >> > > >> > > -pc_type lu -pc_factor_mat_solver_package superlu_dist >> > > >> > > >> > > instead of the ordering I initially had, though I'm not really clear >> on what the issue was. However, there seems to be some loss of accuracy as >> I increase the number of processes. Is this expected, or can I force a >> lower tolerance somehow? I am able to compare the solutions to a reference >> solution, and the error increases as I increase the processes. This is the >> solution in sequential: >> > > >> > > Yes, this is unavoidable. However, just turn on the Krylov solver >> > > >> > > -ksp_type gmres -ksp_rtol 1.0e-10 >> > > >> > > and you can get whatever residual tolerance you want. To get a >> specific error, you would need >> > > a posteriori error estimation, which you could include in a custom >> convergence criterion. >> > > >> > > Thanks, >> > > >> > > Matt >> > > >> > > superlu_1process = [ >> > > -6.8035811950925553e-06 >> > > 1.6324030474375778e-04 >> > > 5.4145340579614926e-02 >> > > 1.6640521936281516e-04 >> > > -1.7669374392923965e-04 >> > > -2.8099208957838207e-04 >> > > 5.3958133511222223e-02 >> > > -5.4077899123806263e-02 >> > > -5.3972905090366369e-02 >> > > -1.9485020474821160e-04 >> > > 5.4239813043824400e-02 >> > > 4.4883984259948430e-04]; >> > > >> > > superlu_2process = [ >> > > -6.8035811950509821e-06 >> > > 1.6324030474371623e-04 >> > > 5.4145340579605655e-02 >> > > 1.6640521936281687e-04 >> > > -1.7669374392923807e-04 >> > > -2.8099208957839834e-04 >> > > 5.3958133511212911e-02 >> > > -5.4077899123796964e-02 >> > > -5.3972905090357078e-02 >> > > -1.9485020474824480e-04 >> > > 5.4239813043815172e-02 >> > > 4.4883984259953320e-04]; >> > > >> > > superlu_4process= [ >> > > -6.8035811952565206e-06 >> > > 1.6324030474386164e-04 >> > > 5.4145340579691455e-02 >> > > 1.6640521936278326e-04 >> > > -1.7669374392921441e-04 >> > > -2.8099208957829171e-04 >> > > 5.3958133511299078e-02 >> > > -5.4077899123883062e-02 >> > > -5.3972905090443085e-02 >> > > -1.9485020474806352e-04 >> > > 5.4239813043900860e-02 >> > > 4.4883984259921287e-04]; >> > > >> > > This is some finite element solution and I can compute the error of >> the solution against an exact solution in the functional L2 norm: >> > > >> > > error with 1 process: 1.71340e-02 (accepted value) >> > > error with 2 processes: 2.65018e-02 >> > > error with 3 processes: 3.00164e-02 >> > > error with 4 processes: 3.14544e-02 >> > > >> > > >> > > Is there a way to remedy this? 
>> > > >> > > >> > > On Wed, Apr 30, 2014 at 2:37 AM, Justin Dong wrote: >> > > Hi, >> > > >> > > I'm trying to solve a linear system in parallel using SuperLU but for >> some reason, it's not giving me the correct solution. I'm testing on a >> small example so I can compare the sequential and parallel cases manually. >> I'm absolutely sure that my routine for generating the matrix and >> right-hand side in parallel is working correctly. >> > > >> > > Running with 1 process and PCLU gives the correct solution. Running >> with 2 processes and using SUPERLU_DIST does not give the correct solution >> (I tried with 1 process too but according to the superlu documentation, I >> would need SUPERLU for sequential?). This is the code for solving the >> system: >> > > >> > > /* solve the system */ >> > > KSPCreate(PETSC_COMM_WORLD, &ksp); >> > > KSPSetOperators(ksp, Aglobal, Aglobal, >> DIFFERENT_NONZERO_PATTERN); >> > > KSPSetType(ksp,KSPPREONLY); >> > > >> > > KSPGetPC(ksp, &pc); >> > > >> > > KSPSetTolerances(ksp, 1.e-13, PETSC_DEFAULT, PETSC_DEFAULT, >> PETSC_DEFAULT); >> > > PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); >> > > >> > > KSPSolve(ksp, bglobal, bglobal); >> > > >> > > Sincerely, >> > > Justin >> > > >> > > >> > > >> > > >> > > >> > > >> > > -- >> > > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > > -- Norbert Wiener >> > > >> > > >> > > >> > > >> > > -- >> > > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > > -- Norbert Wiener >> > >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 30 20:46:46 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 30 Apr 2014 20:46:46 -0500 Subject: [petsc-users] Convergence criterion for composite preconditioner In-Reply-To: <77a08f0d253643.53610238@unimi.it> References: <76b0fcd5254cd4.53612f5f@unimi.it> <7740c157252335.536131f5@unimi.it> <76a0b740253e1a.53613232@unimi.it> <7760e502256d52.53613270@unimi.it> <76b0f4e5254ae6.536132ad@unimi.it> <77d0dc13252c37.536132eb@unimi.it> <76d0941f2564c4.53613328@unimi.it> <7770f87325437d.53613365@unimi.it> <76d0e0b8257576.536133a3@unimi.it> <76b0b2562553e5.536133e0@unimi.it> <77a08f0d253643.53610238@unimi.it> Message-ID: <97599E13-1FF9-461C-A28F-FDA803B95F2F@mcs.anl.gov> On Apr 30, 2014, at 2:01 PM, Federico Marini wrote: > Dear PETSc users, > > I implemented PCCOMPOSITE preconditioner of type PC_COMPOSITE_ADDITIVE. It has two components: > > - pc1: PCASM 1-level additive Schwarz preconditioning > - pc2: PCSHELL 2-level additive Schwarz preconditioning > > so basically the whole preconditioner is a 2-level additive Schwarz preconditioner, where the second level is hand-made in a MATSHELL matrix structure. For PCASM, I use defalut options. Be careful here. By default PCASM uses RASM (restricted Additive Schwarz method) which is not a symmetric preconditioner so will not work properly with KSPCG. You can use PCASMSetType(pc1, PC_ASM_BASIC) or -sub_1_pc_asm_type basic > I run tests with PCSHELL activated and deactivated. 
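As a sketch of that PCASMSetType() suggestion in code form (assuming ksp is the outer solver and the ASM component is the first one added, i.e. index 0; adjust the index or the -sub_N_ prefix to the actual ordering):

   PC pc, sub;

   KSPGetPC(ksp, &pc);
   PCSetType(pc, PCCOMPOSITE);
   PCCompositeSetType(pc, PC_COMPOSITE_ADDITIVE);
   PCCompositeAddPC(pc, PCASM);
   PCCompositeAddPC(pc, PCSHELL);          /* PCShellSetApply() etc. then go on component 1 */

   PCCompositeGetPC(pc, 0, &sub);          /* the one-level ASM component */
   PCASMSetType(sub, PC_ASM_BASIC);        /* symmetric ASM instead of the default restricted variant */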
> > I use the default stopping criterion with this setting: > > ierr=KSPSetTolerances(ksp,1e-7,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); > > In the end I get these convergence results with 256 MPI tasks (1054729 total dofs) using KSPCG > > PSASM only: > ... > 134 KSP preconditioned resid norm 4.680038240771e-06 true resid norm 5.417972183036e-07 ||r(i)||/||b|| 2.402197263846e-06 > 135 KSP preconditioned resid norm 3.681603361690e-06 true resid norm 4.091445069661e-07 ||r(i)||/||b|| 1.814047363013e-06 > 136 KSP preconditioned resid norm 2.938575711894e-06 true resid norm 3.280865690135e-07 ||r(i)||/||b|| 1.454656155040e-06 > - convergence detected > > PCASM + PCSHELL > ... > 22 KSP preconditioned resid norm 6.610753500258e-05 true resid norm 4.725684013113e-06 ||r(i)||/||b|| 2.095253504927e-05 > 23 KSP preconditioned resid norm 1.173818907551e-04 true resid norm 2.984650385207e-06 ||r(i)||/||b|| 1.323321483881e-05 > 24 KSP preconditioned resid norm 4.409443889428e-05 true resid norm 1.959757043931e-06 ||r(i)||/||b|| 8.689086709369e-06 > - convergence detected > > The preconditioner, in both (1- and 2-level) versions, works fine, but the relative residual norm stopping criterion is not achieved. Observing the residual norms, I don't think the next iteration would achieve the requested 10^-7 tolerance. The default test for CG uses left preconditioner and the relative decrease in the preconditioned residual norm. So in this case it is 2.938575711894e-06/initialpreconditionednorm and 4.409443889428e-05/initialpreconditionednorm. Since you didn?t send the 0th iteration I don?t know what the initial norms are but I am confident it has decreased by 1e-7. BTW: with CG you can use -ksp_norm_type preconditioned or unpreconditioned or natural or KSPSetNormType(ksp, see the manual page) to use different norms for testing. Barry > > Can anyone explain me why? > > Thank you in advance, > Federico Marini > > Il 5 x mille alla nostra Universit? ? un investimento sui giovani, > sui loro migliori progetti. > Sostiene la libera ricerca. > Alimenta le loro speranze nel futuro. > > Investi il tuo 5 x mille sui giovani. > From bsmith at mcs.anl.gov Wed Apr 30 21:08:28 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 30 Apr 2014 21:08:28 -0500 Subject: [petsc-users] singular matrix solve using MAT_SHIFT_POSITIVE_DEFINITE option In-Reply-To: <4E6B33F4128CED4DB307BA83146E9A64258E28C4@SRV362.tudelft.net> References: <4E6B33F4128CED4DB307BA83146E9A64258E28C4@SRV362.tudelft.net> Message-ID: You should use algebraic or geometric multigrid for this problem. Also use the null space command. You should not use ICC(0) it just won?t be competitive for large problems and as Matt says is extremely sensitive to that zero eigenvalue. Barry On Apr 30, 2014, at 2:53 PM, Danny Lathouwers - TNW wrote: > Dear users, > > I encountered a strange problem. I have a singular matrix P (Poisson, Neumann boundary conditions, N=4). The rhs b sums to 0. > If I hand-fill the matrix with the right entries (non-zeroes only) things work with KSPCG and ICC preconditioning and using the MAT_SHIFT_POSITIVE_DEFINITE option. Convergence in 2 iterations to (a) correct solution. So far for the debugging problem. > > My real problem computes P from D * M * D^T. If I do this I get the same matrix (on std out I do not see the difference to all digits). > The system P * x = b now does NOT converge. > More strange is that is if I remove the zeroes from D then things do work again. 
> Either things are overly sensitive or I am misusing petsc.
> It does work when using e.g. the AMG preconditioner (again it is a correct but different solution). So the system really seems OK.
>
> Should I also use the Null space commands as I have seen in some of the examples as well?
> But, I recall from many years ago when using MICCG (alpha) preconditioning that no such tricks were needed for CG with Poisson-Neumann. I am supposing the MAT_SHIFT_POSITIVE_DEFINITE option does something similar to MICCG.
>
> For clarity I have included the code (unfortunately this is the smallest I could get it; it's quite straightforward though).
> By setting the value of option to 1 in main.f90 the code uses P = D * M * D^T, otherwise it will use the hand-filled matrix.
> The code prints the matrix P and solution etc.
>
> Anyone any hints on this?
> What other preconditioners (serial) are suitable for this problem besides ICC/AMG?
>
> Thanks very much.
> Danny Lathouwers
>
>
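For what it's worth, a sketch of the null-space suggestion in C (the attached code is Fortran, but the calls mirror it); here P, b, x and ksp are assumed to be the assembled singular matrix, right-hand side, solution vector and solver:

   MatNullSpace nullsp;

   /* the null space of the Neumann Poisson operator is the constant vector */
   MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nullsp);
   KSPSetNullSpace(ksp, nullsp);
   MatNullSpaceDestroy(&nullsp);

   KSPSetOperators(ksp, P, P, DIFFERENT_NONZERO_PATTERN);
   KSPSetFromOptions(ksp);            /* e.g. -ksp_type cg -pc_type gamg */
   KSPSolve(ksp, b, x);

On the command line the same effect comes from -ksp_constant_null_space, combined with -pc_type gamg (or -pc_type mg) for the multigrid suggestion.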