From knepley at gmail.com  Sun Feb  1 10:57:37 2015
From: knepley at gmail.com (Matthew Knepley)
Date: Sun, 1 Feb 2015 10:57:37 -0600
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: <87twz65lat.fsf@jedbrown.org>
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org>
Message-ID: 

On Sat, Jan 31, 2015 at 10:04 PM, Jed Brown wrote:

> Matthew Knepley writes:
>
> > On Sat, Jan 31, 2015 at 4:19 PM, Satish Balay wrote:
> >
> >> Dan,
> >>
> >> I'm forwarding this to the petsc-users e-mail list. I'm not familiar
> >> with Charm++ or AMPI - but others might have suggestions.
> >>
> >> Also - if you can send us configure.log for the failure with AMPI - we
> >> can look at it - and see if there is an issue from the PETSc side.
> >>
> >
> > Also, I cannot find the download for AMPI. Can you mail it so we can try
> > it here?
>
> http://charm.cs.uiuc.edu/research/ampi/
>
> Barry experimented with this a while back. It is not currently
> supported and my understanding is that PETSc would need public API
> changes to support AMPI. This might be possible as part of the
> thread-safety work.

I went there before, but there is no download link.

I know Barry did this before, but now they are telling everyone that it is
an MPI implementation.

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov  Sun Feb  1 12:07:41 2015
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Sun, 1 Feb 2015 12:07:41 -0600
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: 
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org>
Message-ID: 

   It is inside the Charm download. From README.ampi:

Porting to AMPI
---------------
Global and static variables are unusable in virtualized AMPI programs, because
a separate copy would be needed for each VP. Therefore, to run with more than
1 VP per processor, all globals and statics must be modified to use local
storage.

   Barry

> On Feb 1, 2015, at 10:57 AM, Matthew Knepley wrote:
>
> On Sat, Jan 31, 2015 at 10:04 PM, Jed Brown wrote:
> Matthew Knepley writes:
>
> > On Sat, Jan 31, 2015 at 4:19 PM, Satish Balay wrote:
> >
> >> Dan,
> >>
> >> I'm forwarding this to the petsc-users e-mail list. I'm not familiar
> >> with Charm++ or AMPI - but others might have suggestions.
> >>
> >> Also - if you can send us configure.log for the failure with AMPI - we
> >> can look at it - and see if there is an issue from the PETSc side.
> >>
> >
> > Also, I cannot find the download for AMPI. Can you mail it so we can try
> > it here?
>
> http://charm.cs.uiuc.edu/research/ampi/
>
> Barry experimented with this a while back. It is not currently
> supported and my understanding is that PETSc would need public API
> changes to support AMPI. This might be possible as part of the
> thread-safety work.
>
> I went there before, but there is no download link.
>
> I know Barry did this before, but now they are telling everyone that it is
> an MPI implementation.
>
>    Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener

From jed at jedbrown.org  Sun Feb  1 19:58:51 2015
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 01 Feb 2015 18:58:51 -0700
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: 
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org>
Message-ID: <87oapd5b1g.fsf@jedbrown.org>

Barry Smith writes:
> Porting to AMPI
> ---------------
> Global and static variables are unusable in virtualized AMPI programs, because
> a separate copy would be needed for each VP. Therefore, to run with more than
> 1 VP per processor, all globals and statics must be modified to use local
> storage.

This is more than is needed for thread safety, but removing all
variables with static linkage is one reliable way to ensure thread
safety.  This would require a library context of some sort, which
unfortunately doesn't interact well with profiling and debugging and
makes it more difficult to load plugins (the abstraction leaks because
dlopen has global effects, as does IO).  I don't know if we've thought
carefully about the usability cost of eradicating all variables with
static linkage.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL: 

From bsmith at mcs.anl.gov  Sun Feb  1 21:24:27 2015
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Sun, 1 Feb 2015 21:24:27 -0600
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: <87oapd5b1g.fsf@jedbrown.org>
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org>
Message-ID: <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov>


> On Feb 1, 2015, at 7:58 PM, Jed Brown wrote:
>
> Barry Smith writes:
>> Porting to AMPI
>> ---------------
>> Global and static variables are unusable in virtualized AMPI programs, because
>> a separate copy would be needed for each VP. Therefore, to run with more than
>> 1 VP per processor, all globals and statics must be modified to use local
>> storage.
>
> This is more than is needed for thread safety, but removing all
> variables with static linkage is one reliable way to ensure thread
> safety.

   For our current support for --with-threadsafety, all "global" variables
are created and initialized, etc., only in PetscInitialize()/PetscFinalize(),
so the user is free to use threads between PetscInitialize() and
PetscFinalize(). And, of course, profiling is turned off.

   We could possibly "cheat" with AMPI to essentially have
PetscInitialize()/Finalize() run through most of their code only on thread 0
(assuming we have a way of determining thread 0) to create the "global" data
structures (this is the registration of objects, classids, etc.) and only
call MPI_Init() and whatever else needs to be called by all of them. It may
not be too ugly, but it is yet another "special case" leading to more
complexity of the code base. I'd be happy to see a branch attempting this.

   Barry

> This would require a library context of some sort, which
> unfortunately doesn't interact well with profiling and debugging and
> makes it more difficult to load plugins (the abstraction leaks because
> dlopen has global effects, as does IO).  I don't know if we've thought
> carefully about the usability cost of eradicating all variables with
> static linkage.
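To make the "library context" idea above concrete, here is a minimal C
sketch of what moving mutable file-scope static state into an explicit,
caller-owned context could look like. All of the names (LibraryCtx,
LibraryTSInitialize, ...) are hypothetical illustrations, not existing PETSc
API, and MPI is ignored entirely; the point is only that every call then has
to carry the context around, which is the usability cost discussed above.

  /* Illustrative sketch only: a hand-rolled "library context" holding state
   * that would otherwise live in mutable static variables.  Hypothetical
   * names; not part of the PETSc API.  (Static *functions* are fine; it is
   * mutable static *data* that AMPI virtualization and thread safety object
   * to.) */
  #include <stdio.h>
  #include <stdlib.h>

  typedef struct {
    int   ts_package_initialized;  /* replaces a static like TSPackageInitialized */
    int   num_registered_classes;  /* replaces static registration counters       */
    FILE *logfile;                 /* per-context I/O instead of a global stream  */
  } LibraryCtx;

  static int LibraryCtxCreate(LibraryCtx **ctx)
  {
    *ctx = (LibraryCtx *)calloc(1, sizeof(LibraryCtx));
    if (!*ctx) return 1;
    (*ctx)->logfile = stdout;
    return 0;
  }

  static int LibraryTSInitialize(LibraryCtx *ctx)
  {
    if (ctx->ts_package_initialized) return 0;  /* per-context, so no global race */
    ctx->num_registered_classes++;              /* "register" a class id          */
    ctx->ts_package_initialized = 1;
    return 0;
  }

  static int LibraryCtxDestroy(LibraryCtx **ctx)
  {
    free(*ctx);
    *ctx = NULL;
    return 0;
  }

  int main(void)
  {
    LibraryCtx *ctx;
    if (LibraryCtxCreate(&ctx)) return 1;
    LibraryTSInitialize(ctx);   /* every "package" call now takes the context */
    fprintf(ctx->logfile, "classes registered in this context: %d\n",
            ctx->num_registered_classes);
    return LibraryCtxDestroy(&ctx);
  }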
From jed at jedbrown.org  Sun Feb  1 21:33:15 2015
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 01 Feb 2015 20:33:15 -0700
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov>
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org>
 <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov>
Message-ID: <87fvap56o4.fsf@jedbrown.org>

Barry Smith writes:
> We could possibly "cheat" with AMPI to essentially have
> PetscInitialize()/Finalize() run through most of their code only on
> thread 0 (assuming we have a way of determining thread 0)

Just guard it with a lock.

> to create the "global" data structures (this is the registration of
> objects, classids, etc.) and only call MPI_Init() and whatever else
> needs to be called by all of them. It may not be too ugly, but it is
> yet another "special case" leading to more complexity of the code base.

What about debugging and profiling?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL: 

From bsmith at mcs.anl.gov  Sun Feb  1 21:59:56 2015
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Sun, 1 Feb 2015 21:59:56 -0600
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: <87fvap56o4.fsf@jedbrown.org>
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org>
 <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov>
 <87fvap56o4.fsf@jedbrown.org>
Message-ID: <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov>


> On Feb 1, 2015, at 9:33 PM, Jed Brown wrote:
>
> Barry Smith writes:
>> We could possibly "cheat" with AMPI to essentially have
>> PetscInitialize()/Finalize() run through most of their code only on
>> thread 0 (assuming we have a way of determining thread 0)
>
> Just guard it with a lock.

   I am not sure what you mean here. We want that code to be run through
only once; we don't want or need it to be run by each thread. It makes no
sense for each thread to call TSInitializePackage(), for example.

>
>> to create the "global" data structures (this is the registration of
>> objects, classids, etc.) and only call MPI_Init() and whatever else
>> needs to be called by all of them. It may not be too ugly, but it is
>> yet another "special case" leading to more complexity of the code base.
>
> What about debugging and profiling?

   This is the same issue for "thread safety"* as for AMPI. I don't think
AMPI introduces any particular additional hitches.

   Barry

* in the sense in which it is currently implemented, meaning each thread
works on its own objects and so doesn't need to lock MatSetValues() etc.
This other "thread safety" has its own can of worms.
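As a minimal sketch of the "just guard it with a lock" idea (not PETSc's
actual implementation; the function names below are made up), one-time
package registration can be expressed with pthread_once: every thread may
call it, but the registration body runs exactly once and its effects are
visible to later callers. Note that this still relies on process-wide static
variables, so it only addresses the thread-race part of the discussion, not
the AMPI requirement that globals and statics be removed entirely.

  /* Illustrative sketch only: lock-guarded one-time package registration. */
  #include <pthread.h>
  #include <stdio.h>

  static pthread_once_t ts_once = PTHREAD_ONCE_INIT;
  static int ts_package_initialized = 0;  /* exactly the kind of static AMPI forbids */

  static void ts_initialize_package_body(void)
  {
    /* register class ids, events, ... exactly once */
    ts_package_initialized = 1;
  }

  static void ExampleTSInitializePackage(void)
  {
    /* pthread_once guarantees the body runs once and that all callers
       returning from here observe its effects */
    pthread_once(&ts_once, ts_initialize_package_body);
  }

  static void *worker(void *arg)
  {
    (void)arg;
    ExampleTSInitializePackage();
    return NULL;
  }

  int main(void)
  {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("initialized = %d\n", ts_package_initialized);
    return 0;
  }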
From jed at jedbrown.org  Sun Feb  1 22:15:26 2015
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 01 Feb 2015 21:15:26 -0700
Subject: [petsc-users] PETSc and AMPI
In-Reply-To: <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov>
References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu>
 <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org>
 <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov>
 <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov>
Message-ID: <87d25t54pt.fsf@jedbrown.org>

Barry Smith writes:
>> On Feb 1, 2015, at 9:33 PM, Jed Brown wrote:
>>
>> Barry Smith writes:
>>> We could possibly "cheat" with AMPI to essentially have
>>> PetscInitialize()/Finalize() run through most of their code only on
>>> thread 0 (assuming we have a way of determining thread 0)
>>
>> Just guard it with a lock.
>
> I am not sure what you mean here. We want that code to be run through
> only once; we don't want or need it to be run by each thread. It makes no
> sense for each thread to call TSInitializePackage(), for example.

Yes, as long as the threads see TSPackageInitialized as true, it's safe
to call.  So the only problem is that we have a race condition.  One way
to do this is to make TSPackageInitialized an int.  The code looks
something like this (depending on the primitives):

  if (AtomicCompareAndSwap(&TSPackageInitialized,0,1)) {
    do the initialization
    TSPackageInitialized = 2;
    MemoryFenceWrite();
  } else {
    while (AccessOnce(TSPackageInitialized) != 2) CPURelax();
  }

>> What about debugging and profiling?
>
> This is the same issue for "thread safety"* as for AMPI. I don't think
> AMPI introduces any particular additional hitches.
>
>   Barry
>
> * in the sense in which it is currently implemented, meaning each thread
> works on its own objects and so doesn't need to lock MatSetValues() etc.
> This other "thread safety" has its own can of worms.

If AMPI creates threads dynamically, we don't have the luxury of having
hooks that can run when threads are spawned or finish.  How do we ensure
that profiling information has been propagated into the parent
structure?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL: 

From gabel.fabian at gmail.com  Mon Feb  2 03:39:22 2015
From: gabel.fabian at gmail.com (Fabian Gabel)
Date: Mon, 02 Feb 2015 10:39:22 +0100
Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm
Message-ID: <1422869962.961.2.camel@gmail.com>

Dear PETSc Team,

I have implemented a fully-coupled solution algorithm (finite volume
method) for the 3d stationary incompressible Navier-Stokes equations.
Currently I solve the resulting linear systems using GMRES with ILU, and I
wanted to ask whether solver convergence could be improved using a
field-split preconditioner. The possibility of using PCFIELDSPLIT (the
matrix is stored interlaced) has already been implemented in my solver
program, but I am not sure how to choose the parameters.

Each field corresponds to one of the variables (u,v,w,p). In terms of the
corresponding blocks A_.., the non-interlaced matrix would read

  [A_uu 0    0    A_up]
  [0    A_vv 0    A_vp]
  [0    0    A_ww A_wp]
  [A_pu A_pv A_pw A_pp]

where furthermore A_uu = A_vv = A_ww, which could be exploited to further
improve the efficiency of the solve.

You will find attached the solver output for an analytical test case with
2e6 cells, each having 4 degrees of freedom.
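One possible starting set of PCFIELDSPLIT options for such a 4-field
interlaced layout (untested here; it assumes the interlaced field numbering
u=0, v=1, w=2, p=3 and reuses the coupledsolve_ prefix from the run below)
groups the three velocity components into one split and treats the pressure
through a Schur complement:

  -coupledsolve_pc_type fieldsplit
  -coupledsolve_pc_fieldsplit_block_size 4
  -coupledsolve_pc_fieldsplit_0_fields 0,1,2
  -coupledsolve_pc_fieldsplit_1_fields 3
  -coupledsolve_pc_fieldsplit_type schur
  -coupledsolve_pc_fieldsplit_schur_fact_type lower
  -coupledsolve_fieldsplit_0_ksp_type preonly
  -coupledsolve_fieldsplit_0_pc_type ilu
  -coupledsolve_fieldsplit_1_ksp_type gmres
  -coupledsolve_fieldsplit_1_pc_type jacobi

These settings are only a point of departure: whether Schur or a simple
multiplicative split works better, and above all how the pressure Schur
complement (split 1) is approximated and preconditioned, would need
experimentation before this can be expected to beat plain ILU on the
coupled matrix.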
I used the command-line options: -log_summary -coupledsolve_ksp_view -coupledsolve_ksp_monitor -coupledsolve_ksp_gmres_restart 100 -coupledsolve_pc_factor_levels 1 -coupledsolve_ksp_gmres_modifiedgramschmidt Regards, Fabian Gabel -------------- next part -------------- Sender: LSF System Subject: Job 399530: in cluster Done Job was submitted from host by user in cluster . Job was executed on host(s) , in queue , as user in cluster . was used as the home directory. was used as the working directory. Started at Mon Feb 2 00:05:57 2015 Results reported at Mon Feb 2 02:21:24 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input #! /bin/sh #BSUB -J coupling_cpld #BSUB -o /home/gu08vomo/thesis/singleblock/cpld_128.monitor.out.%J #BSUB -n 1 #BSUB -W 24:00 #BSUB -x #BSUB -q test_mpi2 #BSUB -a openmpi module load openmpi/intel/1.8.2 export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1/ export OUTPUTDIR=/home/gu08vomo/thesis/coupling export PETSC_OPS="-coupledsolve_ksp_gmres_restart 100 -log_summary -coupledsolve_pc_factor_levels 1 -coupledsolve_ksp_monitor -coupledsolve_ksp_view -coupledsolve_ksp_gmres_modifiedgramschmidt -on_error_abort" echo "PETSC_DIR="$PETSC_DIR echo "MYWORKDIR="$MYWORKDIR echo "PETSC_OPS="$PETSC_OPS cd $MYWORKDIR mpirun -n 1 ./caffa3d.cpld.lnx ${PETSC_OPS} ------------------------------------------------------------ Successfully completed. Resource usage summary: CPU time : 8129.75 sec. Max Memory : 15153 MB Average Memory : 14864.49 MB Total Requested Memory : - Delta Memory : - (Delta: the difference between total requested memory and actual max usage.) Max Swap : 17294 MB Max Processes : 6 Max Threads : 11 The output (if any) follows: Modules: loading openmpi/intel/1.8.2 PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1/ PETSC_OPS=-coupledsolve_ksp_gmres_restart 100 -log_summary -coupledsolve_pc_factor_levels 1 -coupledsolve_ksp_monitor -coupledsolve_ksp_view -coupledsolve_ksp_gmres_modifiedgramschmidt -on_error_abort ENTER PROBLEM NAME (SIX CHARACTERS): **************************************************** NAME OF PROBLEM SOLVED control **************************************************** *************************************************** CONTROL SETTINGS *************************************************** LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD F F T F F F F F IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD 8 9 8 1 0 2 2 3 1 1 1 SORMAX, SLARGE, ALFA 0.1000E-07 0.1000E+31 0.9200E+00 (URF(I),I=1,6) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 (SOR(I),I=1,6) 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-07 (GDS(I),I=1,6) - BLENDING (CDS-UDS) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 LSG ****** *************************************************** START COUPLED ALGORITHM *************************************************** Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 4.264137584261e+04 1 KSP Residual norm 3.126035512702e+04 2 KSP Residual norm 2.204911630524e+04 3 KSP Residual norm 1.737088109639e+04 4 KSP Residual norm 1.281240604370e+04 5 KSP Residual norm 9.496461339190e+03 6 KSP Residual norm 6.497777189417e+03 7 KSP Residual norm 4.039003984054e+03 8 KSP Residual norm 2.316641365557e+03 9 KSP Residual norm 1.320992942746e+03 10 KSP Residual norm 1.041475739922e+03 11 KSP Residual norm 8.440675791671e+02 12 KSP Residual norm 7.359887768590e+02 13 KSP Residual norm 6.609284670027e+02 14 KSP Residual norm 6.053599102666e+02 15 KSP Residual norm 5.527891080694e+02 16 KSP Residual norm 5.126410860295e+02 17 KSP Residual norm 4.849583569954e+02 18 KSP Residual norm 4.523066587880e+02 19 KSP Residual norm 4.297597276582e+02 20 KSP Residual norm 3.892968353545e+02 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.01, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000001 0.1000E+01 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 4.663544661243e+04 1 KSP Residual norm 3.168120179194e+04 2 KSP Residual norm 2.273711424452e+04 3 KSP Residual norm 1.764970028953e+04 4 KSP Residual norm 1.304545968762e+04 5 KSP Residual norm 9.662351773909e+03 6 KSP Residual norm 6.567818161453e+03 7 KSP Residual norm 4.121067803948e+03 8 KSP Residual norm 2.294490858752e+03 9 KSP Residual norm 1.331012433748e+03 10 KSP Residual norm 9.957051616153e+02 11 KSP Residual norm 8.199631050494e+02 12 KSP Residual norm 6.814242501106e+02 13 KSP Residual norm 6.154530370574e+02 14 KSP Residual norm 5.466964730947e+02 15 KSP Residual norm 4.965755937205e+02 16 KSP Residual norm 4.545601147588e+02 17 KSP Residual norm 4.277460084127e+02 18 KSP Residual norm 4.010479793577e+02 19 KSP Residual norm 3.778523411748e+02 20 KSP Residual norm 3.469200526428e+02 21 KSP Residual norm 3.219354709128e+02 22 KSP Residual norm 3.004458639924e+02 23 KSP Residual norm 2.868550411268e+02 24 KSP Residual norm 2.722864715656e+02 25 KSP Residual norm 2.585151498116e+02 26 KSP Residual norm 2.445346268226e+02 27 KSP Residual norm 2.322810669447e+02 28 KSP Residual norm 2.233192387366e+02 29 KSP Residual norm 2.160507303574e+02 30 KSP Residual norm 2.083854420275e+02 31 KSP Residual norm 1.998695144673e+02 32 KSP Residual norm 1.900820384162e+02 33 KSP Residual norm 1.804877977814e+02 34 KSP Residual norm 1.721693524260e+02 35 KSP Residual norm 1.632041190198e+02 36 KSP Residual norm 1.511002918775e+02 37 KSP Residual norm 1.360844140549e+02 38 KSP Residual norm 1.238601707531e+02 39 KSP Residual norm 1.118946435195e+02 40 KSP Residual norm 9.893775824273e+01 41 KSP Residual norm 8.618294929060e+01 42 KSP Residual norm 7.181045223647e+01 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.00178347, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000002 0.1783E+00 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 4.630065023664e+04 1 KSP Residual norm 3.164394717005e+04 2 KSP Residual norm 2.263973343941e+04 3 KSP Residual norm 1.761218313885e+04 4 KSP Residual norm 1.299713661834e+04 5 KSP Residual norm 9.650210925382e+03 6 KSP Residual norm 6.549685992897e+03 7 KSP Residual norm 4.117365163497e+03 8 KSP Residual norm 2.260658726683e+03 9 KSP Residual norm 1.263210112244e+03 10 KSP Residual norm 8.932637087965e+02 11 KSP Residual norm 6.984538826190e+02 12 KSP Residual norm 5.342589004903e+02 13 KSP Residual norm 4.534825495870e+02 14 KSP Residual norm 3.701491789034e+02 15 KSP Residual norm 3.064850263835e+02 16 KSP Residual norm 2.556718808864e+02 17 KSP Residual norm 2.254706924793e+02 18 KSP Residual norm 2.014038755502e+02 19 KSP Residual norm 1.847341590994e+02 20 KSP Residual norm 1.654911370535e+02 21 KSP Residual norm 1.500670327830e+02 22 KSP Residual norm 1.360918364067e+02 23 KSP Residual norm 1.271248745404e+02 24 KSP Residual norm 1.172161482285e+02 25 KSP Residual norm 1.087109449318e+02 26 KSP Residual norm 9.947107291853e+01 27 KSP Residual norm 9.147688586265e+01 28 KSP Residual norm 8.536147363800e+01 29 KSP Residual norm 8.136484384206e+01 30 KSP Residual norm 7.800503448915e+01 31 KSP Residual norm 7.518867432949e+01 32 KSP Residual norm 7.203123146794e+01 33 KSP Residual norm 6.897490191157e+01 34 KSP Residual norm 6.641928130540e+01 35 KSP Residual norm 6.388068582300e+01 36 KSP Residual norm 6.113690770584e+01 37 KSP Residual norm 5.783606662707e+01 38 KSP Residual norm 5.529793507697e+01 39 KSP Residual norm 5.234092915273e+01 40 KSP Residual norm 4.915758493970e+01 41 KSP Residual norm 4.560281972280e+01 42 KSP Residual norm 4.140181251677e+01 43 KSP Residual norm 3.673637485107e+01 44 KSP Residual norm 3.244550907907e+01 45 KSP Residual norm 2.896610620119e+01 46 KSP Residual norm 2.548070305247e+01 47 KSP Residual norm 2.237319456220e+01 48 KSP Residual norm 2.027302400792e+01 49 KSP Residual norm 1.910022642661e+01 50 KSP Residual norm 1.826930728962e+01 51 KSP Residual norm 1.785655829800e+01 52 KSP Residual norm 1.763224604356e+01 53 KSP Residual norm 1.751078309016e+01 54 KSP Residual norm 1.742263144575e+01 55 KSP Residual norm 1.739598373764e+01 56 KSP Residual norm 1.732267664368e+01 57 KSP Residual norm 1.724504457412e+01 58 KSP Residual norm 1.712291752753e+01 59 KSP Residual norm 1.694892174032e+01 60 KSP Residual norm 1.653834984518e+01 61 KSP Residual norm 1.593887169818e+01 62 KSP Residual norm 1.498021817678e+01 63 KSP Residual norm 1.377339473662e+01 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.000315648, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij 
rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000003 0.3156E-01 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.647895869934e+04 1 KSP Residual norm 3.164140657549e+04 2 KSP Residual norm 2.267537495404e+04 3 KSP Residual norm 1.761483262298e+04 4 KSP Residual norm 1.300443054660e+04 5 KSP Residual norm 9.651630186412e+03 6 KSP Residual norm 6.547343453543e+03 7 KSP Residual norm 4.117438446817e+03 8 KSP Residual norm 2.256659409064e+03 9 KSP Residual norm 1.265819166008e+03 10 KSP Residual norm 8.933660046859e+02 11 KSP Residual norm 7.016010669894e+02 12 KSP Residual norm 5.355737541951e+02 13 KSP Residual norm 4.565881390630e+02 14 KSP Residual norm 3.722255933169e+02 15 KSP Residual norm 3.095216022966e+02 16 KSP Residual norm 2.585396824477e+02 17 KSP Residual norm 2.285067693929e+02 18 KSP Residual norm 2.045913265261e+02 19 KSP Residual norm 1.875202073998e+02 20 KSP Residual norm 1.685441753572e+02 21 KSP Residual norm 1.525519469781e+02 22 KSP Residual norm 1.390532538886e+02 23 KSP Residual norm 1.297738331046e+02 24 KSP Residual norm 1.204293080673e+02 25 KSP Residual norm 1.116138208749e+02 26 KSP Residual norm 1.028152995873e+02 27 KSP Residual norm 9.459187263580e+01 28 KSP Residual norm 8.879098150911e+01 29 KSP Residual norm 8.468470297290e+01 30 KSP Residual norm 8.139115426677e+01 31 KSP Residual norm 7.854365188757e+01 32 KSP Residual norm 7.543814515137e+01 33 KSP Residual norm 7.251178248557e+01 34 KSP Residual norm 6.993131774268e+01 35 KSP Residual norm 6.747206805630e+01 36 KSP Residual norm 6.450178891522e+01 37 KSP Residual norm 6.122755579171e+01 38 KSP Residual norm 5.847176167259e+01 39 KSP Residual norm 5.558872649235e+01 40 KSP Residual norm 5.199440186325e+01 41 KSP Residual norm 4.827838163440e+01 42 KSP Residual norm 4.357770469309e+01 43 KSP Residual norm 3.874033022020e+01 44 KSP Residual norm 3.403086111802e+01 45 KSP Residual norm 3.034296410178e+01 46 KSP Residual norm 2.657911534277e+01 47 KSP Residual norm 2.338518073473e+01 48 KSP Residual norm 2.119252989273e+01 49 KSP Residual norm 1.999042707430e+01 50 KSP Residual norm 1.916867198104e+01 51 KSP Residual norm 1.876179121985e+01 52 KSP Residual norm 1.855117992720e+01 53 KSP Residual norm 1.843615588967e+01 54 KSP Residual norm 1.837481864393e+01 55 KSP Residual norm 1.835041400648e+01 56 KSP Residual norm 1.829311897991e+01 57 KSP Residual norm 1.821187384047e+01 58 KSP Residual norm 1.810348141614e+01 59 KSP Residual norm 1.790611204688e+01 60 KSP Residual norm 1.748908091503e+01 61 KSP Residual norm 1.678499937403e+01 62 KSP Residual norm 1.570470339445e+01 63 KSP Residual norm 1.432632256838e+01 64 KSP Residual norm 1.282560896521e+01 65 KSP Residual norm 1.088303615451e+01 66 KSP Residual norm 9.056100054935e+00 67 KSP Residual norm 7.574126511480e+00 68 KSP Residual norm 6.656785636415e+00 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.00014427, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot 
[INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000004 0.1443E-01 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.646166895679e+04 1 KSP Residual norm 3.162794916970e+04 2 KSP Residual norm 2.266738619716e+04 3 KSP Residual norm 1.760736748111e+04 4 KSP Residual norm 1.299932064878e+04 5 KSP Residual norm 9.647941758129e+03 6 KSP Residual norm 6.544994184392e+03 7 KSP Residual norm 4.116457773313e+03 8 KSP Residual norm 2.255664935077e+03 9 KSP Residual norm 1.264306308044e+03 10 KSP Residual norm 8.908901589557e+02 11 KSP Residual norm 6.987468717809e+02 12 KSP Residual norm 5.319073501972e+02 13 KSP Residual norm 4.524196934948e+02 14 KSP Residual norm 3.674186403965e+02 15 KSP Residual norm 3.040321938762e+02 16 KSP Residual norm 2.524309113091e+02 17 KSP Residual norm 2.219799064945e+02 18 KSP Residual norm 1.978568247666e+02 19 KSP Residual norm 1.806983446511e+02 20 KSP Residual norm 1.617468589079e+02 21 KSP Residual norm 1.457027385030e+02 22 KSP Residual norm 1.321605852649e+02 23 KSP Residual norm 1.227949944746e+02 24 KSP Residual norm 1.133692598350e+02 25 KSP Residual norm 1.044410904210e+02 26 KSP Residual norm 9.547669714017e+01 27 KSP Residual norm 8.703595042809e+01 28 KSP Residual norm 8.104780888404e+01 29 KSP Residual norm 7.684348355059e+01 30 KSP Residual norm 7.351615328806e+01 31 KSP Residual norm 7.071771512030e+01 32 KSP Residual norm 6.768391869107e+01 33 KSP Residual norm 6.487019341949e+01 34 KSP Residual norm 6.238797207379e+01 35 KSP Residual norm 6.007900086671e+01 36 KSP Residual norm 5.734821016072e+01 37 KSP Residual norm 5.445313412774e+01 38 KSP Residual norm 5.205568293699e+01 39 KSP Residual norm 4.956750124140e+01 40 KSP Residual norm 4.645229525338e+01 41 KSP Residual norm 4.327214568058e+01 42 KSP Residual norm 3.929588207545e+01 43 KSP Residual norm 3.511642540487e+01 44 KSP Residual norm 3.094695757531e+01 45 KSP Residual norm 2.761787475871e+01 46 KSP Residual norm 2.418823206906e+01 47 KSP Residual norm 2.115021466160e+01 48 KSP Residual norm 1.896971718445e+01 49 KSP Residual norm 1.773171799152e+01 50 KSP Residual norm 1.688936695070e+01 51 KSP Residual norm 1.644873693463e+01 52 KSP Residual norm 1.620624421908e+01 53 KSP Residual norm 1.606235825223e+01 54 KSP Residual norm 1.597603165603e+01 55 KSP Residual norm 1.592979313395e+01 56 KSP Residual norm 1.585070667704e+01 57 KSP Residual norm 1.575850937273e+01 58 KSP Residual norm 1.565022954513e+01 59 KSP Residual norm 1.548292885902e+01 60 KSP Residual norm 1.516675021353e+01 61 KSP Residual norm 1.466313551891e+01 62 KSP Residual norm 1.389064887442e+01 63 KSP Residual norm 1.288656989332e+01 64 KSP Residual norm 1.174443423833e+01 65 KSP Residual norm 1.018064747125e+01 66 KSP Residual norm 8.617403304011e+00 67 KSP Residual norm 7.290597684937e+00 68 KSP Residual norm 6.442366684519e+00 69 KSP Residual norm 5.877792172302e+00 70 KSP Residual norm 5.471068144115e+00 71 KSP Residual norm 5.180425165192e+00 72 
KSP Residual norm 4.919723230471e+00 73 KSP Residual norm 4.712300980874e+00 74 KSP Residual norm 4.538905220129e+00 75 KSP Residual norm 4.424414845165e+00 76 KSP Residual norm 4.347508963862e+00 77 KSP Residual norm 4.286801810203e+00 78 KSP Residual norm 4.228492099686e+00 79 KSP Residual norm 4.158691869148e+00 80 KSP Residual norm 4.074545832104e+00 81 KSP Residual norm 3.962654065224e+00 82 KSP Residual norm 3.813497008497e+00 83 KSP Residual norm 3.626672326407e+00 84 KSP Residual norm 3.411326356042e+00 85 KSP Residual norm 3.208359689142e+00 86 KSP Residual norm 3.026816883459e+00 87 KSP Residual norm 2.878539493112e+00 88 KSP Residual norm 2.745009596150e+00 89 KSP Residual norm 2.632408525415e+00 90 KSP Residual norm 2.527542369945e+00 91 KSP Residual norm 2.402218233625e+00 92 KSP Residual norm 2.263578042553e+00 93 KSP Residual norm 2.135997663149e+00 94 KSP Residual norm 2.040608697474e+00 95 KSP Residual norm 1.963473994745e+00 96 KSP Residual norm 1.895099898148e+00 97 KSP Residual norm 1.840999762006e+00 98 KSP Residual norm 1.787353606136e+00 99 KSP Residual norm 1.726725823332e+00 100 KSP Residual norm 1.671297723702e+00 101 KSP Residual norm 1.671268205371e+00 102 KSP Residual norm 1.671256170705e+00 103 KSP Residual norm 1.671236538684e+00 104 KSP Residual norm 1.671214560026e+00 105 KSP Residual norm 1.671161981492e+00 106 KSP Residual norm 1.671122556315e+00 107 KSP Residual norm 1.671029915334e+00 108 KSP Residual norm 1.670993667023e+00 109 KSP Residual norm 1.670864888360e+00 110 KSP Residual norm 1.670808105537e+00 111 KSP Residual norm 1.670758091248e+00 112 KSP Residual norm 1.670623436100e+00 113 KSP Residual norm 1.669983091255e+00 114 KSP Residual norm 1.669404821472e+00 115 KSP Residual norm 1.666783314748e+00 116 KSP Residual norm 1.664073050381e+00 117 KSP Residual norm 1.658300718698e+00 118 KSP Residual norm 1.652487203337e+00 119 KSP Residual norm 1.641952718903e+00 120 KSP Residual norm 1.630647882043e+00 121 KSP Residual norm 1.611026859856e+00 122 KSP Residual norm 1.596546484602e+00 123 KSP Residual norm 1.582487393499e+00 124 KSP Residual norm 1.572208202935e+00 125 KSP Residual norm 1.557930113763e+00 126 KSP Residual norm 1.544299404084e+00 127 KSP Residual norm 1.526462343761e+00 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=3.30054e-05, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000005 0.3301E-02 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 4.646767606450e+04 1 KSP Residual norm 3.162275664728e+04 2 KSP Residual norm 2.266678835526e+04 3 KSP Residual norm 1.760405539287e+04 4 KSP Residual norm 1.299731442450e+04 5 KSP Residual norm 9.645913289170e+03 6 KSP Residual norm 6.543256704091e+03 7 KSP Residual norm 4.115472801140e+03 8 KSP Residual norm 2.254515372798e+03 9 KSP Residual norm 1.263803538394e+03 10 KSP Residual norm 8.898910453689e+02 11 KSP Residual norm 6.979268154493e+02 12 KSP Residual norm 5.306317602403e+02 13 KSP Residual norm 4.511960851342e+02 14 KSP Residual norm 3.658945090470e+02 15 KSP Residual norm 3.024619159863e+02 16 KSP Residual norm 2.507078919539e+02 17 KSP Residual norm 2.202254396975e+02 18 KSP Residual norm 1.961323217073e+02 19 KSP Residual norm 1.789958195745e+02 20 KSP Residual norm 1.601911164136e+02 21 KSP Residual norm 1.441855508663e+02 22 KSP Residual norm 1.307810591903e+02 23 KSP Residual norm 1.214166456000e+02 24 KSP Residual norm 1.120875839080e+02 25 KSP Residual norm 1.031234334005e+02 26 KSP Residual norm 9.419353991793e+01 27 KSP Residual norm 8.566665798779e+01 28 KSP Residual norm 7.966600953171e+01 29 KSP Residual norm 7.540448037391e+01 30 KSP Residual norm 7.206607048159e+01 31 KSP Residual norm 6.925307048193e+01 32 KSP Residual norm 6.622460842507e+01 33 KSP Residual norm 6.342640550987e+01 34 KSP Residual norm 6.095150007627e+01 35 KSP Residual norm 5.867496605567e+01 36 KSP Residual norm 5.596723916998e+01 37 KSP Residual norm 5.314895959547e+01 38 KSP Residual norm 5.079521779221e+01 39 KSP Residual norm 4.839061277630e+01 40 KSP Residual norm 4.532356195123e+01 41 KSP Residual norm 4.223855235322e+01 42 KSP Residual norm 3.835925124393e+01 43 KSP Residual norm 3.431508106617e+01 44 KSP Residual norm 3.022938541630e+01 45 KSP Residual norm 2.697864808585e+01 46 KSP Residual norm 2.361219135624e+01 47 KSP Residual norm 2.062155371921e+01 48 KSP Residual norm 1.844552464969e+01 49 KSP Residual norm 1.720509289823e+01 50 KSP Residual norm 1.636360108453e+01 51 KSP Residual norm 1.591837813554e+01 52 KSP Residual norm 1.567021968704e+01 53 KSP Residual norm 1.551929851513e+01 54 KSP Residual norm 1.542825501826e+01 55 KSP Residual norm 1.537433717926e+01 56 KSP Residual norm 1.528985025953e+01 57 KSP Residual norm 1.519213783688e+01 58 KSP Residual norm 1.508280341137e+01 59 KSP Residual norm 1.491628651855e+01 60 KSP Residual norm 1.461843414631e+01 61 KSP Residual norm 1.414344287823e+01 62 KSP Residual norm 1.342757985312e+01 63 KSP Residual norm 1.249593206045e+01 64 KSP Residual norm 1.143574304534e+01 65 KSP Residual norm 9.967288826620e+00 66 KSP Residual norm 8.481275204481e+00 67 KSP Residual norm 7.204807272123e+00 68 KSP Residual norm 6.381903836935e+00 69 KSP Residual norm 5.831545434036e+00 70 KSP Residual norm 5.432592966671e+00 71 KSP Residual norm 5.146861617958e+00 72 KSP Residual norm 4.888642929897e+00 73 KSP Residual norm 4.683030206719e+00 74 KSP Residual norm 4.510175850692e+00 75 KSP Residual norm 4.396185687853e+00 76 KSP Residual norm 4.318858248932e+00 77 KSP Residual norm 4.258175544978e+00 78 KSP Residual norm 4.199544367719e+00 79 KSP Residual norm 4.129966169271e+00 80 KSP Residual norm 4.046140565241e+00 81 KSP Residual norm 3.935619374937e+00 82 KSP Residual norm 3.788898051796e+00 83 KSP Residual norm 3.605901597905e+00 84 KSP Residual norm 3.395295241465e+00 85 KSP Residual norm 3.197042767807e+00 86 KSP Residual norm 3.019627954305e+00 87 KSP Residual norm 2.874222889676e+00 88 KSP Residual norm 2.742794221649e+00 89 
KSP Residual norm 2.631166883611e+00 90 KSP Residual norm 2.526643311653e+00 91 KSP Residual norm 2.400727080802e+00 92 KSP Residual norm 2.260939631374e+00 93 KSP Residual norm 2.131729354992e+00 94 KSP Residual norm 2.035029521617e+00 95 KSP Residual norm 1.956764697974e+00 96 KSP Residual norm 1.887767577461e+00 97 KSP Residual norm 1.833468687463e+00 98 KSP Residual norm 1.780152526381e+00 99 KSP Residual norm 1.720320716957e+00 100 KSP Residual norm 1.665988784985e+00 101 KSP Residual norm 1.665965240507e+00 102 KSP Residual norm 1.665954729088e+00 103 KSP Residual norm 1.665937194501e+00 104 KSP Residual norm 1.665916777406e+00 105 KSP Residual norm 1.665867946666e+00 106 KSP Residual norm 1.665827785805e+00 107 KSP Residual norm 1.665736540807e+00 108 KSP Residual norm 1.665693951910e+00 109 KSP Residual norm 1.665570595936e+00 110 KSP Residual norm 1.665515054958e+00 111 KSP Residual norm 1.665471153261e+00 112 KSP Residual norm 1.665358754451e+00 113 KSP Residual norm 1.664845137439e+00 114 KSP Residual norm 1.664376381382e+00 115 KSP Residual norm 1.662248880461e+00 116 KSP Residual norm 1.659973629100e+00 117 KSP Residual norm 1.655076155974e+00 118 KSP Residual norm 1.649993775032e+00 119 KSP Residual norm 1.640535134866e+00 120 KSP Residual norm 1.629867537149e+00 121 KSP Residual norm 1.610463310720e+00 122 KSP Residual norm 1.595168600483e+00 123 KSP Residual norm 1.580109807276e+00 124 KSP Residual norm 1.569428991184e+00 125 KSP Residual norm 1.555259089321e+00 126 KSP Residual norm 1.542524009704e+00 127 KSP Residual norm 1.526183086647e+00 128 KSP Residual norm 1.508440296237e+00 129 KSP Residual norm 1.487833281136e+00 130 KSP Residual norm 1.463532112925e+00 131 KSP Residual norm 1.429179455160e+00 132 KSP Residual norm 1.387700294617e+00 133 KSP Residual norm 1.336789727491e+00 134 KSP Residual norm 1.292147284535e+00 135 KSP Residual norm 1.254777761143e+00 136 KSP Residual norm 1.222901571442e+00 137 KSP Residual norm 1.194432637970e+00 138 KSP Residual norm 1.177009606224e+00 139 KSP Residual norm 1.166974674460e+00 140 KSP Residual norm 1.155020626859e+00 141 KSP Residual norm 1.135308380726e+00 142 KSP Residual norm 1.107859723490e+00 143 KSP Residual norm 1.069210017639e+00 144 KSP Residual norm 1.028207577076e+00 145 KSP Residual norm 9.848070999139e-01 146 KSP Residual norm 9.517861703484e-01 147 KSP Residual norm 9.253198705547e-01 148 KSP Residual norm 9.081970818881e-01 149 KSP Residual norm 8.977878465527e-01 150 KSP Residual norm 8.935757543749e-01 151 KSP Residual norm 8.915346081796e-01 152 KSP Residual norm 8.885860230732e-01 153 KSP Residual norm 8.786930532800e-01 154 KSP Residual norm 8.519866676699e-01 155 KSP Residual norm 8.026900627722e-01 156 KSP Residual norm 7.290658292478e-01 157 KSP Residual norm 6.433221713529e-01 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1.54225e-05, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 
package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000006 0.1542E-02 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.646682314705e+04 1 KSP Residual norm 3.162224808417e+04 2 KSP Residual norm 2.266700350315e+04 3 KSP Residual norm 1.760399404085e+04 4 KSP Residual norm 1.299736447403e+04 5 KSP Residual norm 9.645867449042e+03 6 KSP Residual norm 6.543278613621e+03 7 KSP Residual norm 4.115534018034e+03 8 KSP Residual norm 2.254625032713e+03 9 KSP Residual norm 1.263936952881e+03 10 KSP Residual norm 8.899984952188e+02 11 KSP Residual norm 6.980402120813e+02 12 KSP Residual norm 5.307223020484e+02 13 KSP Residual norm 4.512707553700e+02 14 KSP Residual norm 3.659387048094e+02 15 KSP Residual norm 3.024843851480e+02 16 KSP Residual norm 2.507149614618e+02 17 KSP Residual norm 2.202265941830e+02 18 KSP Residual norm 1.961337756767e+02 19 KSP Residual norm 1.789992105643e+02 20 KSP Residual norm 1.602019365546e+02 21 KSP Residual norm 1.442002362882e+02 22 KSP Residual norm 1.308009566207e+02 23 KSP Residual norm 1.214357003137e+02 24 KSP Residual norm 1.121050276959e+02 25 KSP Residual norm 1.031322685775e+02 26 KSP Residual norm 9.418932380941e+01 27 KSP Residual norm 8.564495313533e+01 28 KSP Residual norm 7.962958233368e+01 29 KSP Residual norm 7.535618133974e+01 30 KSP Residual norm 7.200777786142e+01 31 KSP Residual norm 6.918808703243e+01 32 KSP Residual norm 6.615355942714e+01 33 KSP Residual norm 6.335422266158e+01 34 KSP Residual norm 6.087994810096e+01 35 KSP Residual norm 5.860768305111e+01 36 KSP Residual norm 5.590605079199e+01 37 KSP Residual norm 5.309692677547e+01 38 KSP Residual norm 5.074937855236e+01 39 KSP Residual norm 4.834985837531e+01 40 KSP Residual norm 4.528521713467e+01 41 KSP Residual norm 4.220152845758e+01 42 KSP Residual norm 3.832512548078e+01 43 KSP Residual norm 3.428544141697e+01 44 KSP Residual norm 3.020429577065e+01 45 KSP Residual norm 2.695690934582e+01 46 KSP Residual norm 2.359414263150e+01 47 KSP Residual norm 2.060552006026e+01 48 KSP Residual norm 1.842936351276e+01 49 KSP Residual norm 1.718743101553e+01 50 KSP Residual norm 1.634513113602e+01 51 KSP Residual norm 1.589910831290e+01 52 KSP Residual norm 1.565048632185e+01 53 KSP Residual norm 1.549913015086e+01 54 KSP Residual norm 1.540796344631e+01 55 KSP Residual norm 1.535377118215e+01 56 KSP Residual norm 1.526899154893e+01 57 KSP Residual norm 1.517085203187e+01 58 KSP Residual norm 1.506106668467e+01 59 KSP Residual norm 1.489374090919e+01 60 KSP Residual norm 1.459539877189e+01 61 KSP Residual norm 1.412030415525e+01 62 KSP Residual norm 1.340561633295e+01 63 KSP Residual norm 1.247722217449e+01 64 KSP Residual norm 1.142223355488e+01 65 KSP Residual norm 9.960759904218e+00 66 KSP Residual norm 8.479724764253e+00 67 KSP Residual norm 7.205175367411e+00 68 KSP Residual norm 6.382247645371e+00 69 KSP Residual norm 5.831379327774e+00 70 KSP Residual norm 5.431781045368e+00 71 KSP Residual norm 5.145590103592e+00 72 KSP Residual norm 4.887058188869e+00 73 KSP Residual norm 4.681378237868e+00 74 KSP Residual norm 4.508678739551e+00 75 KSP Residual norm 4.394966289798e+00 76 KSP 
Residual norm 4.317858378339e+00 77 KSP Residual norm 4.257323827661e+00 78 KSP Residual norm 4.198748370852e+00 79 KSP Residual norm 4.129169139748e+00 80 KSP Residual norm 4.045395343135e+00 81 KSP Residual norm 3.935023505436e+00 82 KSP Residual norm 3.788775727201e+00 83 KSP Residual norm 3.606590145487e+00 84 KSP Residual norm 3.397169369894e+00 85 KSP Residual norm 3.200163402210e+00 86 KSP Residual norm 3.023822739624e+00 87 KSP Residual norm 2.879127347325e+00 88 KSP Residual norm 2.748099503455e+00 89 KSP Residual norm 2.636515396870e+00 90 KSP Residual norm 2.531774492895e+00 91 KSP Residual norm 2.405321602358e+00 92 KSP Residual norm 2.264683605217e+00 93 KSP Residual norm 2.134552375778e+00 94 KSP Residual norm 2.037077099533e+00 95 KSP Residual norm 1.958190627683e+00 96 KSP Residual norm 1.888715558971e+00 97 KSP Residual norm 1.834111026034e+00 98 KSP Residual norm 1.780632519655e+00 99 KSP Residual norm 1.720745801196e+00 100 KSP Residual norm 1.666458998667e+00 101 KSP Residual norm 1.666435684710e+00 102 KSP Residual norm 1.666425187739e+00 103 KSP Residual norm 1.666407589462e+00 104 KSP Residual norm 1.666386974836e+00 105 KSP Residual norm 1.666337796650e+00 106 KSP Residual norm 1.666297016972e+00 107 KSP Residual norm 1.666204921832e+00 108 KSP Residual norm 1.666161370903e+00 109 KSP Residual norm 1.666037316950e+00 110 KSP Residual norm 1.665981317531e+00 111 KSP Residual norm 1.665937402920e+00 112 KSP Residual norm 1.665825307209e+00 113 KSP Residual norm 1.665315593546e+00 114 KSP Residual norm 1.664849127774e+00 115 KSP Residual norm 1.662735263087e+00 116 KSP Residual norm 1.660468452198e+00 117 KSP Residual norm 1.655591325115e+00 118 KSP Residual norm 1.650522338355e+00 119 KSP Residual norm 1.641094735433e+00 120 KSP Residual norm 1.630448294649e+00 121 KSP Residual norm 1.611075491174e+00 122 KSP Residual norm 1.595774897502e+00 123 KSP Residual norm 1.580703368287e+00 124 KSP Residual norm 1.570018008550e+00 125 KSP Residual norm 1.555848693583e+00 126 KSP Residual norm 1.543112255037e+00 127 KSP Residual norm 1.526748985029e+00 128 KSP Residual norm 1.508973965304e+00 129 KSP Residual norm 1.488332784886e+00 130 KSP Residual norm 1.464046578349e+00 131 KSP Residual norm 1.429790383006e+00 132 KSP Residual norm 1.388559152436e+00 133 KSP Residual norm 1.338026272560e+00 134 KSP Residual norm 1.293772629189e+00 135 KSP Residual norm 1.256697242651e+00 136 KSP Residual norm 1.225027704653e+00 137 KSP Residual norm 1.196642569402e+00 138 KSP Residual norm 1.179189180775e+00 139 KSP Residual norm 1.169080422271e+00 140 KSP Residual norm 1.157001698950e+00 141 KSP Residual norm 1.137046212942e+00 142 KSP Residual norm 1.109214142657e+00 143 KSP Residual norm 1.069978873622e+00 144 KSP Residual norm 1.028402576664e+00 145 KSP Residual norm 9.844869861312e-01 146 KSP Residual norm 9.511605535844e-01 147 KSP Residual norm 9.245007185831e-01 148 KSP Residual norm 9.072815686860e-01 149 KSP Residual norm 8.968213413508e-01 150 KSP Residual norm 8.926024925512e-01 151 KSP Residual norm 8.905834226558e-01 152 KSP Residual norm 8.877042587324e-01 153 KSP Residual norm 8.779928017088e-01 154 KSP Residual norm 8.516060097454e-01 155 KSP Residual norm 8.026635369935e-01 156 KSP Residual norm 7.293115514688e-01 157 KSP Residual norm 6.437202348367e-01 158 KSP Residual norm 5.624080406391e-01 159 KSP Residual norm 4.858443647817e-01 160 KSP Residual norm 4.264957571657e-01 161 KSP Residual norm 3.858780563124e-01 162 KSP Residual norm 3.598458900999e-01 163 KSP Residual 
norm 3.410917304099e-01 164 KSP Residual norm 3.229940833118e-01 165 KSP Residual norm 3.027899484024e-01 166 KSP Residual norm 2.755127285400e-01 167 KSP Residual norm 2.498053869347e-01 168 KSP Residual norm 2.314660155452e-01 169 KSP Residual norm 2.188796341968e-01 170 KSP Residual norm 2.113300313226e-01 171 KSP Residual norm 2.083451871524e-01 172 KSP Residual norm 2.072616327406e-01 173 KSP Residual norm 2.067011126820e-01 174 KSP Residual norm 2.062362449857e-01 175 KSP Residual norm 2.048983172897e-01 176 KSP Residual norm 2.031910394026e-01 177 KSP Residual norm 2.005418070100e-01 178 KSP Residual norm 1.971457554824e-01 179 KSP Residual norm 1.931519024998e-01 180 KSP Residual norm 1.871198294090e-01 181 KSP Residual norm 1.780354633589e-01 182 KSP Residual norm 1.647179957053e-01 183 KSP Residual norm 1.498292446234e-01 184 KSP Residual norm 1.354635651340e-01 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=2.91923e-06, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000007 0.2919E-03 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 4.646787455792e+04 1 KSP Residual norm 3.162241956898e+04 2 KSP Residual norm 2.266751754353e+04 3 KSP Residual norm 1.760415085826e+04 4 KSP Residual norm 1.299754583121e+04 5 KSP Residual norm 9.645915265046e+03 6 KSP Residual norm 6.543277917532e+03 7 KSP Residual norm 4.115529141578e+03 8 KSP Residual norm 2.254597408668e+03 9 KSP Residual norm 1.263984275593e+03 10 KSP Residual norm 8.900177015723e+02 11 KSP Residual norm 6.980844807830e+02 12 KSP Residual norm 5.307315524104e+02 13 KSP Residual norm 4.512901660541e+02 14 KSP Residual norm 3.659316017433e+02 15 KSP Residual norm 3.024811783739e+02 16 KSP Residual norm 2.507046989506e+02 17 KSP Residual norm 2.202202011191e+02 18 KSP Residual norm 1.961325403663e+02 19 KSP Residual norm 1.790000833697e+02 20 KSP Residual norm 1.602143568404e+02 21 KSP Residual norm 1.442149915719e+02 22 KSP Residual norm 1.308282735973e+02 23 KSP Residual norm 1.214618155162e+02 24 KSP Residual norm 1.121392228531e+02 25 KSP Residual norm 1.031604946627e+02 26 KSP Residual norm 9.422035455444e+01 27 KSP Residual norm 8.566719561477e+01 28 KSP Residual norm 7.965142848918e+01 29 KSP Residual norm 7.537164420944e+01 30 KSP Residual norm 7.202130634290e+01 31 KSP Residual norm 6.919785061706e+01 32 KSP Residual norm 6.616188825782e+01 33 KSP Residual norm 6.336178439002e+01 34 KSP Residual norm 6.088707032209e+01 35 KSP Residual norm 5.861633815668e+01 36 KSP Residual norm 5.591444042291e+01 37 KSP Residual norm 5.310815354349e+01 38 KSP Residual norm 5.075973670845e+01 39 KSP Residual norm 4.836228448705e+01 40 KSP Residual norm 4.529410983860e+01 41 KSP Residual norm 4.221009518246e+01 42 KSP Residual norm 3.832937524508e+01 43 KSP Residual norm 3.429058198191e+01 44 KSP Residual norm 3.020695817573e+01 45 KSP Residual norm 2.696042554941e+01 46 KSP Residual norm 2.359701520897e+01 47 KSP Residual norm 2.060993376571e+01 48 KSP Residual norm 1.843376433965e+01 49 KSP Residual norm 1.719212548963e+01 50 KSP Residual norm 1.634999542504e+01 51 KSP Residual norm 1.590413186562e+01 52 KSP Residual norm 1.565566302081e+01 53 KSP Residual norm 1.550435044233e+01 54 KSP Residual norm 1.541344580762e+01 55 KSP Residual norm 1.535915416823e+01 56 KSP Residual norm 1.527449770347e+01 57 KSP Residual norm 1.517614215638e+01 58 KSP Residual norm 1.506626692646e+01 59 KSP Residual norm 1.489824567240e+01 60 KSP Residual norm 1.459942366934e+01 61 KSP Residual norm 1.412270416692e+01 62 KSP Residual norm 1.340696967410e+01 63 KSP Residual norm 1.247741269018e+01 64 KSP Residual norm 1.142243610210e+01 65 KSP Residual norm 9.961171104813e+00 66 KSP Residual norm 8.480559919767e+00 67 KSP Residual norm 7.205833023893e+00 68 KSP Residual norm 6.382739591642e+00 69 KSP Residual norm 5.831697601515e+00 70 KSP Residual norm 5.431880614702e+00 71 KSP Residual norm 5.145637123761e+00 72 KSP Residual norm 4.887034548660e+00 73 KSP Residual norm 4.681444840014e+00 74 KSP Residual norm 4.508859788039e+00 75 KSP Residual norm 4.395343085134e+00 76 KSP Residual norm 4.318343971412e+00 77 KSP Residual norm 4.257928060761e+00 78 KSP Residual norm 4.199354655160e+00 79 KSP Residual norm 4.129805132319e+00 80 KSP Residual norm 4.046003868938e+00 81 KSP Residual norm 3.935677862897e+00 82 KSP Residual norm 3.789529756650e+00 83 KSP Residual norm 3.607582172816e+00 84 KSP Residual norm 3.398499981525e+00 85 KSP Residual norm 3.201877995389e+00 86 KSP Residual norm 3.025854859658e+00 87 KSP Residual norm 2.881354373405e+00 88 KSP Residual norm 2.750413138008e+00 89 
KSP Residual norm 2.638792686404e+00 90 KSP Residual norm 2.533932644066e+00 91 KSP Residual norm 2.407215609042e+00 92 KSP Residual norm 2.266226734110e+00 93 KSP Residual norm 2.135706585489e+00 94 KSP Residual norm 2.037929599285e+00 95 KSP Residual norm 1.958788876664e+00 96 KSP Residual norm 1.889129148426e+00 97 KSP Residual norm 1.834399052016e+00 98 KSP Residual norm 1.780853147274e+00 99 KSP Residual norm 1.720938968492e+00 100 KSP Residual norm 1.666670021784e+00 101 KSP Residual norm 1.666646700440e+00 102 KSP Residual norm 1.666636179445e+00 103 KSP Residual norm 1.666618515877e+00 104 KSP Residual norm 1.666597776841e+00 105 KSP Residual norm 1.666548372022e+00 106 KSP Residual norm 1.666507323865e+00 107 KSP Residual norm 1.666414847890e+00 108 KSP Residual norm 1.666371007548e+00 109 KSP Residual norm 1.666246603478e+00 110 KSP Residual norm 1.666190378399e+00 111 KSP Residual norm 1.666146346680e+00 112 KSP Residual norm 1.666033891999e+00 113 KSP Residual norm 1.665523331111e+00 114 KSP Residual norm 1.665055600490e+00 115 KSP Residual norm 1.662937657071e+00 116 KSP Residual norm 1.660665567023e+00 117 KSP Residual norm 1.655779543628e+00 118 KSP Residual norm 1.650701962114e+00 119 KSP Residual norm 1.641266669022e+00 120 KSP Residual norm 1.630620732697e+00 121 KSP Residual norm 1.611268755409e+00 122 KSP Residual norm 1.596000653848e+00 123 KSP Residual norm 1.580962642610e+00 124 KSP Residual norm 1.570296077272e+00 125 KSP Residual norm 1.556134011983e+00 126 KSP Residual norm 1.543381660723e+00 127 KSP Residual norm 1.526974751699e+00 128 KSP Residual norm 1.509124894241e+00 129 KSP Residual norm 1.488395562711e+00 130 KSP Residual norm 1.464017993815e+00 131 KSP Residual norm 1.429703471449e+00 132 KSP Residual norm 1.388499817127e+00 133 KSP Residual norm 1.338114606516e+00 134 KSP Residual norm 1.294063089008e+00 135 KSP Residual norm 1.257164973266e+00 136 KSP Residual norm 1.225631006496e+00 137 KSP Residual norm 1.197324597026e+00 138 KSP Residual norm 1.179880255901e+00 139 KSP Residual norm 1.169741878388e+00 140 KSP Residual norm 1.157593883237e+00 141 KSP Residual norm 1.137503987850e+00 142 KSP Residual norm 1.109465455245e+00 143 KSP Residual norm 1.069948744536e+00 144 KSP Residual norm 1.028142160404e+00 145 KSP Residual norm 9.840785732947e-01 146 KSP Residual norm 9.507111349491e-01 147 KSP Residual norm 9.240624806011e-01 148 KSP Residual norm 9.068665897340e-01 149 KSP Residual norm 8.964166177453e-01 150 KSP Residual norm 8.921979647266e-01 151 KSP Residual norm 8.901778256863e-01 152 KSP Residual norm 8.872957430191e-01 153 KSP Residual norm 8.775766569363e-01 154 KSP Residual norm 8.511775594037e-01 155 KSP Residual norm 8.022410365738e-01 156 KSP Residual norm 7.289295591693e-01 157 KSP Residual norm 6.434004077838e-01 158 KSP Residual norm 5.621312062788e-01 159 KSP Residual norm 4.855860452659e-01 160 KSP Residual norm 4.262275129045e-01 161 KSP Residual norm 3.855958854359e-01 162 KSP Residual norm 3.595536528627e-01 163 KSP Residual norm 3.408088035541e-01 164 KSP Residual norm 3.227421183639e-01 165 KSP Residual norm 3.025977007496e-01 166 KSP Residual norm 2.753933126878e-01 167 KSP Residual norm 2.497380870084e-01 168 KSP Residual norm 2.314181657724e-01 169 KSP Residual norm 2.188249695058e-01 170 KSP Residual norm 2.112498991079e-01 171 KSP Residual norm 2.082370872920e-01 172 KSP Residual norm 2.071318079841e-01 173 KSP Residual norm 2.065526809781e-01 174 KSP Residual norm 2.060705227797e-01 175 KSP Residual norm 2.047046273401e-01 
176 KSP Residual norm 2.029701399785e-01 177 KSP Residual norm 2.002917530868e-01 178 KSP Residual norm 1.968720676027e-01 179 KSP Residual norm 1.928622438764e-01 180 KSP Residual norm 1.868245574497e-01 181 KSP Residual norm 1.777515286797e-01 182 KSP Residual norm 1.644665551736e-01 183 KSP Residual norm 1.496248021816e-01 184 KSP Residual norm 1.353130468729e-01 185 KSP Residual norm 1.229663189139e-01 186 KSP Residual norm 1.144358947500e-01 187 KSP Residual norm 1.082868255996e-01 188 KSP Residual norm 1.048880652156e-01 189 KSP Residual norm 1.031195062847e-01 190 KSP Residual norm 1.024718673721e-01 191 KSP Residual norm 1.022559174379e-01 192 KSP Residual norm 1.021036783396e-01 193 KSP Residual norm 1.019657881419e-01 194 KSP Residual norm 1.017980457938e-01 195 KSP Residual norm 1.016477359134e-01 196 KSP Residual norm 1.015707516307e-01 197 KSP Residual norm 1.014174585458e-01 198 KSP Residual norm 1.009440615061e-01 199 KSP Residual norm 9.915866894294e-02 200 KSP Residual norm 9.512008215184e-02 201 KSP Residual norm 9.510916826631e-02 202 KSP Residual norm 9.510773078797e-02 203 KSP Residual norm 9.510628781695e-02 204 KSP Residual norm 9.510575565002e-02 205 KSP Residual norm 9.510475251711e-02 206 KSP Residual norm 9.510310432136e-02 207 KSP Residual norm 9.509751978584e-02 208 KSP Residual norm 9.509749481405e-02 209 KSP Residual norm 9.508851576596e-02 210 KSP Residual norm 9.508751404773e-02 211 KSP Residual norm 9.508509492259e-02 212 KSP Residual norm 9.508109513659e-02 213 KSP Residual norm 9.504691055487e-02 214 KSP Residual norm 9.502281050170e-02 215 KSP Residual norm 9.486710123017e-02 216 KSP Residual norm 9.472518441275e-02 217 KSP Residual norm 9.435149394284e-02 218 KSP Residual norm 9.414908049023e-02 219 KSP Residual norm 9.356745513628e-02 220 KSP Residual norm 9.311472317663e-02 221 KSP Residual norm 9.219326331173e-02 222 KSP Residual norm 9.122066012265e-02 223 KSP Residual norm 8.990908305578e-02 224 KSP Residual norm 8.878211262491e-02 225 KSP Residual norm 8.688778679785e-02 226 KSP Residual norm 8.489546160419e-02 227 KSP Residual norm 8.202980281556e-02 228 KSP Residual norm 7.806516571878e-02 229 KSP Residual norm 7.420860430572e-02 230 KSP Residual norm 7.030645800753e-02 231 KSP Residual norm 6.688076405149e-02 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1.4941e-06, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000008 0.1494E-03 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 4.646785899754e+04 1 KSP Residual norm 3.162241963523e+04 2 KSP Residual norm 2.266755474816e+04 3 KSP Residual norm 1.760416996810e+04 4 KSP Residual norm 1.299757623853e+04 5 KSP Residual norm 9.645933075566e+03 6 KSP Residual norm 6.543295017150e+03 7 KSP Residual norm 4.115540904692e+03 8 KSP Residual norm 2.254605622422e+03 9 KSP Residual norm 1.263989792173e+03 10 KSP Residual norm 8.900191944198e+02 11 KSP Residual norm 6.980849611266e+02 12 KSP Residual norm 5.307290744447e+02 13 KSP Residual norm 4.512866645668e+02 14 KSP Residual norm 3.659273432897e+02 15 KSP Residual norm 3.024770236796e+02 16 KSP Residual norm 2.507012089798e+02 17 KSP Residual norm 2.202174960588e+02 18 KSP Residual norm 1.961310093090e+02 19 KSP Residual norm 1.789994885772e+02 20 KSP Residual norm 1.602154149970e+02 21 KSP Residual norm 1.442172391655e+02 22 KSP Residual norm 1.308318499035e+02 23 KSP Residual norm 1.214659030625e+02 24 KSP Residual norm 1.121437477073e+02 25 KSP Residual norm 1.031648411327e+02 26 KSP Residual norm 9.422427673988e+01 27 KSP Residual norm 8.567044856205e+01 28 KSP Residual norm 7.965408379279e+01 29 KSP Residual norm 7.537381860542e+01 30 KSP Residual norm 7.202298745462e+01 31 KSP Residual norm 6.919924454463e+01 32 KSP Residual norm 6.616297935307e+01 33 KSP Residual norm 6.336283335375e+01 34 KSP Residual norm 6.088800087541e+01 35 KSP Residual norm 5.861726798462e+01 36 KSP Residual norm 5.591522537135e+01 37 KSP Residual norm 5.310897571943e+01 38 KSP Residual norm 5.076057173301e+01 39 KSP Residual norm 4.836318888318e+01 40 KSP Residual norm 4.529490566091e+01 41 KSP Residual norm 4.221075436246e+01 42 KSP Residual norm 3.832997005633e+01 43 KSP Residual norm 3.429124033383e+01 44 KSP Residual norm 3.020765036337e+01 45 KSP Residual norm 2.696108435505e+01 46 KSP Residual norm 2.359774585096e+01 47 KSP Residual norm 2.061074646428e+01 48 KSP Residual norm 1.843457497693e+01 49 KSP Residual norm 1.719282319883e+01 50 KSP Residual norm 1.635065497967e+01 51 KSP Residual norm 1.590474043023e+01 52 KSP Residual norm 1.565624509311e+01 53 KSP Residual norm 1.550490689429e+01 54 KSP Residual norm 1.541401027656e+01 55 KSP Residual norm 1.535970284754e+01 56 KSP Residual norm 1.527502174515e+01 57 KSP Residual norm 1.517663510774e+01 58 KSP Residual norm 1.506671699214e+01 59 KSP Residual norm 1.489860076730e+01 60 KSP Residual norm 1.459967430148e+01 61 KSP Residual norm 1.412279142878e+01 62 KSP Residual norm 1.340690841327e+01 63 KSP Residual norm 1.247731094848e+01 64 KSP Residual norm 1.142243316674e+01 65 KSP Residual norm 9.961428273333e+00 66 KSP Residual norm 8.481147737731e+00 67 KSP Residual norm 7.206684986896e+00 68 KSP Residual norm 6.383691533928e+00 69 KSP Residual norm 5.832639901943e+00 70 KSP Residual norm 5.432734749361e+00 71 KSP Residual norm 5.146379698285e+00 72 KSP Residual norm 4.887647340210e+00 73 KSP Residual norm 4.681949636685e+00 74 KSP Residual norm 4.509282888616e+00 75 KSP Residual norm 4.395722233812e+00 76 KSP Residual norm 4.318693362313e+00 77 KSP Residual norm 4.258254178497e+00 78 KSP Residual norm 4.199655417310e+00 79 KSP Residual norm 4.130074597887e+00 80 KSP Residual norm 4.046243813050e+00 81 KSP Residual norm 3.935887894406e+00 82 KSP Residual norm 3.789722856844e+00 83 KSP Residual norm 3.607777098798e+00 84 KSP Residual norm 3.398720943175e+00 85 KSP Residual norm 3.202136361923e+00 86 KSP Residual norm 3.026145839437e+00 87 KSP Residual norm 2.881657493973e+00 88 KSP Residual norm 2.750710112900e+00 89 
KSP Residual norm 2.639061204225e+00 90 KSP Residual norm 2.534154650364e+00 91 KSP Residual norm 2.407362997564e+00 92 KSP Residual norm 2.266278488174e+00 93 KSP Residual norm 2.135668235264e+00 94 KSP Residual norm 2.037824996927e+00 95 KSP Residual norm 1.958637350418e+00 96 KSP Residual norm 1.888947780945e+00 97 KSP Residual norm 1.834206018615e+00 98 KSP Residual norm 1.780665311177e+00 99 KSP Residual norm 1.720772226464e+00 100 KSP Residual norm 1.666535916890e+00 101 KSP Residual norm 1.666512616863e+00 102 KSP Residual norm 1.666502105063e+00 103 KSP Residual norm 1.666484452071e+00 104 KSP Residual norm 1.666463714440e+00 105 KSP Residual norm 1.666414315783e+00 106 KSP Residual norm 1.666373253632e+00 107 KSP Residual norm 1.666280787418e+00 108 KSP Residual norm 1.666236924028e+00 109 KSP Residual norm 1.666112547580e+00 110 KSP Residual norm 1.666056310013e+00 111 KSP Residual norm 1.666012284025e+00 112 KSP Residual norm 1.665899780342e+00 113 KSP Residual norm 1.665389341622e+00 114 KSP Residual norm 1.664921508600e+00 115 KSP Residual norm 1.662804031224e+00 116 KSP Residual norm 1.660532182247e+00 117 KSP Residual norm 1.655648042222e+00 118 KSP Residual norm 1.650572332343e+00 119 KSP Residual norm 1.641143903348e+00 120 KSP Residual norm 1.630507694891e+00 121 KSP Residual norm 1.611181429498e+00 122 KSP Residual norm 1.595938325545e+00 123 KSP Residual norm 1.580925270578e+00 124 KSP Residual norm 1.570272641046e+00 125 KSP Residual norm 1.556122326172e+00 126 KSP Residual norm 1.543371116181e+00 127 KSP Residual norm 1.526957378354e+00 128 KSP Residual norm 1.509087687707e+00 129 KSP Residual norm 1.488329582236e+00 130 KSP Residual norm 1.463915548685e+00 131 KSP Residual norm 1.429563173530e+00 132 KSP Residual norm 1.388335832887e+00 133 KSP Residual norm 1.337957017087e+00 134 KSP Residual norm 1.293934830851e+00 135 KSP Residual norm 1.257072208985e+00 136 KSP Residual norm 1.225574500015e+00 137 KSP Residual norm 1.197301250593e+00 138 KSP Residual norm 1.179875356832e+00 139 KSP Residual norm 1.169742288429e+00 140 KSP Residual norm 1.157592400564e+00 141 KSP Residual norm 1.137491175092e+00 142 KSP Residual norm 1.109428815140e+00 143 KSP Residual norm 1.069873164992e+00 144 KSP Residual norm 1.028029310124e+00 145 KSP Residual norm 9.839374683457e-01 146 KSP Residual norm 9.505588139893e-01 147 KSP Residual norm 9.239080440484e-01 148 KSP Residual norm 9.067164074833e-01 149 KSP Residual norm 8.962720921303e-01 150 KSP Residual norm 8.920570430702e-01 151 KSP Residual norm 8.900393667456e-01 152 KSP Residual norm 8.871605726314e-01 153 KSP Residual norm 8.774468330367e-01 154 KSP Residual norm 8.510519600767e-01 155 KSP Residual norm 8.021187897424e-01 156 KSP Residual norm 7.288150132050e-01 157 KSP Residual norm 6.432975066353e-01 158 KSP Residual norm 5.620378079208e-01 159 KSP Residual norm 4.854986024440e-01 160 KSP Residual norm 4.261411555818e-01 161 KSP Residual norm 3.855071375565e-01 162 KSP Residual norm 3.594622876611e-01 163 KSP Residual norm 3.407175721585e-01 164 KSP Residual norm 3.226556535301e-01 165 KSP Residual norm 3.025210288453e-01 166 KSP Residual norm 2.753330710023e-01 167 KSP Residual norm 2.496947452834e-01 168 KSP Residual norm 2.313879605898e-01 169 KSP Residual norm 2.188033285691e-01 170 KSP Residual norm 2.112327574522e-01 171 KSP Residual norm 2.082207821413e-01 172 KSP Residual norm 2.071148041300e-01 173 KSP Residual norm 2.065345589101e-01 174 KSP Residual norm 2.060510391765e-01 175 KSP Residual norm 2.046829187112e-01 
176 KSP Residual norm 2.029457800704e-01 177 KSP Residual norm 2.002641090672e-01 178 KSP Residual norm 1.968410139196e-01 179 KSP Residual norm 1.928278685586e-01 180 KSP Residual norm 1.867862024192e-01 181 KSP Residual norm 1.777084796605e-01 182 KSP Residual norm 1.644184457829e-01 183 KSP Residual norm 1.495766185157e-01 184 KSP Residual norm 1.352711303485e-01 185 KSP Residual norm 1.229360775876e-01 186 KSP Residual norm 1.144177518051e-01 187 KSP Residual norm 1.082789830400e-01 188 KSP Residual norm 1.048868188059e-01 189 KSP Residual norm 1.031220805948e-01 190 KSP Residual norm 1.024760655782e-01 191 KSP Residual norm 1.022606212569e-01 192 KSP Residual norm 1.021083552611e-01 193 KSP Residual norm 1.019700582837e-01 194 KSP Residual norm 1.018016547454e-01 195 KSP Residual norm 1.016505904502e-01 196 KSP Residual norm 1.015729805317e-01 197 KSP Residual norm 1.014187469714e-01 198 KSP Residual norm 1.009439086556e-01 199 KSP Residual norm 9.915660338008e-02 200 KSP Residual norm 9.511765094197e-02 201 KSP Residual norm 9.510673001787e-02 202 KSP Residual norm 9.510529073500e-02 203 KSP Residual norm 9.510384688258e-02 204 KSP Residual norm 9.510331480217e-02 205 KSP Residual norm 9.510231187647e-02 206 KSP Residual norm 9.510066420769e-02 207 KSP Residual norm 9.509508093767e-02 208 KSP Residual norm 9.509505646449e-02 209 KSP Residual norm 9.508608518537e-02 210 KSP Residual norm 9.508508487860e-02 211 KSP Residual norm 9.508266557621e-02 212 KSP Residual norm 9.507866314439e-02 213 KSP Residual norm 9.504445614581e-02 214 KSP Residual norm 9.502033391298e-02 215 KSP Residual norm 9.486451969616e-02 216 KSP Residual norm 9.472248087946e-02 217 KSP Residual norm 9.434860636622e-02 218 KSP Residual norm 9.414611400660e-02 219 KSP Residual norm 9.356443665062e-02 220 KSP Residual norm 9.311183883005e-02 221 KSP Residual norm 9.219103231839e-02 222 KSP Residual norm 9.121932123142e-02 223 KSP Residual norm 8.990890838054e-02 224 KSP Residual norm 8.878256018248e-02 225 KSP Residual norm 8.688817166040e-02 226 KSP Residual norm 8.489423244501e-02 227 KSP Residual norm 8.202467896223e-02 228 KSP Residual norm 7.805350316673e-02 229 KSP Residual norm 7.419063654580e-02 230 KSP Residual norm 7.028431803387e-02 231 KSP Residual norm 6.685751630834e-02 232 KSP Residual norm 6.380711797294e-02 233 KSP Residual norm 6.155045331759e-02 234 KSP Residual norm 6.018003979142e-02 235 KSP Residual norm 5.940591200007e-02 236 KSP Residual norm 5.890639440520e-02 237 KSP Residual norm 5.848752826170e-02 238 KSP Residual norm 5.801574566942e-02 239 KSP Residual norm 5.739878145171e-02 240 KSP Residual norm 5.624796293368e-02 241 KSP Residual norm 5.452934040580e-02 242 KSP Residual norm 5.319665521357e-02 243 KSP Residual norm 5.202264298858e-02 244 KSP Residual norm 5.138992766373e-02 245 KSP Residual norm 5.102604688588e-02 246 KSP Residual norm 5.088229349730e-02 247 KSP Residual norm 5.071870773193e-02 248 KSP Residual norm 5.054432736650e-02 249 KSP Residual norm 5.028913620107e-02 250 KSP Residual norm 4.989355617557e-02 251 KSP Residual norm 4.892839333162e-02 252 KSP Residual norm 4.700994528759e-02 253 KSP Residual norm 4.456504460985e-02 254 KSP Residual norm 4.229141391984e-02 255 KSP Residual norm 4.036476436768e-02 256 KSP Residual norm 3.828564254222e-02 257 KSP Residual norm 3.547978966327e-02 258 KSP Residual norm 3.153846829491e-02 259 KSP Residual norm 2.667749784025e-02 260 KSP Residual norm 2.274456576565e-02 261 KSP Residual norm 2.033797940284e-02 262 KSP Residual norm 
1.911937325057e-02 263 KSP Residual norm 1.870784234382e-02 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=4.09832e-07, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000009 0.4098E-04 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.646794310590e+04 1 KSP Residual norm 3.162241144457e+04 2 KSP Residual norm 2.266758330804e+04 3 KSP Residual norm 1.760417191668e+04 4 KSP Residual norm 1.299758670251e+04 5 KSP Residual norm 9.645932097989e+03 6 KSP Residual norm 6.543290435478e+03 7 KSP Residual norm 4.115535858541e+03 8 KSP Residual norm 2.254598957612e+03 9 KSP Residual norm 1.263991449396e+03 10 KSP Residual norm 8.900182590806e+02 11 KSP Residual norm 6.980861928761e+02 12 KSP Residual norm 5.307262171508e+02 13 KSP Residual norm 4.512849622176e+02 14 KSP Residual norm 3.659236615256e+02 15 KSP Residual norm 3.024747741226e+02 16 KSP Residual norm 2.506993815693e+02 17 KSP Residual norm 2.202170454250e+02 18 KSP Residual norm 1.961319546780e+02 19 KSP Residual norm 1.790012692579e+02 20 KSP Residual norm 1.602190236520e+02 21 KSP Residual norm 1.442214343746e+02 22 KSP Residual norm 1.308375134742e+02 23 KSP Residual norm 1.214712456959e+02 24 KSP Residual norm 1.121496449570e+02 25 KSP Residual norm 1.031696554097e+02 26 KSP Residual norm 9.422898384346e+01 27 KSP Residual norm 8.567379879561e+01 28 KSP Residual norm 7.965718907008e+01 29 KSP Residual norm 7.537607163408e+01 30 KSP Residual norm 7.202499870580e+01 31 KSP Residual norm 6.920080031654e+01 32 KSP Residual norm 6.616434216430e+01 33 KSP Residual norm 6.336404004538e+01 34 KSP Residual norm 6.088909609771e+01 35 KSP Residual norm 5.861844772673e+01 36 KSP Residual norm 5.591630616401e+01 37 KSP Residual norm 5.311031080183e+01 38 KSP Residual norm 5.076181274355e+01 39 KSP Residual norm 4.836470703420e+01 40 KSP Residual norm 4.529617348546e+01 41 KSP Residual norm 4.221214012679e+01 42 KSP Residual norm 3.833099968552e+01 43 KSP Residual norm 3.429240068417e+01 44 KSP Residual norm 3.020848871835e+01 45 KSP Residual norm 2.696195815941e+01 46 KSP Residual norm 2.359842148717e+01 47 KSP Residual norm 2.061145350188e+01 48 KSP Residual norm 1.843513683908e+01 49 KSP Residual norm 1.719331526881e+01 50 KSP Residual norm 1.635107305796e+01 51 KSP Residual norm 1.590512684974e+01 52 KSP Residual norm 1.565662148799e+01 53 KSP Residual norm 1.550527586014e+01 54 KSP Residual norm 1.541439698856e+01 55 KSP Residual norm 1.536007310699e+01 56 KSP Residual norm 
1.527539425919e+01 57 KSP Residual norm 1.517696956921e+01 58 KSP Residual norm 1.506702907661e+01 59 KSP Residual norm 1.489882474582e+01 60 KSP Residual norm 1.459984137835e+01 61 KSP Residual norm 1.412278623074e+01 62 KSP Residual norm 1.340680725254e+01 63 KSP Residual norm 1.247709648053e+01 64 KSP Residual norm 1.142224760264e+01 65 KSP Residual norm 9.961300376347e+00 66 KSP Residual norm 8.481117358976e+00 67 KSP Residual norm 7.206680906327e+00 68 KSP Residual norm 6.383699562408e+00 69 KSP Residual norm 5.832639666836e+00 70 KSP Residual norm 5.432711365762e+00 71 KSP Residual norm 5.146344404820e+00 72 KSP Residual norm 4.887595755564e+00 73 KSP Residual norm 4.681901403202e+00 74 KSP Residual norm 4.509245883488e+00 75 KSP Residual norm 4.395710403693e+00 76 KSP Residual norm 4.318700048259e+00 77 KSP Residual norm 4.258281952223e+00 78 KSP Residual norm 4.199693501965e+00 79 KSP Residual norm 4.130130389175e+00 80 KSP Residual norm 4.046313315040e+00 81 KSP Residual norm 3.935984711809e+00 82 KSP Residual norm 3.789854031766e+00 83 KSP Residual norm 3.607956679253e+00 84 KSP Residual norm 3.398952847764e+00 85 KSP Residual norm 3.202415188256e+00 86 KSP Residual norm 3.026455726325e+00 87 KSP Residual norm 2.881977682188e+00 88 KSP Residual norm 2.751023760732e+00 89 KSP Residual norm 2.639353883879e+00 90 KSP Residual norm 2.534416757160e+00 91 KSP Residual norm 2.407579119589e+00 92 KSP Residual norm 2.266443645258e+00 93 KSP Residual norm 2.135787292804e+00 94 KSP Residual norm 2.037914242784e+00 95 KSP Residual norm 1.958703835160e+00 96 KSP Residual norm 1.888997344448e+00 97 KSP Residual norm 1.834241780211e+00 98 KSP Residual norm 1.780690812289e+00 99 KSP Residual norm 1.720789601595e+00 100 KSP Residual norm 1.666550401716e+00 101 KSP Residual norm 1.666527099891e+00 102 KSP Residual norm 1.666516584670e+00 103 KSP Residual norm 1.666498923018e+00 104 KSP Residual norm 1.666478169502e+00 105 KSP Residual norm 1.666428740981e+00 106 KSP Residual norm 1.666387642753e+00 107 KSP Residual norm 1.666295119458e+00 108 KSP Residual norm 1.666251215522e+00 109 KSP Residual norm 1.666126787424e+00 110 KSP Residual norm 1.666070522422e+00 111 KSP Residual norm 1.666026482957e+00 112 KSP Residual norm 1.665913947737e+00 113 KSP Residual norm 1.665403430808e+00 114 KSP Residual norm 1.664935500925e+00 115 KSP Residual norm 1.662817665628e+00 116 KSP Residual norm 1.660545368051e+00 117 KSP Residual norm 1.655660478666e+00 118 KSP Residual norm 1.650584278468e+00 119 KSP Residual norm 1.641155700937e+00 120 KSP Residual norm 1.630520840913e+00 121 KSP Residual norm 1.611199032449e+00 122 KSP Residual norm 1.595961697251e+00 123 KSP Residual norm 1.580954067886e+00 124 KSP Residual norm 1.570304761874e+00 125 KSP Residual norm 1.556156148504e+00 126 KSP Residual norm 1.543403797953e+00 127 KSP Residual norm 1.526985167703e+00 128 KSP Residual norm 1.509106885936e+00 129 KSP Residual norm 1.488338648035e+00 130 KSP Residual norm 1.463914852118e+00 131 KSP Residual norm 1.429558165709e+00 132 KSP Residual norm 1.388339277738e+00 133 KSP Residual norm 1.337985029284e+00 134 KSP Residual norm 1.293993462940e+00 135 KSP Residual norm 1.257155784566e+00 136 KSP Residual norm 1.225675173103e+00 137 KSP Residual norm 1.197408282337e+00 138 KSP Residual norm 1.179978560088e+00 139 KSP Residual norm 1.169837672066e+00 140 KSP Residual norm 1.157674589905e+00 141 KSP Residual norm 1.137551235046e+00 142 KSP Residual norm 1.109458810874e+00 143 KSP Residual norm 1.069866095757e+00 144 KSP 
Residual norm 1.027995266046e+00 145 KSP Residual norm 9.838899744228e-01 146 KSP Residual norm 9.505115458137e-01 147 KSP Residual norm 9.238676750750e-01 148 KSP Residual norm 9.066828365376e-01 149 KSP Residual norm 8.962424499389e-01 150 KSP Residual norm 8.920288225402e-01 151 KSP Residual norm 8.900120019444e-01 152 KSP Residual norm 8.871336370193e-01 153 KSP Residual norm 8.774193057046e-01 154 KSP Residual norm 8.510214939527e-01 155 KSP Residual norm 8.020858083748e-01 156 KSP Residual norm 7.287823158652e-01 157 KSP Residual norm 6.432672361620e-01 158 KSP Residual norm 5.620068951550e-01 159 KSP Residual norm 4.854615626631e-01 160 KSP Residual norm 4.260939954273e-01 161 KSP Residual norm 3.854513424254e-01 162 KSP Residual norm 3.594017668521e-01 163 KSP Residual norm 3.406580182381e-01 164 KSP Residual norm 3.226022532991e-01 165 KSP Residual norm 3.024796076452e-01 166 KSP Residual norm 2.753061646896e-01 167 KSP Residual norm 2.496779865785e-01 168 KSP Residual norm 2.313748277234e-01 169 KSP Residual norm 2.187887185454e-01 170 KSP Residual norm 2.112136448556e-01 171 KSP Residual norm 2.081971917529e-01 172 KSP Residual norm 2.070879536415e-01 173 KSP Residual norm 2.065050782067e-01 174 KSP Residual norm 2.060192750336e-01 175 KSP Residual norm 2.046476706809e-01 176 KSP Residual norm 2.029072725267e-01 177 KSP Residual norm 2.002220948854e-01 178 KSP Residual norm 1.967961861007e-01 179 KSP Residual norm 1.927815494278e-01 180 KSP Residual norm 1.867406387423e-01 181 KSP Residual norm 1.776672783632e-01 182 KSP Residual norm 1.643859810357e-01 183 KSP Residual norm 1.495542605159e-01 184 KSP Residual norm 1.352583342086e-01 185 KSP Residual norm 1.229313657374e-01 186 KSP Residual norm 1.144196228877e-01 187 KSP Residual norm 1.082864880341e-01 188 KSP Residual norm 1.048982657023e-01 189 KSP Residual norm 1.031362194973e-01 190 KSP Residual norm 1.024915098503e-01 191 KSP Residual norm 1.022765131754e-01 192 KSP Residual norm 1.021242221115e-01 193 KSP Residual norm 1.019855354507e-01 194 KSP Residual norm 1.018164614942e-01 195 KSP Residual norm 1.016646004383e-01 196 KSP Residual norm 1.015862993282e-01 197 KSP Residual norm 1.014310004435e-01 198 KSP Residual norm 1.009544114157e-01 199 KSP Residual norm 9.916428806450e-02 200 KSP Residual norm 9.512262858618e-02 201 KSP Residual norm 9.511168759429e-02 202 KSP Residual norm 9.511024320826e-02 203 KSP Residual norm 9.510879570347e-02 204 KSP Residual norm 9.510826277069e-02 205 KSP Residual norm 9.510725865324e-02 206 KSP Residual norm 9.510561064569e-02 207 KSP Residual norm 9.510002406592e-02 208 KSP Residual norm 9.510000021914e-02 209 KSP Residual norm 9.509102870196e-02 210 KSP Residual norm 9.509002948167e-02 211 KSP Residual norm 9.508760761180e-02 212 KSP Residual norm 9.508360286743e-02 213 KSP Residual norm 9.504934724857e-02 214 KSP Residual norm 9.502519747918e-02 215 KSP Residual norm 9.486917412467e-02 216 KSP Residual norm 9.472692301088e-02 217 KSP Residual norm 9.435252109202e-02 218 KSP Residual norm 9.414975743975e-02 219 KSP Residual norm 9.356748156215e-02 220 KSP Residual norm 9.311456624145e-02 221 KSP Residual norm 9.219348750040e-02 222 KSP Residual norm 9.122156469638e-02 223 KSP Residual norm 8.991084837146e-02 224 KSP Residual norm 8.878397996698e-02 225 KSP Residual norm 8.688814969143e-02 226 KSP Residual norm 8.489168937643e-02 227 KSP Residual norm 8.201749668379e-02 228 KSP Residual norm 7.803888558309e-02 229 KSP Residual norm 7.416889342667e-02 230 KSP Residual norm 
7.025814428144e-02 231 KSP Residual norm 6.683131935899e-02 232 KSP Residual norm 6.378422076232e-02 233 KSP Residual norm 6.153224623010e-02 234 KSP Residual norm 6.016566705703e-02 235 KSP Residual norm 5.939398436678e-02 236 KSP Residual norm 5.889637965047e-02 237 KSP Residual norm 5.847909370662e-02 238 KSP Residual norm 5.800856288600e-02 239 KSP Residual norm 5.739224566279e-02 240 KSP Residual norm 5.624115981895e-02 241 KSP Residual norm 5.452090032369e-02 242 KSP Residual norm 5.318624418262e-02 243 KSP Residual norm 5.201017376862e-02 244 KSP Residual norm 5.137643323998e-02 245 KSP Residual norm 5.101217280212e-02 246 KSP Residual norm 5.086842205965e-02 247 KSP Residual norm 5.070497740905e-02 248 KSP Residual norm 5.053080213257e-02 249 KSP Residual norm 5.027573755945e-02 250 KSP Residual norm 4.988008173340e-02 251 KSP Residual norm 4.891437285274e-02 252 KSP Residual norm 4.699466237240e-02 253 KSP Residual norm 4.454888040084e-02 254 KSP Residual norm 4.227523806995e-02 255 KSP Residual norm 4.034932942623e-02 256 KSP Residual norm 3.827123299276e-02 257 KSP Residual norm 3.546648609787e-02 258 KSP Residual norm 3.152613312200e-02 259 KSP Residual norm 2.666674037777e-02 260 KSP Residual norm 2.273569605434e-02 261 KSP Residual norm 2.033066834631e-02 262 KSP Residual norm 1.911302105857e-02 263 KSP Residual norm 1.870203505914e-02 264 KSP Residual norm 1.859585243834e-02 265 KSP Residual norm 1.856975421484e-02 266 KSP Residual norm 1.855846203228e-02 267 KSP Residual norm 1.855322447262e-02 268 KSP Residual norm 1.854500345722e-02 269 KSP Residual norm 1.854237822720e-02 270 KSP Residual norm 1.854233014646e-02 271 KSP Residual norm 1.853609241506e-02 272 KSP Residual norm 1.853285796832e-02 273 KSP Residual norm 1.852586315895e-02 274 KSP Residual norm 1.841984110565e-02 275 KSP Residual norm 1.814911260436e-02 276 KSP Residual norm 1.766539303344e-02 277 KSP Residual norm 1.688980488546e-02 278 KSP Residual norm 1.582939787606e-02 279 KSP Residual norm 1.443337616179e-02 280 KSP Residual norm 1.276842191257e-02 281 KSP Residual norm 1.117956806968e-02 282 KSP Residual norm 9.945624295268e-03 283 KSP Residual norm 9.275169831088e-03 284 KSP Residual norm 8.947963558305e-03 285 KSP Residual norm 8.794975817142e-03 286 KSP Residual norm 8.693837282785e-03 287 KSP Residual norm 8.624278332891e-03 288 KSP Residual norm 8.540378993711e-03 289 KSP Residual norm 8.468029663769e-03 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1.83022e-07, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs 
used during MatSetValues calls =0 not using I-node routines 0000010 0.1830E-04 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.646794056296e+04 1 KSP Residual norm 3.162242091009e+04 2 KSP Residual norm 2.266759502361e+04 3 KSP Residual norm 1.760418081738e+04 4 KSP Residual norm 1.299759397918e+04 5 KSP Residual norm 9.645936099992e+03 6 KSP Residual norm 6.543292839490e+03 7 KSP Residual norm 4.115536911869e+03 8 KSP Residual norm 2.254599945204e+03 9 KSP Residual norm 1.263992322528e+03 10 KSP Residual norm 8.900188031408e+02 11 KSP Residual norm 6.980863995846e+02 12 KSP Residual norm 5.307258692241e+02 13 KSP Residual norm 4.512842434537e+02 14 KSP Residual norm 3.659226805885e+02 15 KSP Residual norm 3.024737477063e+02 16 KSP Residual norm 2.506985732589e+02 17 KSP Residual norm 2.202165696071e+02 18 KSP Residual norm 1.961319159072e+02 19 KSP Residual norm 1.790016804003e+02 20 KSP Residual norm 1.602200341348e+02 21 KSP Residual norm 1.442229888884e+02 22 KSP Residual norm 1.308394849630e+02 23 KSP Residual norm 1.214734141373e+02 24 KSP Residual norm 1.121518713563e+02 25 KSP Residual norm 1.031717845984e+02 26 KSP Residual norm 9.423088244960e+01 27 KSP Residual norm 8.567539374737e+01 28 KSP Residual norm 7.965849585974e+01 29 KSP Residual norm 7.537713997088e+01 30 KSP Residual norm 7.202584630300e+01 31 KSP Residual norm 6.920148240076e+01 32 KSP Residual norm 6.616486498631e+01 33 KSP Residual norm 6.336446977019e+01 34 KSP Residual norm 6.088947073693e+01 35 KSP Residual norm 5.861880916656e+01 36 KSP Residual norm 5.591666788374e+01 37 KSP Residual norm 5.311069156112e+01 38 KSP Residual norm 5.076220065733e+01 39 KSP Residual norm 4.836508472601e+01 40 KSP Residual norm 4.529650549919e+01 41 KSP Residual norm 4.221240909645e+01 42 KSP Residual norm 3.833120934576e+01 43 KSP Residual norm 3.429257052164e+01 44 KSP Residual norm 3.020863559246e+01 45 KSP Residual norm 2.696209794563e+01 46 KSP Residual norm 2.359858211994e+01 47 KSP Residual norm 2.061165078003e+01 48 KSP Residual norm 1.843536499597e+01 49 KSP Residual norm 1.719355223216e+01 50 KSP Residual norm 1.635132220793e+01 51 KSP Residual norm 1.590538225607e+01 52 KSP Residual norm 1.565687992986e+01 53 KSP Residual norm 1.550553471517e+01 54 KSP Residual norm 1.541465683501e+01 55 KSP Residual norm 1.536033031854e+01 56 KSP Residual norm 1.527564698424e+01 57 KSP Residual norm 1.517721471664e+01 58 KSP Residual norm 1.506725918727e+01 59 KSP Residual norm 1.489902503880e+01 60 KSP Residual norm 1.459999572712e+01 61 KSP Residual norm 1.412287051112e+01 62 KSP Residual norm 1.340682255853e+01 63 KSP Residual norm 1.247707230351e+01 64 KSP Residual norm 1.142222716302e+01 65 KSP Residual norm 9.961328344467e+00 66 KSP Residual norm 8.481213097916e+00 67 KSP Residual norm 7.206828182121e+00 68 KSP Residual norm 6.383867511199e+00 69 KSP Residual norm 5.832803891694e+00 70 KSP Residual norm 5.432857402482e+00 71 KSP Residual norm 5.146468532459e+00 72 KSP Residual norm 4.887695490955e+00 73 KSP Residual norm 4.681982223602e+00 74 KSP Residual norm 4.509315415008e+00 75 KSP Residual norm 4.395776113456e+00 76 KSP Residual norm 4.318765067692e+00 77 KSP Residual norm 4.258346686070e+00 78 KSP Residual norm 4.199756797887e+00 79 KSP Residual norm 4.130191211177e+00 80 KSP Residual norm 4.046371861089e+00 81 KSP Residual norm 3.936042058124e+00 82 KSP Residual norm 3.789914531199e+00 83 KSP Residual norm 3.608025172921e+00 84 KSP Residual norm 3.399033578402e+00 85 KSP Residual norm 
3.202507881893e+00 86 KSP Residual norm 3.026555906042e+00 87 KSP Residual norm 2.882078843099e+00 88 KSP Residual norm 2.751120675122e+00 89 KSP Residual norm 2.639441946727e+00 90 KSP Residual norm 2.534492532410e+00 91 KSP Residual norm 2.407637294358e+00 92 KSP Residual norm 2.266481017361e+00 93 KSP Residual norm 2.135805753404e+00 94 KSP Residual norm 2.037918785856e+00 95 KSP Residual norm 1.958697948058e+00 96 KSP Residual norm 1.888983954656e+00 97 KSP Residual norm 1.834223993661e+00 98 KSP Residual norm 1.780671647359e+00 99 KSP Residual norm 1.720771876126e+00 100 KSP Residual norm 1.666536770109e+00 101 KSP Residual norm 1.666513473150e+00 102 KSP Residual norm 1.666502959356e+00 103 KSP Residual norm 1.666485298668e+00 104 KSP Residual norm 1.666464543976e+00 105 KSP Residual norm 1.666415113972e+00 106 KSP Residual norm 1.666374009252e+00 107 KSP Residual norm 1.666281480211e+00 108 KSP Residual norm 1.666237565552e+00 109 KSP Residual norm 1.666113135469e+00 110 KSP Residual norm 1.666056866326e+00 111 KSP Residual norm 1.666012828598e+00 112 KSP Residual norm 1.665900295828e+00 113 KSP Residual norm 1.665389846797e+00 114 KSP Residual norm 1.664921953371e+00 115 KSP Residual norm 1.662804391439e+00 116 KSP Residual norm 1.660532330253e+00 117 KSP Residual norm 1.655648077040e+00 118 KSP Residual norm 1.650572507041e+00 119 KSP Residual norm 1.641145478555e+00 120 KSP Residual norm 1.630512588552e+00 121 KSP Residual norm 1.611195161719e+00 122 KSP Residual norm 1.595961667610e+00 123 KSP Residual norm 1.580957657317e+00 124 KSP Residual norm 1.570310445267e+00 125 KSP Residual norm 1.556163620145e+00 126 KSP Residual norm 1.543411561787e+00 127 KSP Residual norm 1.526991880794e+00 128 KSP Residual norm 1.509110645571e+00 129 KSP Residual norm 1.488338195872e+00 130 KSP Residual norm 1.463909600575e+00 131 KSP Residual norm 1.429548831985e+00 132 KSP Residual norm 1.388329429705e+00 133 KSP Residual norm 1.337980167862e+00 134 KSP Residual norm 1.293996825700e+00 135 KSP Residual norm 1.257167080081e+00 136 KSP Residual norm 1.225693383842e+00 137 KSP Residual norm 1.197431430878e+00 138 KSP Residual norm 1.180003399757e+00 139 KSP Residual norm 1.169862075684e+00 140 KSP Residual norm 1.157697004519e+00 141 KSP Residual norm 1.137569095558e+00 142 KSP Residual norm 1.109469211829e+00 143 KSP Residual norm 1.069865369072e+00 144 KSP Residual norm 1.027984207364e+00 145 KSP Residual norm 9.838707471055e-01 146 KSP Residual norm 9.504885046048e-01 147 KSP Residual norm 9.238431663000e-01 148 KSP Residual norm 9.066586108979e-01 149 KSP Residual norm 8.962190554539e-01 150 KSP Residual norm 8.920061885173e-01 151 KSP Residual norm 8.899901598977e-01 152 KSP Residual norm 8.871132014251e-01 153 KSP Residual norm 8.774017876146e-01 154 KSP Residual norm 8.510081109038e-01 155 KSP Residual norm 8.020766019519e-01 156 KSP Residual norm 7.287768110259e-01 157 KSP Residual norm 6.432644624707e-01 158 KSP Residual norm 5.620049604574e-01 159 KSP Residual norm 4.854583733807e-01 160 KSP Residual norm 4.260879556293e-01 161 KSP Residual norm 3.854423650229e-01 162 KSP Residual norm 3.593906731896e-01 163 KSP Residual norm 3.406462382076e-01 164 KSP Residual norm 3.225911158772e-01 165 KSP Residual norm 3.024704481908e-01 166 KSP Residual norm 2.752997590853e-01 167 KSP Residual norm 2.496738533642e-01 168 KSP Residual norm 2.313719053971e-01 169 KSP Residual norm 2.187860753356e-01 170 KSP Residual norm 2.112105947380e-01 171 KSP Residual norm 2.081934802660e-01 172 KSP Residual norm 
2.070836328158e-01 173 KSP Residual norm 2.065002047204e-01 174 KSP Residual norm 2.060138761629e-01 175 KSP Residual norm 2.046414802031e-01 176 KSP Residual norm 2.029003322198e-01 177 KSP Residual norm 2.002143923994e-01 178 KSP Residual norm 1.967879484278e-01 179 KSP Residual norm 1.927731669790e-01 180 KSP Residual norm 1.867326105610e-01 181 KSP Residual norm 1.776601067783e-01 182 KSP Residual norm 1.643799801956e-01 183 KSP Residual norm 1.495494543007e-01 184 KSP Residual norm 1.352548079705e-01 185 KSP Residual norm 1.229292742630e-01 186 KSP Residual norm 1.144189997323e-01 187 KSP Residual norm 1.082871200783e-01 188 KSP Residual norm 1.048996886080e-01 189 KSP Residual norm 1.031381095659e-01 190 KSP Residual norm 1.024935802512e-01 191 KSP Residual norm 1.022786058184e-01 192 KSP Residual norm 1.021262302176e-01 193 KSP Residual norm 1.019873775168e-01 194 KSP Residual norm 1.018180847734e-01 195 KSP Residual norm 1.016660000681e-01 196 KSP Residual norm 1.015875287963e-01 197 KSP Residual norm 1.014319989542e-01 198 KSP Residual norm 1.009551049753e-01 199 KSP Residual norm 9.916468075219e-02 200 KSP Residual norm 9.512317883981e-02 201 KSP Residual norm 9.511223745525e-02 202 KSP Residual norm 9.511079266867e-02 203 KSP Residual norm 9.510934504338e-02 204 KSP Residual norm 9.510881212008e-02 205 KSP Residual norm 9.510780801459e-02 206 KSP Residual norm 9.510616003053e-02 207 KSP Residual norm 9.510057399808e-02 208 KSP Residual norm 9.510055023490e-02 209 KSP Residual norm 9.509158103325e-02 210 KSP Residual norm 9.509058190901e-02 211 KSP Residual norm 9.508815983588e-02 212 KSP Residual norm 9.508415374330e-02 213 KSP Residual norm 9.504989235023e-02 214 KSP Residual norm 9.502573544769e-02 215 KSP Residual norm 9.486968429529e-02 216 KSP Residual norm 9.472738886847e-02 217 KSP Residual norm 9.435291305707e-02 218 KSP Residual norm 9.415008183939e-02 219 KSP Residual norm 9.356769363663e-02 220 KSP Residual norm 9.311468882306e-02 221 KSP Residual norm 9.219351064808e-02 222 KSP Residual norm 9.122150475080e-02 223 KSP Residual norm 8.991069859226e-02 224 KSP Residual norm 8.878369496576e-02 225 KSP Residual norm 8.688759851579e-02 226 KSP Residual norm 8.489070499667e-02 227 KSP Residual norm 8.201574769123e-02 228 KSP Residual norm 7.803595193363e-02 229 KSP Residual norm 7.416471819358e-02 230 KSP Residual norm 7.025306261953e-02 231 KSP Residual norm 6.682590398438e-02 232 KSP Residual norm 6.377893222909e-02 233 KSP Residual norm 6.152736829948e-02 234 KSP Residual norm 6.016119184202e-02 235 KSP Residual norm 5.938978317395e-02 236 KSP Residual norm 5.889244029584e-02 237 KSP Residual norm 5.847540252415e-02 238 KSP Residual norm 5.800508562240e-02 239 KSP Residual norm 5.738887327765e-02 240 KSP Residual norm 5.623768416810e-02 241 KSP Residual norm 5.451704139950e-02 242 KSP Residual norm 5.318199192831e-02 243 KSP Residual norm 5.200548929935e-02 244 KSP Residual norm 5.137145664822e-02 245 KSP Residual norm 5.100698468417e-02 246 KSP Residual norm 5.086311752738e-02 247 KSP Residual norm 5.069951614995e-02 248 KSP Residual norm 5.052517434736e-02 249 KSP Residual norm 5.026992215475e-02 250 KSP Residual norm 4.987408865857e-02 251 KSP Residual norm 4.890818438174e-02 252 KSP Residual norm 4.698848817064e-02 253 KSP Residual norm 4.454318821273e-02 254 KSP Residual norm 4.227034224830e-02 255 KSP Residual norm 4.034532343129e-02 256 KSP Residual norm 3.826816461049e-02 257 KSP Residual norm 3.546422959034e-02 258 KSP Residual norm 3.152438750427e-02 259 KSP 
Residual norm 2.666534329919e-02 260 KSP Residual norm 2.273451643623e-02 261 KSP Residual norm 2.032964954510e-02 262 KSP Residual norm 1.911213212598e-02 263 KSP Residual norm 1.870124392934e-02 264 KSP Residual norm 1.859513016056e-02 265 KSP Residual norm 1.856908373010e-02 266 KSP Residual norm 1.855784143068e-02 267 KSP Residual norm 1.855264531313e-02 268 KSP Residual norm 1.854448304005e-02 269 KSP Residual norm 1.854188821635e-02 270 KSP Residual norm 1.854184383198e-02 271 KSP Residual norm 1.853557455946e-02 272 KSP Residual norm 1.853232759157e-02 273 KSP Residual norm 1.852533879538e-02 274 KSP Residual norm 1.841930886285e-02 275 KSP Residual norm 1.814854377540e-02 276 KSP Residual norm 1.766474415927e-02 277 KSP Residual norm 1.688902327559e-02 278 KSP Residual norm 1.582849871839e-02 279 KSP Residual norm 1.443244920700e-02 280 KSP Residual norm 1.276761236171e-02 281 KSP Residual norm 1.117896517141e-02 282 KSP Residual norm 9.945218890734e-03 283 KSP Residual norm 9.274855564470e-03 284 KSP Residual norm 8.947674940877e-03 285 KSP Residual norm 8.794670758608e-03 286 KSP Residual norm 8.693501092206e-03 287 KSP Residual norm 8.623910483801e-03 288 KSP Residual norm 8.539978225803e-03 289 KSP Residual norm 8.467608154458e-03 290 KSP Residual norm 8.354913555280e-03 291 KSP Residual norm 8.224429679520e-03 292 KSP Residual norm 8.055420778175e-03 293 KSP Residual norm 7.901727475913e-03 294 KSP Residual norm 7.736158017260e-03 295 KSP Residual norm 7.573268633665e-03 296 KSP Residual norm 7.275766809448e-03 297 KSP Residual norm 6.926264175499e-03 298 KSP Residual norm 6.455316921366e-03 299 KSP Residual norm 6.135871825629e-03 300 KSP Residual norm 5.815802936373e-03 301 KSP Residual norm 5.809278671103e-03 302 KSP Residual norm 5.803627952879e-03 303 KSP Residual norm 5.794936857507e-03 304 KSP Residual norm 5.789655155858e-03 305 KSP Residual norm 5.781477119539e-03 306 KSP Residual norm 5.776202093111e-03 307 KSP Residual norm 5.765554645772e-03 308 KSP Residual norm 5.762080109318e-03 309 KSP Residual norm 5.750454975867e-03 310 KSP Residual norm 5.748262391041e-03 311 KSP Residual norm 5.746673634119e-03 312 KSP Residual norm 5.746469425509e-03 313 KSP Residual norm 5.742812203391e-03 314 KSP Residual norm 5.742286605044e-03 315 KSP Residual norm 5.732355871339e-03 316 KSP Residual norm 5.730682670544e-03 317 KSP Residual norm 5.712171171785e-03 318 KSP Residual norm 5.707690227955e-03 319 KSP Residual norm 5.676376368431e-03 320 KSP Residual norm 5.639086679433e-03 321 KSP Residual norm 5.572856125008e-03 322 KSP Residual norm 5.487064483643e-03 323 KSP Residual norm 5.363315087169e-03 324 KSP Residual norm 5.264378585981e-03 325 KSP Residual norm 5.159674205926e-03 326 KSP Residual norm 5.087185444844e-03 327 KSP Residual norm 5.051759041343e-03 328 KSP Residual norm 5.022344061460e-03 329 KSP Residual norm 5.009797450522e-03 330 KSP Residual norm 4.995683397939e-03 331 KSP Residual norm 4.984097267028e-03 332 KSP Residual norm 4.965722685716e-03 333 KSP Residual norm 4.943113705146e-03 334 KSP Residual norm 4.912533391057e-03 335 KSP Residual norm 4.863367227476e-03 336 KSP Residual norm 4.786889743538e-03 337 KSP Residual norm 4.672466474345e-03 338 KSP Residual norm 4.540852484907e-03 339 KSP Residual norm 4.418716817189e-03 340 KSP Residual norm 4.301708151930e-03 341 KSP Residual norm 4.169066709030e-03 342 KSP Residual norm 4.033285474113e-03 343 KSP Residual norm 3.867290143524e-03 344 KSP Residual norm 3.721493727078e-03 345 KSP Residual norm 
3.556479659131e-03 346 KSP Residual norm 3.365623693566e-03 347 KSP Residual norm 3.085180720366e-03 348 KSP Residual norm 2.783861973680e-03 349 KSP Residual norm 2.502004149817e-03 350 KSP Residual norm 2.335237960090e-03 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=5.10987e-08, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000011 0.5110E-05 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.646794817862e+04 1 KSP Residual norm 3.162242022473e+04 2 KSP Residual norm 2.266759879554e+04 3 KSP Residual norm 1.760418098266e+04 4 KSP Residual norm 1.299759417060e+04 5 KSP Residual norm 9.645934719434e+03 6 KSP Residual norm 6.543291126285e+03 7 KSP Residual norm 4.115535473748e+03 8 KSP Residual norm 2.254598807031e+03 9 KSP Residual norm 1.263992303710e+03 10 KSP Residual norm 8.900185640390e+02 11 KSP Residual norm 6.980863856526e+02 12 KSP Residual norm 5.307253791507e+02 13 KSP Residual norm 4.512837937539e+02 14 KSP Residual norm 3.659219141456e+02 15 KSP Residual norm 3.024730616733e+02 16 KSP Residual norm 2.506979295463e+02 17 KSP Residual norm 2.202161328039e+02 18 KSP Residual norm 1.961317504103e+02 19 KSP Residual norm 1.790017523110e+02 20 KSP Residual norm 1.602205045150e+02 21 KSP Residual norm 1.442237053844e+02 22 KSP Residual norm 1.308405144875e+02 23 KSP Residual norm 1.214744985265e+02 24 KSP Residual norm 1.121530753700e+02 25 KSP Residual norm 1.031728689503e+02 26 KSP Residual norm 9.423191512980e+01 27 KSP Residual norm 8.567620298887e+01 28 KSP Residual norm 7.965921299605e+01 29 KSP Residual norm 7.537769563024e+01 30 KSP Residual norm 7.202631775398e+01 31 KSP Residual norm 6.920185140438e+01 32 KSP Residual norm 6.616516940982e+01 33 KSP Residual norm 6.336472575155e+01 34 KSP Residual norm 6.088969863615e+01 35 KSP Residual norm 5.861903774171e+01 36 KSP Residual norm 5.591688094047e+01 37 KSP Residual norm 5.311092334528e+01 38 KSP Residual norm 5.076240617862e+01 39 KSP Residual norm 4.836529990336e+01 40 KSP Residual norm 4.529665860372e+01 41 KSP Residual norm 4.221254351253e+01 42 KSP Residual norm 3.833126861123e+01 43 KSP Residual norm 3.429262285787e+01 44 KSP Residual norm 3.020864186320e+01 45 KSP Residual norm 2.696211244743e+01 46 KSP Residual norm 2.359858553862e+01 47 KSP Residual norm 2.061168420293e+01 48 KSP Residual norm 1.843541497525e+01 49 KSP Residual norm 1.719362192511e+01 50 KSP Residual norm 1.635140339042e+01 51 KSP Residual norm 1.590547211897e+01 52 KSP Residual 
norm 1.565697449274e+01 53 KSP Residual norm 1.550563058421e+01 54 KSP Residual norm 1.541475518815e+01 55 KSP Residual norm 1.536042735851e+01 56 KSP Residual norm 1.527574430274e+01 57 KSP Residual norm 1.517730691536e+01 58 KSP Residual norm 1.506734729924e+01 59 KSP Residual norm 1.489909821204e+01 60 KSP Residual norm 1.460005126490e+01 61 KSP Residual norm 1.412288771079e+01 62 KSP Residual norm 1.340680634019e+01 63 KSP Residual norm 1.247702364428e+01 64 KSP Residual norm 1.142217438868e+01 65 KSP Residual norm 9.961282387227e+00 66 KSP Residual norm 8.481185121766e+00 67 KSP Residual norm 7.206810863397e+00 68 KSP Residual norm 6.383858180062e+00 69 KSP Residual norm 5.832798049185e+00 70 KSP Residual norm 5.432852338110e+00 71 KSP Residual norm 5.146464378760e+00 72 KSP Residual norm 4.887691448197e+00 73 KSP Residual norm 4.681980254912e+00 74 KSP Residual norm 4.509316473713e+00 75 KSP Residual norm 4.395781745960e+00 76 KSP Residual norm 4.318774525041e+00 77 KSP Residual norm 4.258360040624e+00 78 KSP Residual norm 4.199772391385e+00 79 KSP Residual norm 4.130209415176e+00 80 KSP Residual norm 4.046391887596e+00 81 KSP Residual norm 3.936065347817e+00 82 KSP Residual norm 3.789941889868e+00 83 KSP Residual norm 3.608058063732e+00 84 KSP Residual norm 3.399071438071e+00 85 KSP Residual norm 3.202548835955e+00 86 KSP Residual norm 3.026597068546e+00 87 KSP Residual norm 2.882117544994e+00 88 KSP Residual norm 2.751155648505e+00 89 KSP Residual norm 2.639472573017e+00 90 KSP Residual norm 2.534519333513e+00 91 KSP Residual norm 2.407660118371e+00 92 KSP Residual norm 2.266501191463e+00 93 KSP Residual norm 2.135824403692e+00 94 KSP Residual norm 2.037936770818e+00 95 KSP Residual norm 1.958715137022e+00 96 KSP Residual norm 1.889000204637e+00 97 KSP Residual norm 1.834238871518e+00 98 KSP Residual norm 1.780684905104e+00 99 KSP Residual norm 1.720783199113e+00 100 KSP Residual norm 1.666546529404e+00 101 KSP Residual norm 1.666523232786e+00 102 KSP Residual norm 1.666512718691e+00 103 KSP Residual norm 1.666495057175e+00 104 KSP Residual norm 1.666474301142e+00 105 KSP Residual norm 1.666424868831e+00 106 KSP Residual norm 1.666383760687e+00 107 KSP Residual norm 1.666291225791e+00 108 KSP Residual norm 1.666247306293e+00 109 KSP Residual norm 1.666122871135e+00 110 KSP Residual norm 1.666066600345e+00 111 KSP Residual norm 1.666022563782e+00 112 KSP Residual norm 1.665910038180e+00 113 KSP Residual norm 1.665399623132e+00 114 KSP Residual norm 1.664931768017e+00 115 KSP Residual norm 1.662814345067e+00 116 KSP Residual norm 1.660542424462e+00 117 KSP Residual norm 1.655658388257e+00 118 KSP Residual norm 1.650583025167e+00 119 KSP Residual norm 1.641156214116e+00 120 KSP Residual norm 1.630523457097e+00 121 KSP Residual norm 1.611205815075e+00 122 KSP Residual norm 1.595971833308e+00 123 KSP Residual norm 1.580967172391e+00 124 KSP Residual norm 1.570319619399e+00 125 KSP Residual norm 1.556172473435e+00 126 KSP Residual norm 1.543420376229e+00 127 KSP Residual norm 1.527000687096e+00 128 KSP Residual norm 1.509119740789e+00 129 KSP Residual norm 1.488347845738e+00 130 KSP Residual norm 1.463920373265e+00 131 KSP Residual norm 1.429561488880e+00 132 KSP Residual norm 1.388344828086e+00 133 KSP Residual norm 1.337998578976e+00 134 KSP Residual norm 1.294017675066e+00 135 KSP Residual norm 1.257189303507e+00 136 KSP Residual norm 1.225716063782e+00 137 KSP Residual norm 1.197453456397e+00 138 KSP Residual norm 1.180024204344e+00 139 KSP Residual norm 1.169881733023e+00 140 KSP 
Residual norm 1.157715193123e+00 141 KSP Residual norm 1.137585019244e+00 142 KSP Residual norm 1.109482266385e+00 143 KSP Residual norm 1.069874723192e+00 144 KSP Residual norm 1.027990343661e+00 145 KSP Residual norm 9.838742335151e-01 146 KSP Residual norm 9.504905354113e-01 147 KSP Residual norm 9.238443944919e-01 148 KSP Residual norm 9.066595587520e-01 149 KSP Residual norm 8.962199741238e-01 150 KSP Residual norm 8.920072794854e-01 151 KSP Residual norm 8.899915844412e-01 152 KSP Residual norm 8.871153437550e-01 153 KSP Residual norm 8.774056312874e-01 154 KSP Residual norm 8.510149016763e-01 155 KSP Residual norm 8.020866050001e-01 156 KSP Residual norm 7.287890736505e-01 157 KSP Residual norm 6.432774916255e-01 158 KSP Residual norm 5.620173316995e-01 159 KSP Residual norm 4.854687212288e-01 160 KSP Residual norm 4.260956822319e-01 161 KSP Residual norm 3.854479074816e-01 162 KSP Residual norm 3.593947206994e-01 163 KSP Residual norm 3.406495705678e-01 164 KSP Residual norm 3.225941492446e-01 165 KSP Residual norm 3.024735113179e-01 166 KSP Residual norm 2.753025194714e-01 167 KSP Residual norm 2.496758883075e-01 168 KSP Residual norm 2.313729382162e-01 169 KSP Residual norm 2.187860642275e-01 170 KSP Residual norm 2.112095793913e-01 171 KSP Residual norm 2.081917690206e-01 172 KSP Residual norm 2.070814875689e-01 173 KSP Residual norm 2.064977295169e-01 174 KSP Residual norm 2.060111299911e-01 175 KSP Residual norm 2.046383515425e-01 176 KSP Residual norm 2.028969215770e-01 177 KSP Residual norm 2.002107994610e-01 178 KSP Residual norm 1.967844170377e-01 179 KSP Residual norm 1.927700487270e-01 180 KSP Residual norm 1.867304642941e-01 181 KSP Residual norm 1.776594851051e-01 182 KSP Residual norm 1.643812361055e-01 183 KSP Residual norm 1.495518903851e-01 184 KSP Residual norm 1.352575491109e-01 185 KSP Residual norm 1.229316254969e-01 186 KSP Residual norm 1.144208944734e-01 187 KSP Residual norm 1.082886368793e-01 188 KSP Residual norm 1.049009616255e-01 189 KSP Residual norm 1.031392513748e-01 190 KSP Residual norm 1.024946496172e-01 191 KSP Residual norm 1.022796201487e-01 192 KSP Residual norm 1.021271725287e-01 193 KSP Residual norm 1.019882327321e-01 194 KSP Residual norm 1.018188450053e-01 195 KSP Residual norm 1.016666748915e-01 196 KSP Residual norm 1.015881439429e-01 197 KSP Residual norm 1.014325381569e-01 198 KSP Residual norm 1.009555487070e-01 199 KSP Residual norm 9.916503368946e-02 200 KSP Residual norm 9.512353976735e-02 201 KSP Residual norm 9.511259829209e-02 202 KSP Residual norm 9.511115335871e-02 203 KSP Residual norm 9.510970571566e-02 204 KSP Residual norm 9.510917281021e-02 205 KSP Residual norm 9.510816868749e-02 206 KSP Residual norm 9.510652065050e-02 207 KSP Residual norm 9.510093471149e-02 208 KSP Residual norm 9.510091097090e-02 209 KSP Residual norm 9.509194237274e-02 210 KSP Residual norm 9.509094326417e-02 211 KSP Residual norm 9.508852107988e-02 212 KSP Residual norm 9.508451459660e-02 213 KSP Residual norm 9.505025128523e-02 214 KSP Residual norm 9.502609235340e-02 215 KSP Residual norm 9.487003099015e-02 216 KSP Residual norm 9.472771782103e-02 217 KSP Residual norm 9.435320025548e-02 218 KSP Residual norm 9.415032621956e-02 219 KSP Residual norm 9.356784225753e-02 220 KSP Residual norm 9.311473827663e-02 221 KSP Residual norm 9.219336570237e-02 222 KSP Residual norm 9.122113528254e-02 223 KSP Residual norm 8.991004536890e-02 224 KSP Residual norm 8.878281133970e-02 225 KSP Residual norm 8.688647796089e-02 226 KSP Residual norm 
  [ksp_monitor output continues: iterations 226-383, preconditioned residual norm decreasing from 8.4889e-02 to 9.7634e-04]
KSP Object:(coupledsolve_) 1 MPI processes
  type: gmres
    GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=2.10264e-08, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using PRECONDITIONED norm type for convergence test
PC Object:(coupledsolve_) 1 MPI processes
  type: ilu
    ILU: out-of-place factorization
    1 level of fill
    tolerance for zero pivot 2.22045e-14
    using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
    matrix ordering: natural
    factor fill ratio given 1, needed 2.44153
    Factored matrix follows:
      Mat Object:       1 MPI processes
        type: seqaij
        rows=8788000, cols=8788000
        package used to perform factorization: petsc
        total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: seqaij
    rows=8788000, cols=8788000
    total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node routines
 0000012  0.2103E-05  0.0000E+00
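[Editorial sketch, not taken from the original program: one way to obtain the (coupledsolve_) configuration reported by the -ksp_view output above, i.e. GMRES(100) with Modified Gram-Schmidt, ILU(1), and a relative tolerance that appears to be retightened every outer iteration. The "coupledsolve_" prefix and the reported settings come from the log; the function and variable names are assumptions.]

  #include <petscksp.h>

  /* Hypothetical helper: configure the inner KSP the way -coupledsolve_ksp_view
     reports it.  The view also shows an attached null space, which would be
     set elsewhere (e.g. with KSPSetNullSpace). */
  PetscErrorCode SetupCoupledSolve(KSP ksp, Mat A, PetscReal outer_residual)
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPSetOptionsPrefix(ksp, "coupledsolve_");CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
    ierr = KSPGMRESSetRestart(ksp, 100);CHKERRQ(ierr);
    ierr = KSPGMRESSetOrthogonalization(ksp, KSPGMRESModifiedGramSchmidtOrthogonalization);CHKERRQ(ierr);
    /* The successive views show rtol = 1e-2 * (previous outer residual),
       so the tolerance is presumably reset before each inner solve. */
    ierr = KSPSetTolerances(ksp, 1e-2*outer_residual, 1e-50, 1e4, 10000);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCILU);CHKERRQ(ierr);
    ierr = PCFactorSetLevels(pc, 1);CHKERRQ(ierr);   /* ILU(1); view reports fill ratio needed 2.44153 */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);     /* honor -coupledsolve_* runtime options */
    PetscFunctionReturn(0);
  }

[Roughly the same configuration and output can also be selected at run time with options such as -coupledsolve_ksp_type gmres -coupledsolve_ksp_gmres_restart 100 -coupledsolve_ksp_gmres_modifiedgramschmidt -coupledsolve_pc_type ilu -coupledsolve_pc_factor_levels 1 -coupledsolve_ksp_monitor -coupledsolve_ksp_view.]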
  Residual norms for coupledsolve_ solve.
  [ksp_monitor output: iterations 0-454, preconditioned residual norm decreasing from 4.6468e+04 to 2.3299e-04]
  [-ksp_view output: same GMRES(restart=100, Modified Gram-Schmidt) + ILU(1) configuration as above, now with relative tolerance 5.4108e-09]
 0000013  0.5411E-06  0.0000E+00
  Residual norms for coupledsolve_ solve.
  [ksp_monitor output: iterations 0-471, preconditioned residual norm decreasing from 4.6468e+04 to 1.1206e-04]
  [-ksp_view output: same GMRES(restart=100, Modified Gram-Schmidt) + ILU(1) configuration as above, now with relative tolerance 2.42143e-09]
 0000014  0.2421E-06  0.0000E+00
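[Editorial sketch: the per-outer-iteration summaries above (454, 471, and later 511 inner iterations to reach the requested tolerance) can be reported directly by the calling code after each inner solve. KSPSolve and the query routines below are standard PETSc calls; the surrounding function and variable names are illustrative assumptions.]

  #include <petscksp.h>

  /* Hypothetical post-solve reporting for one outer iteration. */
  PetscErrorCode SolveAndReport(KSP ksp, Vec b, Vec x, PetscInt outer_it)
  {
    PetscInt           its;
    PetscReal          rnorm;
    KSPConvergedReason reason;
    PetscErrorCode     ierr;

    PetscFunctionBeginUser;
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);   /* e.g. 454, 471, 511 in the log above */
    ierr = KSPGetResidualNorm(ksp, &rnorm);CHKERRQ(ierr);    /* final preconditioned residual norm  */
    ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_SELF,
                       "outer %D: KSP %s in %D iterations, final residual %g\n",
                       outer_it, KSPConvergedReasons[reason], its, (double)rnorm);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }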
  Residual norms for coupledsolve_ solve.
  [ksp_monitor output: iterations 0-511, preconditioned residual norm decreasing from 4.6468e+04 to 3.5809e-05]
  [-ksp_view output: same GMRES(restart=100, Modified Gram-Schmidt) + ILU(1) configuration as above, now with relative tolerance 7.71289e-10; the log is truncated at this point]
ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000015 0.7713E-07 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 4.646794820899e+04 1 KSP Residual norm 3.162242076737e+04 2 KSP Residual norm 2.266759987827e+04 3 KSP Residual norm 1.760418141741e+04 4 KSP Residual norm 1.299759449795e+04 5 KSP Residual norm 9.645934629749e+03 6 KSP Residual norm 6.543290973422e+03 7 KSP Residual norm 4.115535296707e+03 8 KSP Residual norm 2.254598774104e+03 9 KSP Residual norm 1.263992507458e+03 10 KSP Residual norm 8.900187992408e+02 11 KSP Residual norm 6.980866268682e+02 12 KSP Residual norm 5.307255288249e+02 13 KSP Residual norm 4.512839010567e+02 14 KSP Residual norm 3.659219422601e+02 15 KSP Residual norm 3.024730703737e+02 16 KSP Residual norm 2.506979426991e+02 17 KSP Residual norm 2.202161894880e+02 18 KSP Residual norm 1.961318752240e+02 19 KSP Residual norm 1.790019453230e+02 20 KSP Residual norm 1.602207929459e+02 21 KSP Residual norm 1.442240663618e+02 22 KSP Residual norm 1.308409395461e+02 23 KSP Residual norm 1.214749373733e+02 24 KSP Residual norm 1.121535142813e+02 25 KSP Residual norm 1.031732663136e+02 26 KSP Residual norm 9.423226807834e+01 27 KSP Residual norm 8.567649654448e+01 28 KSP Residual norm 7.965947268645e+01 29 KSP Residual norm 7.537792309374e+01 30 KSP Residual norm 7.202652449618e+01 31 KSP Residual norm 6.920204188935e+01 32 KSP Residual norm 6.616534761933e+01 33 KSP Residual norm 6.336489685963e+01 34 KSP Residual norm 6.088986346481e+01 35 KSP Residual norm 5.861920017382e+01 36 KSP Residual norm 5.591703639485e+01 37 KSP Residual norm 5.311107475454e+01 38 KSP Residual norm 5.076254698594e+01 39 KSP Residual norm 4.836543147442e+01 40 KSP Residual norm 4.529676540526e+01 41 KSP Residual norm 4.221262587245e+01 42 KSP Residual norm 3.833131373392e+01 43 KSP Residual norm 3.429264120656e+01 44 KSP Residual norm 3.020863307889e+01 45 KSP Residual norm 2.696209206379e+01 46 KSP Residual norm 2.359855888159e+01 47 KSP Residual norm 2.061166319364e+01 48 KSP Residual norm 1.843540039459e+01 49 KSP Residual norm 1.719361305322e+01 50 KSP Residual norm 1.635139961358e+01 51 KSP Residual norm 1.590547156595e+01 52 KSP Residual norm 1.565697549896e+01 53 KSP Residual norm 1.550563184332e+01 54 KSP Residual norm 1.541475665628e+01 55 KSP Residual norm 1.536042800080e+01 56 KSP Residual norm 1.527574375082e+01 57 KSP Residual norm 1.517730416740e+01 58 KSP Residual norm 1.506734202036e+01 59 KSP Residual norm 1.489908764776e+01 60 KSP Residual norm 1.460003346776e+01 61 KSP Residual norm 1.412285748193e+01 62 KSP Residual norm 1.340676458921e+01 63 KSP Residual norm 1.247697451702e+01 64 KSP Residual norm 1.142212946011e+01 65 KSP Residual norm 9.961252075805e+00 66 KSP Residual norm 8.481175718201e+00 67 KSP Residual norm 7.206819103175e+00 68 KSP Residual norm 
6.383876179541e+00 69 KSP Residual norm 5.832819615006e+00 70 KSP Residual norm 5.432873907996e+00 71 KSP Residual norm 5.146484146324e+00 72 KSP Residual norm 4.887708341839e+00 73 KSP Residual norm 4.681994675563e+00 74 KSP Residual norm 4.509329296295e+00 75 KSP Residual norm 4.395794276130e+00 76 KSP Residual norm 4.318787253012e+00 77 KSP Residual norm 4.258373205363e+00 78 KSP Residual norm 4.199785724010e+00 79 KSP Residual norm 4.130222774285e+00 80 KSP Residual norm 4.046405133887e+00 81 KSP Residual norm 3.936078645839e+00 82 KSP Residual norm 3.789955420792e+00 83 KSP Residual norm 3.608071911567e+00 84 KSP Residual norm 3.399084809541e+00 85 KSP Residual norm 3.202560820748e+00 86 KSP Residual norm 3.026606474824e+00 87 KSP Residual norm 2.882123626323e+00 88 KSP Residual norm 2.751158368300e+00 89 KSP Residual norm 2.639472452003e+00 90 KSP Residual norm 2.534517366302e+00 91 KSP Residual norm 2.407657234227e+00 92 KSP Residual norm 2.266499085399e+00 93 KSP Residual norm 2.135824151210e+00 94 KSP Residual norm 2.037938220478e+00 95 KSP Residual norm 1.958717852318e+00 96 KSP Residual norm 1.889003641921e+00 97 KSP Residual norm 1.834242367736e+00 98 KSP Residual norm 1.780687975558e+00 99 KSP Residual norm 1.720785519730e+00 100 KSP Residual norm 1.666548235620e+00 101 KSP Residual norm 1.666524940135e+00 102 KSP Residual norm 1.666514426342e+00 103 KSP Residual norm 1.666496765281e+00 104 KSP Residual norm 1.666476009713e+00 105 KSP Residual norm 1.666426578570e+00 106 KSP Residual norm 1.666385470610e+00 107 KSP Residual norm 1.666292936289e+00 108 KSP Residual norm 1.666249015817e+00 109 KSP Residual norm 1.666124582511e+00 110 KSP Residual norm 1.666068313142e+00 111 KSP Residual norm 1.666024279764e+00 112 KSP Residual norm 1.665911765170e+00 113 KSP Residual norm 1.665401400090e+00 114 KSP Residual norm 1.664933598256e+00 115 KSP Residual norm 1.662816384738e+00 116 KSP Residual norm 1.660544682643e+00 117 KSP Residual norm 1.655661036668e+00 118 KSP Residual norm 1.650586037021e+00 119 KSP Residual norm 1.641159717532e+00 120 KSP Residual norm 1.630527305334e+00 121 KSP Residual norm 1.611209780143e+00 122 KSP Residual norm 1.595975456969e+00 123 KSP Residual norm 1.580970290601e+00 124 KSP Residual norm 1.570322452493e+00 125 KSP Residual norm 1.556175156915e+00 126 KSP Residual norm 1.543423229669e+00 127 KSP Residual norm 1.527003948263e+00 128 KSP Residual norm 1.509123796585e+00 129 KSP Residual norm 1.488352901634e+00 130 KSP Residual norm 1.463926797764e+00 131 KSP Residual norm 1.429569435243e+00 132 KSP Residual norm 1.388354150635e+00 133 KSP Residual norm 1.338008540021e+00 134 KSP Residual norm 1.294027544802e+00 135 KSP Residual norm 1.257198684139e+00 136 KSP Residual norm 1.225724862325e+00 137 KSP Residual norm 1.197461568428e+00 138 KSP Residual norm 1.180031810001e+00 139 KSP Residual norm 1.169889062148e+00 140 KSP Residual norm 1.157722353867e+00 141 KSP Residual norm 1.137592017324e+00 142 KSP Residual norm 1.109489192043e+00 143 KSP Residual norm 1.069881349467e+00 144 KSP Residual norm 1.027996212098e+00 145 KSP Residual norm 9.838787526692e-01 146 KSP Residual norm 9.504937002617e-01 147 KSP Residual norm 9.238462867868e-01 148 KSP Residual norm 9.066607148063e-01 149 KSP Residual norm 8.962209185061e-01 150 KSP Residual norm 8.920083838305e-01 151 KSP Residual norm 8.899930225962e-01 152 KSP Residual norm 8.871175275252e-01 153 KSP Residual norm 8.774096256709e-01 154 KSP Residual norm 8.510220318430e-01 155 KSP Residual norm 8.020970089755e-01 
156 KSP Residual norm 7.288016156249e-01 157 KSP Residual norm 6.432906268503e-01 158 KSP Residual norm 5.620299536321e-01 159 KSP Residual norm 4.854799041426e-01 160 KSP Residual norm 4.261051186962e-01 161 KSP Residual norm 3.854558583944e-01 162 KSP Residual norm 3.594015924885e-01 163 KSP Residual norm 3.406557474513e-01 164 KSP Residual norm 3.225997398989e-01 165 KSP Residual norm 3.024784131243e-01 166 KSP Residual norm 2.753062559938e-01 167 KSP Residual norm 2.496783119768e-01 168 KSP Residual norm 2.313741977446e-01 169 KSP Residual norm 2.187864466343e-01 170 KSP Residual norm 2.112093555882e-01 171 KSP Residual norm 2.081912204893e-01 172 KSP Residual norm 2.070807718406e-01 173 KSP Residual norm 2.064969044019e-01 174 KSP Residual norm 2.060102327457e-01 175 KSP Residual norm 2.046374033842e-01 176 KSP Residual norm 2.028960156085e-01 177 KSP Residual norm 2.002100716468e-01 178 KSP Residual norm 1.967840485581e-01 179 KSP Residual norm 1.927702800414e-01 180 KSP Residual norm 1.867316538947e-01 181 KSP Residual norm 1.776618442764e-01 182 KSP Residual norm 1.643846963201e-01 183 KSP Residual norm 1.495556635578e-01 184 KSP Residual norm 1.352608966176e-01 185 KSP Residual norm 1.229340952173e-01 186 KSP Residual norm 1.144225979608e-01 187 KSP Residual norm 1.082897505083e-01 188 KSP Residual norm 1.049017051345e-01 189 KSP Residual norm 1.031397784489e-01 190 KSP Residual norm 1.024950652316e-01 191 KSP Residual norm 1.022799709701e-01 192 KSP Residual norm 1.021274610510e-01 193 KSP Residual norm 1.019884609121e-01 194 KSP Residual norm 1.018190203156e-01 195 KSP Residual norm 1.016668138214e-01 196 KSP Residual norm 1.015882632672e-01 197 KSP Residual norm 1.014326378376e-01 198 KSP Residual norm 1.009556354161e-01 199 KSP Residual norm 9.916513392061e-02 200 KSP Residual norm 9.512368538380e-02 201 KSP Residual norm 9.511274478236e-02 202 KSP Residual norm 9.511130001611e-02 203 KSP Residual norm 9.510985257092e-02 204 KSP Residual norm 9.510931973258e-02 205 KSP Residual norm 9.510831566974e-02 206 KSP Residual norm 9.510666762985e-02 207 KSP Residual norm 9.510108188183e-02 208 KSP Residual norm 9.510105813453e-02 209 KSP Residual norm 9.509209022090e-02 210 KSP Residual norm 9.509109113343e-02 211 KSP Residual norm 9.508866916632e-02 212 KSP Residual norm 9.508466276696e-02 213 KSP Residual norm 9.505040127082e-02 214 KSP Residual norm 9.502624314039e-02 215 KSP Residual norm 9.487018755074e-02 216 KSP Residual norm 9.472787517931e-02 217 KSP Residual norm 9.435335979333e-02 218 KSP Residual norm 9.415047197756e-02 219 KSP Residual norm 9.356795364966e-02 220 KSP Residual norm 9.311479898934e-02 221 KSP Residual norm 9.219330203039e-02 222 KSP Residual norm 9.122091980114e-02 223 KSP Residual norm 8.990963482712e-02 224 KSP Residual norm 8.878225650886e-02 225 KSP Residual norm 8.688582159444e-02 226 KSP Residual norm 8.488887073618e-02 227 KSP Residual norm 8.201414912578e-02 228 KSP Residual norm 7.803480100034e-02 229 KSP Residual norm 7.416373475897e-02 230 KSP Residual norm 7.025190233864e-02 231 KSP Residual norm 6.682434550799e-02 232 KSP Residual norm 6.377689492521e-02 233 KSP Residual norm 6.152493963587e-02 234 KSP Residual norm 6.015849670484e-02 235 KSP Residual norm 5.938689591366e-02 236 KSP Residual norm 5.888941363968e-02 237 KSP Residual norm 5.847224274652e-02 238 KSP Residual norm 5.800175520498e-02 239 KSP Residual norm 5.738530826903e-02 240 KSP Residual norm 5.623363890917e-02 241 KSP Residual norm 5.451227438244e-02 242 KSP Residual norm 
5.317669641447e-02 243 KSP Residual norm 5.199976060994e-02 244 KSP Residual norm 5.136549006714e-02 245 KSP Residual norm 5.100088301516e-02 246 KSP Residual norm 5.085697096328e-02 247 KSP Residual norm 5.069331792919e-02 248 KSP Residual norm 5.051893730666e-02 249 KSP Residual norm 5.026366546374e-02 250 KSP Residual norm 4.986785419431e-02 251 KSP Residual norm 4.890204703365e-02 252 KSP Residual norm 4.698277898145e-02 253 KSP Residual norm 4.453833645345e-02 254 KSP Residual norm 4.226657387554e-02 255 KSP Residual norm 4.034269721906e-02 256 KSP Residual norm 3.826682067816e-02 257 KSP Residual norm 3.546413911587e-02 258 KSP Residual norm 3.152519390496e-02 259 KSP Residual norm 2.666654704666e-02 260 KSP Residual norm 2.273566358078e-02 261 KSP Residual norm 2.033063246893e-02 262 KSP Residual norm 1.911302837641e-02 263 KSP Residual norm 1.870213294628e-02 264 KSP Residual norm 1.859603156285e-02 265 KSP Residual norm 1.857000308512e-02 266 KSP Residual norm 1.855878275541e-02 267 KSP Residual norm 1.855360801823e-02 268 KSP Residual norm 1.854548031699e-02 269 KSP Residual norm 1.854290530044e-02 270 KSP Residual norm 1.854286345349e-02 271 KSP Residual norm 1.853656759117e-02 272 KSP Residual norm 1.853330675049e-02 273 KSP Residual norm 1.852632944725e-02 274 KSP Residual norm 1.842031450563e-02 275 KSP Residual norm 1.814954070685e-02 276 KSP Residual norm 1.766570334322e-02 277 KSP Residual norm 1.688991379841e-02 278 KSP Residual norm 1.582930449636e-02 279 KSP Residual norm 1.443315120077e-02 280 KSP Residual norm 1.276815012247e-02 281 KSP Residual norm 1.117926002092e-02 282 KSP Residual norm 9.945262873317e-03 283 KSP Residual norm 9.274739395188e-03 284 KSP Residual norm 8.947483109571e-03 285 KSP Residual norm 8.794444387642e-03 286 KSP Residual norm 8.693258663739e-03 287 KSP Residual norm 8.623660557605e-03 288 KSP Residual norm 8.539727623879e-03 289 KSP Residual norm 8.467362753673e-03 290 KSP Residual norm 8.354686690725e-03 291 KSP Residual norm 8.224236311532e-03 292 KSP Residual norm 8.055285312143e-03 293 KSP Residual norm 7.901667464033e-03 294 KSP Residual norm 7.736180366841e-03 295 KSP Residual norm 7.573366107019e-03 296 KSP Residual norm 7.275951435176e-03 297 KSP Residual norm 6.926501460936e-03 298 KSP Residual norm 6.455541889207e-03 299 KSP Residual norm 6.135997568996e-03 300 KSP Residual norm 5.815693925701e-03 301 KSP Residual norm 5.809155730512e-03 302 KSP Residual norm 5.803495329190e-03 303 KSP Residual norm 5.794793550803e-03 304 KSP Residual norm 5.789509029071e-03 305 KSP Residual norm 5.781330542567e-03 306 KSP Residual norm 5.776060001268e-03 307 KSP Residual norm 5.765418674138e-03 308 KSP Residual norm 5.761954871014e-03 309 KSP Residual norm 5.750337588762e-03 310 KSP Residual norm 5.748149532079e-03 311 KSP Residual norm 5.746562707840e-03 312 KSP Residual norm 5.746357030764e-03 313 KSP Residual norm 5.742702573484e-03 314 KSP Residual norm 5.742177743093e-03 315 KSP Residual norm 5.732249311250e-03 316 KSP Residual norm 5.730578133999e-03 317 KSP Residual norm 5.712063107531e-03 318 KSP Residual norm 5.707582112662e-03 319 KSP Residual norm 5.676260458741e-03 320 KSP Residual norm 5.638939374706e-03 321 KSP Residual norm 5.572684845222e-03 322 KSP Residual norm 5.486828117123e-03 323 KSP Residual norm 5.363010856193e-03 324 KSP Residual norm 5.264035858141e-03 325 KSP Residual norm 5.159357930698e-03 326 KSP Residual norm 5.086912300173e-03 327 KSP Residual norm 5.051538230249e-03 328 KSP Residual norm 5.022162172143e-03 329 KSP 
Residual norm 5.009640426937e-03 330 KSP Residual norm 4.995543436341e-03 331 KSP Residual norm 4.983968600309e-03 332 KSP Residual norm 4.965602450619e-03 333 KSP Residual norm 4.943000408421e-03 334 KSP Residual norm 4.912419833617e-03 335 KSP Residual norm 4.863229988401e-03 336 KSP Residual norm 4.786700790944e-03 337 KSP Residual norm 4.672207708803e-03 338 KSP Residual norm 4.540537562900e-03 339 KSP Residual norm 4.418365115806e-03 340 KSP Residual norm 4.301359916666e-03 341 KSP Residual norm 4.168746636443e-03 342 KSP Residual norm 4.032997409484e-03 343 KSP Residual norm 3.867023623556e-03 344 KSP Residual norm 3.721243438578e-03 345 KSP Residual norm 3.556217889989e-03 346 KSP Residual norm 3.365328979174e-03 347 KSP Residual norm 3.084850936862e-03 348 KSP Residual norm 2.783555190338e-03 349 KSP Residual norm 2.501758951534e-03 350 KSP Residual norm 2.335050694541e-03 351 KSP Residual norm 2.213037167897e-03 352 KSP Residual norm 2.112129476653e-03 353 KSP Residual norm 2.015177394061e-03 354 KSP Residual norm 1.921591985140e-03 355 KSP Residual norm 1.846983886427e-03 356 KSP Residual norm 1.795958774742e-03 357 KSP Residual norm 1.761398316338e-03 358 KSP Residual norm 1.731744971419e-03 359 KSP Residual norm 1.698650181272e-03 360 KSP Residual norm 1.653138967081e-03 361 KSP Residual norm 1.597006768280e-03 362 KSP Residual norm 1.535496011728e-03 363 KSP Residual norm 1.486864749479e-03 364 KSP Residual norm 1.445887762864e-03 365 KSP Residual norm 1.414809899524e-03 366 KSP Residual norm 1.386467990980e-03 367 KSP Residual norm 1.368094978496e-03 368 KSP Residual norm 1.354718649627e-03 369 KSP Residual norm 1.345447991354e-03 370 KSP Residual norm 1.334573194482e-03 371 KSP Residual norm 1.322613081527e-03 372 KSP Residual norm 1.308206752593e-03 373 KSP Residual norm 1.289021877704e-03 374 KSP Residual norm 1.264987183561e-03 375 KSP Residual norm 1.235542045425e-03 376 KSP Residual norm 1.210199245792e-03 377 KSP Residual norm 1.191233370185e-03 378 KSP Residual norm 1.179442432859e-03 379 KSP Residual norm 1.169171129665e-03 380 KSP Residual norm 1.155344075758e-03 381 KSP Residual norm 1.125710666196e-03 382 KSP Residual norm 1.062970903900e-03 383 KSP Residual norm 9.762556633214e-04 384 KSP Residual norm 8.715938325298e-04 385 KSP Residual norm 8.012875105018e-04 386 KSP Residual norm 7.590051160232e-04 387 KSP Residual norm 7.325314113980e-04 388 KSP Residual norm 7.092702403806e-04 389 KSP Residual norm 6.905385804323e-04 390 KSP Residual norm 6.650945416878e-04 391 KSP Residual norm 6.329079460675e-04 392 KSP Residual norm 5.934111947678e-04 393 KSP Residual norm 5.507033384615e-04 394 KSP Residual norm 5.188828726111e-04 395 KSP Residual norm 4.985908057612e-04 396 KSP Residual norm 4.859093362318e-04 397 KSP Residual norm 4.790028538432e-04 398 KSP Residual norm 4.753365148561e-04 399 KSP Residual norm 4.724659632135e-04 400 KSP Residual norm 4.702452561267e-04 401 KSP Residual norm 4.702390351765e-04 402 KSP Residual norm 4.702345845105e-04 403 KSP Residual norm 4.702336962324e-04 404 KSP Residual norm 4.702110767488e-04 405 KSP Residual norm 4.701407708655e-04 406 KSP Residual norm 4.700502584088e-04 407 KSP Residual norm 4.698755792008e-04 408 KSP Residual norm 4.697421541860e-04 409 KSP Residual norm 4.693959896717e-04 410 KSP Residual norm 4.689236864800e-04 411 KSP Residual norm 4.679226852698e-04 412 KSP Residual norm 4.658899348373e-04 413 KSP Residual norm 4.629492696532e-04 414 KSP Residual norm 4.605063543842e-04 415 KSP Residual norm 
4.560269176983e-04 416 KSP Residual norm 4.513520390475e-04 417 KSP Residual norm 4.450278080215e-04 418 KSP Residual norm 4.404799204991e-04 419 KSP Residual norm 4.335154571563e-04 420 KSP Residual norm 4.264631913032e-04 421 KSP Residual norm 4.181177671534e-04 422 KSP Residual norm 4.088878546919e-04 423 KSP Residual norm 3.985620577737e-04 424 KSP Residual norm 3.864029984060e-04 425 KSP Residual norm 3.734774164136e-04 426 KSP Residual norm 3.635380328465e-04 427 KSP Residual norm 3.577283023425e-04 428 KSP Residual norm 3.555695534752e-04 429 KSP Residual norm 3.551301208574e-04 430 KSP Residual norm 3.550179308363e-04 431 KSP Residual norm 3.550171242782e-04 432 KSP Residual norm 3.550162633063e-04 433 KSP Residual norm 3.550162532351e-04 434 KSP Residual norm 3.549977756105e-04 435 KSP Residual norm 3.549704130061e-04 436 KSP Residual norm 3.548837850119e-04 437 KSP Residual norm 3.547872695997e-04 438 KSP Residual norm 3.545566260772e-04 439 KSP Residual norm 3.540575066932e-04 440 KSP Residual norm 3.530160856286e-04 441 KSP Residual norm 3.511554954001e-04 442 KSP Residual norm 3.487397180877e-04 443 KSP Residual norm 3.465620800106e-04 444 KSP Residual norm 3.448924457216e-04 445 KSP Residual norm 3.426222866913e-04 446 KSP Residual norm 3.395560600771e-04 447 KSP Residual norm 3.344884923351e-04 448 KSP Residual norm 3.289336439103e-04 449 KSP Residual norm 3.211703064033e-04 450 KSP Residual norm 3.124094462207e-04 451 KSP Residual norm 3.003489954096e-04 452 KSP Residual norm 2.839876696933e-04 453 KSP Residual norm 2.610237808449e-04 454 KSP Residual norm 2.330002634976e-04 455 KSP Residual norm 2.043925900147e-04 456 KSP Residual norm 1.811558179928e-04 457 KSP Residual norm 1.643807313360e-04 458 KSP Residual norm 1.527881498162e-04 459 KSP Residual norm 1.460858908095e-04 460 KSP Residual norm 1.426495199025e-04 461 KSP Residual norm 1.399612283907e-04 462 KSP Residual norm 1.373602999485e-04 463 KSP Residual norm 1.345857091568e-04 464 KSP Residual norm 1.316861427335e-04 465 KSP Residual norm 1.283133793096e-04 466 KSP Residual norm 1.249711324232e-04 467 KSP Residual norm 1.218948259787e-04 468 KSP Residual norm 1.193803696279e-04 469 KSP Residual norm 1.172332341933e-04 470 KSP Residual norm 1.148202045262e-04 471 KSP Residual norm 1.120629586644e-04 472 KSP Residual norm 1.091116441167e-04 473 KSP Residual norm 1.061723050322e-04 474 KSP Residual norm 1.033150688044e-04 475 KSP Residual norm 1.005014880236e-04 476 KSP Residual norm 9.646535630483e-05 477 KSP Residual norm 9.020933738294e-05 478 KSP Residual norm 8.296759033794e-05 479 KSP Residual norm 7.593625966553e-05 480 KSP Residual norm 7.097578933365e-05 481 KSP Residual norm 6.732988998414e-05 482 KSP Residual norm 6.412873790041e-05 483 KSP Residual norm 6.167786833766e-05 484 KSP Residual norm 6.035769984769e-05 485 KSP Residual norm 5.960644103590e-05 486 KSP Residual norm 5.899046838854e-05 487 KSP Residual norm 5.827107803373e-05 488 KSP Residual norm 5.813751844255e-05 489 KSP Residual norm 5.810315799590e-05 490 KSP Residual norm 5.803702209937e-05 491 KSP Residual norm 5.778769091719e-05 492 KSP Residual norm 5.734554115354e-05 493 KSP Residual norm 5.645638974202e-05 494 KSP Residual norm 5.434934669677e-05 495 KSP Residual norm 5.040611056136e-05 496 KSP Residual norm 4.552620049061e-05 497 KSP Residual norm 4.244133017525e-05 498 KSP Residual norm 3.980423557742e-05 499 KSP Residual norm 3.804067673840e-05 500 KSP Residual norm 3.666583472557e-05 501 KSP Residual norm 3.664625757887e-05 502 KSP 
Residual norm 3.664366300196e-05 503 KSP Residual norm 3.663021288051e-05 504 KSP Residual norm 3.662727247821e-05 505 KSP Residual norm 3.657499867291e-05 506 KSP Residual norm 3.652156888548e-05 507 KSP Residual norm 3.640542016812e-05 508 KSP Residual norm 3.625574112310e-05 509 KSP Residual norm 3.601141808469e-05 510 KSP Residual norm 3.589023724195e-05 511 KSP Residual norm 3.580907281057e-05 512 KSP Residual norm 3.578919639818e-05 513 KSP Residual norm 3.564505339443e-05 514 KSP Residual norm 3.555880752915e-05 515 KSP Residual norm 3.539369911404e-05 516 KSP Residual norm 3.522870124460e-05 517 KSP Residual norm 3.505167799097e-05 518 KSP Residual norm 3.486732506700e-05 519 KSP Residual norm 3.447155992189e-05 520 KSP Residual norm 3.368541633781e-05 521 KSP Residual norm 3.241794908384e-05 522 KSP Residual norm 3.071082769022e-05 523 KSP Residual norm 2.887798403104e-05 524 KSP Residual norm 2.732422721810e-05 525 KSP Residual norm 2.626683896557e-05 526 KSP Residual norm 2.553182411319e-05 527 KSP Residual norm 2.500000821245e-05 528 KSP Residual norm 2.473966089595e-05 529 KSP Residual norm 2.462914628294e-05 530 KSP Residual norm 2.457423894088e-05 531 KSP Residual norm 2.451837664784e-05 532 KSP Residual norm 2.437998317263e-05 533 KSP Residual norm 2.415824026520e-05 534 KSP Residual norm 2.394728366071e-05 535 KSP Residual norm 2.373648130184e-05 536 KSP Residual norm 2.353235724776e-05 537 KSP Residual norm 2.335112025751e-05 538 KSP Residual norm 2.324328002623e-05 539 KSP Residual norm 2.316336537669e-05 540 KSP Residual norm 2.306013033500e-05 541 KSP Residual norm 2.292513301106e-05 542 KSP Residual norm 2.272089490802e-05 543 KSP Residual norm 2.241217075528e-05 544 KSP Residual norm 2.199495432613e-05 545 KSP Residual norm 2.153388199436e-05 546 KSP Residual norm 2.108236140818e-05 547 KSP Residual norm 2.063720513125e-05 548 KSP Residual norm 2.020755800710e-05 549 KSP Residual norm 1.977765172809e-05 550 KSP Residual norm 1.936173741585e-05 551 KSP Residual norm 1.883849999339e-05 552 KSP Residual norm 1.808110100702e-05 553 KSP Residual norm 1.677710817069e-05 554 KSP Residual norm 1.521036213774e-05 555 KSP Residual norm 1.374436968929e-05 KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=3.12126e-10, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 2.44153 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 package used to perform factorization: petsc total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000016 0.3121E-07 0.0000E+00 Residual norms for coupledsolve_ solve. 
Residual norms for coupledsolve_ solve.
    0 KSP Residual norm 4.646794820121e+04
    ...
  100 KSP Residual norm 1.666548422081e+00
    ...
  200 KSP Residual norm 9.512370044069e-02
    ...
  300 KSP Residual norm 5.815695447129e-03
    ...
  400 KSP Residual norm 4.702447667809e-04
    ...
  500 KSP Residual norm 3.666582841476e-05
    ...
  596 KSP Residual norm 4.456044786944e-06
KSP Object:(coupledsolve_) 1 MPI processes
  type: gmres
    GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=9.6184e-11, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using PRECONDITIONED norm type for convergence test
PC Object:(coupledsolve_) 1 MPI processes
  type: ilu
    ILU: out-of-place factorization
    1 level of fill
    tolerance for zero pivot 2.22045e-14
    using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
    matrix ordering: natural
    factor fill ratio given 1, needed 2.44153
      Factored matrix follows:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=8788000, cols=8788000
          package used to perform factorization: petsc
          total: nonzeros=3.56993e+08, allocated nonzeros=3.56993e+08
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: seqaij
    rows=8788000, cols=8788000
    total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node routines
 0000017  0.9618E-08  0.0000E+00
 TIME FOR CALCULATION:  0.8095E+04
 L2-NORM ERROR U  VELOCITY    2.803369989662701E-005
 L2-NORM ERROR V  VELOCITY    2.790402445981405E-005
 L2-NORM ERROR W  VELOCITY    2.917168464344412E-005
 L2-NORM ERROR ABS. VELOCITY  3.168282567469746E-005
 L2-NORM ERROR PRESSURE       1.392955988162038E-003
 *** CALCULATION FINISHED - SEE RESULTS ***
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./caffa3d.cpld.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0024 with 1 processor, by gu08vomo Mon Feb  2 02:21:24 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015

                         Max       Max/Min        Avg      Total
Time (sec):           8.124e+03      1.00000   8.124e+03
Objects:              3.930e+02      1.00000   3.930e+02
Flops:                1.317e+13      1.00000   1.317e+13  1.317e+13
Flops/sec:            1.621e+09      1.00000   1.621e+09  1.621e+09
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 9.9606e+01   1.2%  2.6364e+07   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 1:        CPLD_SOL: 8.0248e+03  98.8%  1.3166e+13 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase            %F - percent flops in this phase
      %M - percent messages in this phase        %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

ThreadCommRunKer      73 1.0 6.5088e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNorm                1 1.0 2.3034e-01 1.0 1.76e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 67  0  0  0    76
VecScale               1 1.0 6.9630e-03 1.0 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 33  0  0  0  1262
VecSet               590 1.0 5.9857e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecScatterBegin      628 1.0 1.7977e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize           1 1.0 6.9640e-03 1.0 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 33  0  0  0  1262
MatAssemblyBegin      34 1.0 1.6212e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        34 1.0 1.7639e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
MatZeroEntries        17 1.0 1.3217e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0

--- Event Stage 1: CPLD_SOL

VecDot            224557 1.0 2.0988e+03 1.0 3.95e+12 1.0 0.0e+00 0.0e+00 0.0e+00 26 30  0  0  0  26 30  0  0  0  1881
VecMDot             4819 1.0 4.6719e+01 1.0 8.47e+10 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  1813
VecNorm             4836 1.0 2.7221e+01 1.0 8.50e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  3123
VecScale            4819 1.0 4.2187e+01 1.0 4.23e+10 1.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0  1004
VecCopy               72 1.0 5.5840e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                72 1.0 2.6911e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY           224667 1.0 2.9981e+03 1.0 3.95e+12 1.0 0.0e+00 0.0e+00 0.0e+00 37 30  0  0  0  37 30  0  0  0  1317
VecMAXPY            4874 1.0 8.9722e+01 1.0 1.68e+11 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  1877
VecNormalize        4819 1.0 6.9335e+01 1.0 1.27e+11 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  1832
MatMult             4819 1.0 7.0751e+02 1.0 1.37e+12 1.0 0.0e+00 0.0e+00 0.0e+00  9 10  0  0  0   9 10  0  0  0  1932
MatSolve            4819 1.0 1.7652e+03 1.0 3.40e+12 1.0 0.0e+00 0.0e+00 0.0e+00 22 26  0  0  0  22 26  0  0  0  1925
MatLUFactorNum        17 1.0 8.0536e+01 1.0 1.25e+11 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  1546
MatILUFactorSym       17 1.0 1.6276e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatGetRowIJ           17 1.0 4.2915e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering        17 1.0 6.3720e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView               34 1.0 3.3295e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog      4764 1.0 5.0960e+03 1.0 7.89e+12 1.0 0.0e+00 0.0e+00 0.0e+00 63 60  0  0  0  64 60  0  0  0  1549
KSPSetUp              17 1.0 1.0602e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve              17 1.0 8.0201e+03 1.0 1.32e+13 1.0 0.0e+00 0.0e+00 0.0e+00 99 100 0  0  0 100 100 0  0  0  1641
PCSetUp               17 1.0 2.4407e+02 1.0 1.25e+11 1.0 0.0e+00 0.0e+00 0.0e+00  3  1  0  0  0   3  1  0  0  0   510
PCApply             4819 1.0 1.7652e+03 1.0 3.40e+12 1.0 0.0e+00 0.0e+00 0.0e+00 22 26  0  0  0  22 26  0  0  0  1925
22 26 0 0 0 22 26 0 0 0 1925 --- Event Stage 2: Unknown ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Vector 71 280 8208424320 0 Vector Scatter 4 3 1956 0 Index Set 12 11 70312712 0 IS L to G Mapping 4 3 79093788 0 Matrix 2 3 6381679896 0 Krylov Solver 0 1 169864 0 Preconditioner 0 1 1016 0 --- Event Stage 1: CPLD_SOL Vector 212 0 0 0 Index Set 51 48 1124902016 0 Matrix 17 16 68542708736 0 Matrix Null Space 17 0 0 0 Krylov Solver 1 0 0 0 Preconditioner 1 0 0 0 Viewer 1 0 0 0 --- Event Stage 2: Unknown ======================================================================================================================== Average time to get PetscTime(): 9.53674e-08 #PETSc Option Table entries: -coupledsolve_ksp_gmres_modifiedgramschmidt -coupledsolve_ksp_gmres_restart 100 -coupledsolve_ksp_monitor -coupledsolve_ksp_view -coupledsolve_pc_factor_levels 1 -log_summary -on_error_abort #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml ----------------------------------------- Libraries compiled on Sun Feb 1 16:09:22 2015 on hla0003 Machine characteristics: Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3 Using PETSc arch: arch-openmpi-opt-intel-hlr-ext ----------------------------------------- Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc -fPIC -wd1572 -O3 -xHost ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 -fPIC -O3 -xHost ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include ----------------------------------------- Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 
-Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 
-Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl ----------------------------------------- From dave.mayhem23 at gmail.com Mon Feb 2 03:49:00 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 2 Feb 2015 10:49:00 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: <1422869962.961.2.camel@gmail.com> References: <1422869962.961.2.camel@gmail.com> Message-ID: On 2 February 2015 at 10:39, Fabian Gabel wrote: > Dear PETSc Team, > > I implemented a fully-coupled solution algorithm (finite volume method) > for the 3d stationary incompressible Navier-Stokes equations. Currently > I am solving the resulting linear systems using GMRES and ILU and I > wanted to ask, if solver convergence could be improved using a field > split preconditioner. Given the convergence in the attached file, I would say yes, fieldsplit should be better. > The possibility to use PCFIELDSPLIT (Matrix is > stored as interlaced) has already been implemented in my solver program, > but I am not sure on how to choose the parameters. > What do you mean? Is your matrix defined to have a block size of 4, or have you set the IS's for each split (u,v,w,p)? The options for fieldsplit will be different depending on the answer. Before discussion any options for fielldsplit, what is App? Is that an approximate pressure Schur complement? Cheers, Dave > Each field corresponds to one of the variables (u,v,w,p). Considering > the corresponding blocks A_.., the non-interlaced matrix would read as > > [A_uu 0 0 A_up] > [0 A_vv 0 A_vp] > [0 0 A_ww A_up] > [A_pu A_pv A_pw A_pp] > > where furthermore A_uu = A_vv = A_ww. This might be considered to > further improve the efficiency of the solve. > > You find attached the solver output for an analytical test case with 2e6 > cells each having 4 degrees of freedom. I used the command-line options: > > -log_summary > -coupledsolve_ksp_view > -coupledsolve_ksp_monitor > -coupledsolve_ksp_gmres_restart 100 > -coupledsolve_pc_factor_levels 1 > -coupledsolve_ksp_gmres_modifiedgramschmidt > > Regards, > Fabian Gabel > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gabel.fabian at gmail.com Mon Feb 2 04:10:32 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Mon, 02 Feb 2015 11:10:32 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: References: <1422869962.961.2.camel@gmail.com> Message-ID: <1422871832.961.4.camel@gmail.com> > > > > The possibility to use PCFIELDSPLIT (Matrix is > stored as interlaced) has already been implemented in my > solver program, > but I am not sure on how to choose the parameters. > > > What do you mean? Is your matrix defined to have a block size of 4, or > have you set the IS's for each split (u,v,w,p)? I used CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) ... The matrix is created as AIJ, but BAIJ with block size of 4 is also possible (though not used as you can see from the output) > The options for fieldsplit will be different depending on the answer. > > > Before discussion any options for fielldsplit, what is App? > > Is that an approximate pressure Schur complement? Sorry. A_ij, where i,j \in {u,v,w,p}, refers to the block matrices for each variable and the respective coupling matrices to the other variables. E.g. - A_pp is defined as the matrix resulting from the discretization of the pressure equation that considers only the pressure related terms. - A_up is defined as the matrix resulting from the discretization of the u momentum balance that considers the u-velocity-to-pressure coupling (i.e. the discretized x component of the pressure gradient) Note that the matrix is not stored as this, since I use field interlacing. > > > Cheers, > > Dave > > > Each field corresponds to one of the variables (u,v,w,p). > Considering > the corresponding blocks A_.., the non-interlaced matrix would > read as > > [A_uu 0 0 A_up] > [0 A_vv 0 A_vp] > [0 0 A_ww A_up] > [A_pu A_pv A_pw A_pp] > > where furthermore A_uu = A_vv = A_ww. This might be considered > to > further improve the efficiency of the solve. > > You find attached the solver output for an analytical test > case with 2e6 > cells each having 4 degrees of freedom. I used the > command-line options: > > -log_summary > -coupledsolve_ksp_view > -coupledsolve_ksp_monitor > -coupledsolve_ksp_gmres_restart 100 > -coupledsolve_pc_factor_levels 1 > -coupledsolve_ksp_gmres_modifiedgramschmidt > > Regards, > Fabian Gabel > > > From dave.mayhem23 at gmail.com Mon Feb 2 04:49:31 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 2 Feb 2015 11:49:31 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: <1422871832.961.4.camel@gmail.com> References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> Message-ID: > I used > > CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) > ... > > Here are two suggestions to play with: [1] When using the same object for the operator and preconditioner, you will need to use the fieldsplit factorization type = schur. This requires two splits (U,p). Thus, your basic field split configuration will look like -coupledsolve_pc_type fieldsplit -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_type SCHUR Petsc has some support to generate approximate pressure schur complements for you, but these will not be as good as the ones specifically constructed for your particular discretization. If you want to perform solves on your scalar sub-problems (e.g.
you have a nice AMG implementation for each scalar block), you will need to split the UU block again (nested fieldsplit) [2] If you assembled a different operator for your preconditioner in which the B_pp slot contained a pressure schur complement approximation, you could use the simpler and likely more robust option (assuming you know of a decent schur complement approximation for your discretisation and physical problem) -coupledsolve_pc_type fieldsplit -coupledsolve_pc_fieldsplit_type MULTIPLICATIVE which includes your U-p coupling, or just -coupledsolve_pc_fieldsplit_type ADDITIVE which would define the following preconditioner inv(B) = diag( inv(B_uu) , inv(B_vv) , inv(B_ww) , inv(B_pp) ) Option 2 would be better as your operator doesn't have a u_i-u_j, i != j coupling and you could use efficient AMG implementations for each of the scalar terms associated with the u-u, v-v, w-w coupling without having to split again. Also, fieldsplit will not be aware of the fact that the Auu, Avv, Aww blocks are all identical - thus it cannot do anything "smart" in order to save memory. Accordingly, the KSP defined for each u,v,w split will be a unique KSP object. If your A_ii are all identical and you want to save memory, you could use MatNest but as Matt will always yell out, "MatNest is ONLY a memory optimization and should ONLY be used once all solver exploration/testing is performed". > - A_pp is defined as the matrix resulting from the discretization of the > pressure equation that considers only the pressure related terms. > Hmm okay, I assumed that for incompressible NS the pressure equation would be just \div(u) = 0. Note that the matrix is not stored as this, since I use field > interlacing. > yeah sure > > > > > > Cheers, > > > > Dave > > > > > > Each field corresponds to one of the variables (u,v,w,p). > > Considering > > the corresponding blocks A_.., the non-interlaced matrix would > > read as > > > > [A_uu 0 0 A_up] > > [0 A_vv 0 A_vp] > > [0 0 A_ww A_up] > > [A_pu A_pv A_pw A_pp] > > > > where furthermore A_uu = A_vv = A_ww. This might be considered > > to > > > further improve the efficiency of the solve. > > > > You find attached the solver output for an analytical test > > case with 2e6 > > cells each having 4 degrees of freedom. I used the > > command-line options: > > > > -log_summary > > -coupledsolve_ksp_view > > -coupledsolve_ksp_monitor > > -coupledsolve_ksp_gmres_restart 100 > > -coupledsolve_pc_factor_levels 1 > > -coupledsolve_ksp_gmres_modifiedgramschmidt > > > > Regards, > > Fabian Gabel > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Feb 2 07:48:40 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 2 Feb 2015 07:48:40 -0600 Subject: [petsc-users] PETSc and AMPI In-Reply-To: <87d25t54pt.fsf@jedbrown.org> References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> Message-ID: > On Feb 1, 2015, at 10:15 PM, Jed Brown wrote: > > Barry Smith writes: > >>> On Feb 1, 2015, at 9:33 PM, Jed Brown wrote: >>> >>> Barry Smith writes: >>>> We could possibly "cheat" with AMPI to essentially have >>>> PetscInitialize()/Finalize() run through most their code only on >>>> thread 0 (assuming we have a way of determining thread 0) >>> >>> Just guard it with a lock.
>> >> Not sure what you mean here. We want that code to be only run >> through once, we don't want or need it to be run by each thread. It >> makes no sense for each thread to call TSInitializePackage() for >> example. > > Yes, as long as the threads see TSPackageInitialized as true, it's safe > to call. So the only problem is that we have a race condition. One way > to do this is to make TSPackageInitialized an int. The code looks > something like this (depending on the primitives) > > if (AtomicCompareAndSwap(&TSPackageInitialized,0,1)) { > do the initialization > TSPackageInitialized = 2; > MemoryFenceWrite(); > } else { > while (AccessOnce(TSPackageInitialized) != 2) CPURelax(); > } Way more complicated then needed :-) > >>> What about debugging and profiling? >> >> This is the same issue for "thread safety"* as well as AMPI. I >> don't think AMPI introduces any particular additional hitches. >> >> Barry >> >> * in the sense that it is currently implemented meaning each thread >> works on each own objects so doesn't need to lock "MatSetValues" >> etc. This other "thread safety" has its own can of worms. > > If AMPI creates threads dynamically, How could it possibly create threads dynamically and still be running in the MPI 1 model of a consistent number of MPI "processes" at all times? > we don't have the luxury of having > hooks that can run when threads are spawned or finish. How do we ensure > that profiling information has been propagated into the parent > structure? From negrut at engr.wisc.edu Mon Feb 2 22:25:12 2015 From: negrut at engr.wisc.edu (Dan Negrut) Date: Mon, 2 Feb 2015 22:25:12 -0600 Subject: [petsc-users] PETSc and AMPI In-Reply-To: References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> Message-ID: <01b001d03f69$672ddfc0$35899f40$@engr.wisc.edu> Barry - I'll make an attempt to translate what you guys discussed in this thread. Long story short, I imagine that current thread safety issues in PETSc combined with the philosophy of Charm, which asynchronously works with chares, will probably prevent me in the short term from using the PETSc-charm combo. Have I got this right? Thank you guys very much, I really appreciate that you spent the time on this issue. Dan -----Original Message----- From: Barry Smith [mailto:bsmith at mcs.anl.gov] Sent: Monday, February 02, 2015 7:49 AM To: Jed Brown Cc: Matthew Knepley; petsc-users at mcs.anl.gov; Dan Negrut Subject: Re: [petsc-users] PETSc and AMPI > On Feb 1, 2015, at 10:15 PM, Jed Brown wrote: > > Barry Smith writes: > >>> On Feb 1, 2015, at 9:33 PM, Jed Brown wrote: >>> >>> Barry Smith writes: >>>> We could possibly "cheat" with AMPI to essentially have >>>> PetscInitialize()/Finalize() run through most their code only on >>>> thread 0 (assuming we have a way of determining thread 0) >>> >>> Just guard it with a lock. >> >> Not sure what you mean here. We want that code to be only run >> through once, we don't want or need it to be run by each thread. It >> makes no sense for each thread to call TSInitializePackage() for >> example. > > Yes, as long as the threads see TSPackageInitialized as true, it's > safe to call. So the only problem is that we have a race condition. > One way to do this is to make TSPackageInitialized an int. 
The code > looks something like this (depending on the primitives) > > if (AtomicCompareAndSwap(&TSPackageInitialized,0,1)) { > do the initialization > TSPackageInitialized = 2; > MemoryFenceWrite(); > } else { > while (AccessOnce(TSPackageInitialized) != 2) CPURelax(); } Way more complicated then needed :-) > >>> What about debugging and profiling? >> >> This is the same issue for "thread safety"* as well as AMPI. I >> don't think AMPI introduces any particular additional hitches. >> >> Barry >> >> * in the sense that it is currently implemented meaning each thread >> works on each own objects so doesn't need to lock "MatSetValues" >> etc. This other "thread safety" has its own can of worms. > > If AMPI creates threads dynamically, How could it possibly create threads dynamically and still be running in the MPI 1 model of a consistent number of MPI "processes" at all times? > we don't have the luxury of having > hooks that can run when threads are spawned or finish. How do we > ensure that profiling information has been propagated into the parent > structure? From jed at jedbrown.org Mon Feb 2 23:27:20 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 02 Feb 2015 22:27:20 -0700 Subject: [petsc-users] PETSc and AMPI In-Reply-To: References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> Message-ID: <87pp9rh8ef.fsf@jedbrown.org> Barry Smith writes: > Way more complicated then needed :-) Heh, I was just spelling it out. >> If AMPI creates threads dynamically, > > How could it possibly create threads dynamically and still be > running in the MPI 1 model of a consistent number of MPI "processes" > at all times? I don't know, but it seems plausible that it would over-subscribe and then dynamically migrate MPI ranks around. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From jed at jedbrown.org Mon Feb 2 23:32:42 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 02 Feb 2015 22:32:42 -0700 Subject: [petsc-users] PETSc and AMPI In-Reply-To: <01b001d03f69$672ddfc0$35899f40$@engr.wisc.edu> References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> <01b001d03f69$672ddfc0$35899f40$@engr.wisc.edu> Message-ID: <87mw4vh85h.fsf@jedbrown.org> Dan Negrut writes: > Barry - I'll make an attempt to translate what you guys discussed in this > thread. > Long story short, I imagine that current thread safety issues in PETSc > combined with the philosophy of Charm, which asynchronously works with > chares, will probably prevent me in the short term from using the > PETSc-charm combo. > Have I got this right? I think so at this point, at least if you want PETSc to run in parallel. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue Feb 3 06:37:01 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Feb 2015 06:37:01 -0600 Subject: [petsc-users] PETSc and AMPI In-Reply-To: <87pp9rh8ef.fsf@jedbrown.org> References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> <87pp9rh8ef.fsf@jedbrown.org> Message-ID: <7274FA05-A625-4583-88E7-B945FAEB305F@mcs.anl.gov> > On Feb 2, 2015, at 11:27 PM, Jed Brown wrote: > > Barry Smith writes: >> Way more complicated then needed :-) > > Heh, I was just spelling it out. > >>> If AMPI creates threads dynamically, >> >> How could it possibly create threads dynamically and still be >> running in the MPI 1 model of a consistent number of MPI "processes" >> at all times? > > I don't know, but it seems plausible that it would over-subscribe It definitely over-subscribes. > and > then dynamically migrate MPI ranks around. Yes "cheating" with out global data structures won't work if it migrates the MPI ranks around but I can't image how it would do that. From erocha.ssa at gmail.com Tue Feb 3 08:04:40 2015 From: erocha.ssa at gmail.com (Eduardo) Date: Tue, 3 Feb 2015 12:04:40 -0200 Subject: [petsc-users] PETSc and AMPI In-Reply-To: <7274FA05-A625-4583-88E7-B945FAEB305F@mcs.anl.gov> References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> <87pp9rh8ef.fsf@jedbrown.org> <7274FA05-A625-4583-88E7-B945FAEB305F@mcs.anl.gov> Message-ID: If you can pack and unpack the global data (there are a few ways of doing that in AMPI), it should work, I believe. Further, even without migration, since you over-subscribe and each MPI rank in AMPI is implemented as a thread, you would need to guarantee that each thread in a physical processor has its own copy of the "global" variables. You could use TLS, for instance to privatize this global data. There some provision for that in AMPI, however, as far as I can tell it requires static compilation. On Tue, Feb 3, 2015 at 10:37 AM, Barry Smith wrote: > > > On Feb 2, 2015, at 11:27 PM, Jed Brown wrote: > > > > Barry Smith writes: > >> Way more complicated then needed :-) > > > > Heh, I was just spelling it out. > > > >>> If AMPI creates threads dynamically, > >> > >> How could it possibly create threads dynamically and still be > >> running in the MPI 1 model of a consistent number of MPI "processes" > >> at all times? > > > > I don't know, but it seems plausible that it would over-subscribe > > It definitely over-subscribes. > > > and > > then dynamically migrate MPI ranks around. > > Yes "cheating" with out global data structures won't work if it > migrates the MPI ranks around but I can't image how it would do that. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sanjay.Kharche at manchester.ac.uk Tue Feb 3 08:21:26 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Tue, 3 Feb 2015 14:21:26 +0000 Subject: [petsc-users] calloc with MPI and PetSc Message-ID: Dear All I have a code in C that uses Petsc and MPI. 
My code is an extension of ex15.c in the ts tutorials. I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. thanks Sanjay Relevant code in main(): PetscInt size = 0; /* Petsc/MPI */ PetscInt rank = 0; int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. int *usr_myend; int *usr_myblocksize; int R_base, transit; MPI_Status status; MPI_Request request; /*********************************end of declarations in main************************/ PetscInitialize(&argc,&argv,(char*)0,help); /* Initialize user application context */ user.da = NULL; user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ user.viewJacobian = PETSC_FALSE; MPI_Comm_size(PETSC_COMM_WORLD, &size); MPI_Comm_rank(PETSC_COMM_WORLD, &rank); printf("my size is %d, and rank is %d\n",size, rank); usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. // usr_myblocksize = (int*) calloc (size,sizeof(int)); . . . TSSolve(ts,u); // has a problem when I use 2 callocs. The output and error message: mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution my size is 4, and rank is 2 my size is 4, and rank is 0 my size is 4, and rank is 3 my size is 4, and rank is 1 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Floating point exception [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Floating point exception [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [2]PETSC ERROR: Floating point exception [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 From rupp at iue.tuwien.ac.at Tue Feb 3 08:42:40 2015 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Tue, 03 Feb 2015 15:42:40 +0100 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: References: Message-ID: <54D0DE60.3070908@iue.tuwien.ac.at> Hi Sanjay, this sounds a lot like a memory corruption somewhere in the code. Could you please verify first that the code is valgrind-clean? Does the same problem show up with one MPI rank? 
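As a side note, a minimal sketch of the same allocation done with PETSc's checked allocators instead of raw calloc(), so that -malloc_debug/CHKMEMQ and valgrind have a better chance of pinpointing an overwrite. The variable names mirror the snippet above; everything else is illustrative and not taken from the actual code:

  /* inside main(), after PetscInitialize(); PetscCalloc1()/PetscFree() come from petscsys.h (petsc-3.5) */
  PetscErrorCode ierr;
  PetscMPIInt    size;
  int            *usr_mybase, *usr_myend, *usr_myblocksize;

  ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr);

  /* zero-initialized like calloc(); allocation failures are reported through CHKERRQ */
  ierr = PetscCalloc1(size,&usr_mybase);CHKERRQ(ierr);
  ierr = PetscCalloc1(size,&usr_myend);CHKERRQ(ierr);
  ierr = PetscCalloc1(size,&usr_myblocksize);CHKERRQ(ierr);

  /* ... non-PETSc bookkeeping, TSSolve(), ... */

  ierr = PetscFree(usr_mybase);CHKERRQ(ierr);
  ierr = PetscFree(usr_myend);CHKERRQ(ierr);
  ierr = PetscFree(usr_myblocksize);CHKERRQ(ierr);

If the NaN/Inf errors persist with this in place, the corruption is most likely an out-of-bounds write somewhere else, which is exactly what the valgrind run should reveal.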
Best regards, Karli On 02/03/2015 03:21 PM, Sanjay Kharche wrote: > > Dear All > > I have a code in C that uses Petsc and MPI. My code is an extension of ex15.c in the ts tutorials. > > I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. > > thanks > Sanjay > > Relevant code in main(): > > PetscInt size = 0; /* Petsc/MPI */ > PetscInt rank = 0; > > int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. > int *usr_myend; > int *usr_myblocksize; > int R_base, transit; > MPI_Status status; > MPI_Request request; > /*********************************end of declarations in main************************/ > PetscInitialize(&argc,&argv,(char*)0,help); > /* Initialize user application context */ > user.da = NULL; > user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ > user.viewJacobian = PETSC_FALSE; > > MPI_Comm_size(PETSC_COMM_WORLD, &size); > MPI_Comm_rank(PETSC_COMM_WORLD, &rank); > > printf("my size is %d, and rank is %d\n",size, rank); > > usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. > // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. > // usr_myblocksize = (int*) calloc (size,sizeof(int)); > . > . > . > TSSolve(ts,u); // has a problem when I use 2 callocs. > > > The output and error message: > > mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution > my size is 4, and rank is 2 > my size is 4, and rank is 0 > my size is 4, and rank is 3 > my size is 4, and rank is 1 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Floating point exception > [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [1]PETSC ERROR: Floating point exception > [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [2]PETSC ERROR: Floating point exception > [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 > [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown > [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 > From Sanjay.Kharche at manchester.ac.uk Tue Feb 3 08:48:08 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Tue, 3 Feb 2015 14:48:08 +0000 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: <54D0DE60.3070908@iue.tuwien.ac.at> References: , <54D0DE60.3070908@iue.tuwien.ac.at> Message-ID: Hi Karl You are right - the code is not valgrind clean even on single processor. The Valgrind output below shows the line number of the TSSolve in my code. valgrind ./sk2d ==7907== Memcheck, a memory error detector ==7907== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. ==7907== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info ==7907== Command: ./sk2d ==7907== ==7907== Invalid read of size 4 ==7907== at 0x55985C6: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) ==7907== by 0x8049448: main (sk2d.c:109) ==7907== Address 0x580e9f4 is 68 bytes inside a block of size 71 alloc'd ==7907== at 0x4006D69: malloc (vg_replace_malloc.c:236) ==7907== by 0x5598542: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) ==7907== by 0x8049448: main (sk2d.c:109) ________________________________________ From: Karl Rupp [rupp at iue.tuwien.ac.at] Sent: 03 February 2015 14:42 To: Sanjay Kharche; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] calloc with MPI and PetSc Hi Sanjay, this sounds a lot like a memory corruption somewhere in the code. Could you please verify first that the code is valgrind-clean? Does the same problem show up with one MPI rank? Best regards, Karli On 02/03/2015 03:21 PM, Sanjay Kharche wrote: > > Dear All > > I have a code in C that uses Petsc and MPI. My code is an extension of ex15.c in the ts tutorials. > > I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. 
Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. > > thanks > Sanjay > > Relevant code in main(): > > PetscInt size = 0; /* Petsc/MPI */ > PetscInt rank = 0; > > int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. > int *usr_myend; > int *usr_myblocksize; > int R_base, transit; > MPI_Status status; > MPI_Request request; > /*********************************end of declarations in main************************/ > PetscInitialize(&argc,&argv,(char*)0,help); > /* Initialize user application context */ > user.da = NULL; > user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ > user.viewJacobian = PETSC_FALSE; > > MPI_Comm_size(PETSC_COMM_WORLD, &size); > MPI_Comm_rank(PETSC_COMM_WORLD, &rank); > > printf("my size is %d, and rank is %d\n",size, rank); > > usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. > // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. > // usr_myblocksize = (int*) calloc (size,sizeof(int)); > . > . > . > TSSolve(ts,u); // has a problem when I use 2 callocs. > > > The output and error message: > > mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution > my size is 4, and rank is 2 > my size is 4, and rank is 0 > my size is 4, and rank is 3 > my size is 4, and rank is 1 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Floating point exception > [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [1]PETSC ERROR: Floating point exception > [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [2]PETSC ERROR: Floating point exception > [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 > [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown > [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 > From rupp at iue.tuwien.ac.at Tue Feb 3 09:58:57 2015 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Tue, 03 Feb 2015 16:58:57 +0100 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: References: , <54D0DE60.3070908@iue.tuwien.ac.at> Message-ID: <54D0F041.2090804@iue.tuwien.ac.at> Hi Sanjay, is this the full output? The errors/warnings is due to OpenMPI, so they may not be harmful. You may try building and running with mpich instead to get rid of these. If these are the only errors reported by valgrind, can you also try to use malloc instead of calloc? Best regards, Karli On 02/03/2015 03:48 PM, Sanjay Kharche wrote: > > Hi Karl > > You are right - the code is not valgrind clean even on single processor. The Valgrind output below shows the line number of the TSSolve in my code. > > valgrind ./sk2d > ==7907== Memcheck, a memory error detector > ==7907== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. 
> ==7907== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info > ==7907== Command: ./sk2d > ==7907== > ==7907== Invalid read of size 4 > ==7907== at 0x55985C6: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ==7907== Address 0x580e9f4 is 68 bytes inside a block of size 71 alloc'd > ==7907== at 0x4006D69: malloc (vg_replace_malloc.c:236) > ==7907== by 0x5598542: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ________________________________________ > From: Karl Rupp [rupp at iue.tuwien.ac.at] > Sent: 03 February 2015 14:42 > To: Sanjay Kharche; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] calloc with MPI and PetSc > > Hi Sanjay, > > this sounds a lot like a memory corruption somewhere in the code. Could > you please verify first that the code is valgrind-clean? Does the same > problem show up with one MPI rank? > > Best regards, > Karli > > > On 02/03/2015 03:21 PM, Sanjay Kharche wrote: >> >> Dear All >> >> I have a code in C that uses Petsc and MPI. My code is an extension of ex15.c in the ts tutorials. >> >> I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. >> >> thanks >> Sanjay >> >> Relevant code in main(): >> >> PetscInt size = 0; /* Petsc/MPI */ >> PetscInt rank = 0; >> >> int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. 
>> int *usr_myend; >> int *usr_myblocksize; >> int R_base, transit; >> MPI_Status status; >> MPI_Request request; >> /*********************************end of declarations in main************************/ >> PetscInitialize(&argc,&argv,(char*)0,help); >> /* Initialize user application context */ >> user.da = NULL; >> user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ >> user.viewJacobian = PETSC_FALSE; >> >> MPI_Comm_size(PETSC_COMM_WORLD, &size); >> MPI_Comm_rank(PETSC_COMM_WORLD, &rank); >> >> printf("my size is %d, and rank is %d\n",size, rank); >> >> usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. >> // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. >> // usr_myblocksize = (int*) calloc (size,sizeof(int)); >> . >> . >> . >> TSSolve(ts,u); // has a problem when I use 2 callocs. >> >> >> The output and error message: >> >> mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution >> my size is 4, and rank is 2 >> my size is 4, and rank is 0 >> my size is 4, and rank is 3 >> my size is 4, and rank is 1 >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [1]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [2]PETSC ERROR: Floating point exception >> [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 >> [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown >> [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 >> > From bsmith at mcs.anl.gov Tue Feb 3 10:06:24 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Feb 2015 10:06:24 -0600 Subject: [petsc-users] PETSc and AMPI In-Reply-To: References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> <87pp9rh8ef.fsf@jedbrown.org> <7274FA05-A625-4583-88E7-B945FAEB305F@mcs.anl.gov> Message-ID: <00827347-603E-4CED-AB47-F871A9D34D77@mcs.anl.gov> > On Feb 3, 2015, at 8:04 AM, Eduardo wrote: > > If you can pack and unpack the global data (there are a few ways of doing that in AMPI), it should work, I believe. The "global" data contains function pointers which can't be trivially shipped around. > Further, even without migration, since you over-subscribe and each MPI rank in AMPI is implemented as a thread, you would need to guarantee that each thread in a physical processor has its own copy of the "global" variables. The "global" data we are talking about is essentially read only once it is initialized in PetscInitialize() Barry > You could use TLS, for instance to privatize this global data. 
There some provision for that in AMPI, however, as far as I can tell it requires static compilation. > > On Tue, Feb 3, 2015 at 10:37 AM, Barry Smith wrote: > > > On Feb 2, 2015, at 11:27 PM, Jed Brown wrote: > > > > Barry Smith writes: > >> Way more complicated then needed :-) > > > > Heh, I was just spelling it out. > > > >>> If AMPI creates threads dynamically, > >> > >> How could it possibly create threads dynamically and still be > >> running in the MPI 1 model of a consistent number of MPI "processes" > >> at all times? > > > > I don't know, but it seems plausible that it would over-subscribe > > It definitely over-subscribes. > > > and > > then dynamically migrate MPI ranks around. > > Yes "cheating" with out global data structures won't work if it migrates the MPI ranks around but I can't image how it would do that. > > > From S.R.Kharche at exeter.ac.uk Tue Feb 3 10:11:32 2015 From: S.R.Kharche at exeter.ac.uk (Kharche, Sanjay) Date: Tue, 3 Feb 2015 16:11:32 +0000 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: <54D0F041.2090804@iue.tuwien.ac.at> References: , <54D0DE60.3070908@iue.tuwien.ac.at> , <54D0F041.2090804@iue.tuwien.ac.at> Message-ID: Hi Karli The OpenMPI errors may be a consequence of the curropt memory that I cannot identify. I tried all combinations of memory allocation: (int *) calloc(size,sizeof(int)); // typecasting and calloc(size, sizeof(int)) and also tried it by replacing with malloc. None of them work. In addition, I have now added some very simple non-petsc part to my code - a for loop with some additions and substractions. This loop does not use the arrarys that I am trying to allocate memory, neither do they use Petsc. Now, even the first calloc of the 3 that I would like to use does not work! I will appreciate knowing the reason for this. thanks for your time. Sanjay ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Karl Rupp [rupp at iue.tuwien.ac.at] Sent: 03 February 2015 15:58 To: Sanjay Kharche; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] calloc with MPI and PetSc Hi Sanjay, is this the full output? The errors/warnings is due to OpenMPI, so they may not be harmful. You may try building and running with mpich instead to get rid of these. If these are the only errors reported by valgrind, can you also try to use malloc instead of calloc? Best regards, Karli On 02/03/2015 03:48 PM, Sanjay Kharche wrote: > > Hi Karl > > You are right - the code is not valgrind clean even on single processor. The Valgrind output below shows the line number of the TSSolve in my code. > > valgrind ./sk2d > ==7907== Memcheck, a memory error detector > ==7907== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. > ==7907== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info > ==7907== Command: ./sk2d > ==7907== > ==7907== Invalid read of size 4 > ==7907== at 0x55985C6: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? 
(in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ==7907== Address 0x580e9f4 is 68 bytes inside a block of size 71 alloc'd > ==7907== at 0x4006D69: malloc (vg_replace_malloc.c:236) > ==7907== by 0x5598542: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ________________________________________ > From: Karl Rupp [rupp at iue.tuwien.ac.at] > Sent: 03 February 2015 14:42 > To: Sanjay Kharche; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] calloc with MPI and PetSc > > Hi Sanjay, > > this sounds a lot like a memory corruption somewhere in the code. Could > you please verify first that the code is valgrind-clean? Does the same > problem show up with one MPI rank? > > Best regards, > Karli > > > On 02/03/2015 03:21 PM, Sanjay Kharche wrote: >> >> Dear All >> >> I have a code in C that uses Petsc and MPI. My code is an extension of ex15.c in the ts tutorials. >> >> I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. >> >> thanks >> Sanjay >> >> Relevant code in main(): >> >> PetscInt size = 0; /* Petsc/MPI */ >> PetscInt rank = 0; >> >> int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. >> int *usr_myend; >> int *usr_myblocksize; >> int R_base, transit; >> MPI_Status status; >> MPI_Request request; >> /*********************************end of declarations in main************************/ >> PetscInitialize(&argc,&argv,(char*)0,help); >> /* Initialize user application context */ >> user.da = NULL; >> user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ >> user.viewJacobian = PETSC_FALSE; >> >> MPI_Comm_size(PETSC_COMM_WORLD, &size); >> MPI_Comm_rank(PETSC_COMM_WORLD, &rank); >> >> printf("my size is %d, and rank is %d\n",size, rank); >> >> usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. >> // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. >> // usr_myblocksize = (int*) calloc (size,sizeof(int)); >> . >> . >> . >> TSSolve(ts,u); // has a problem when I use 2 callocs. 
>> >> >> The output and error message: >> >> mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution >> my size is 4, and rank is 2 >> my size is 4, and rank is 0 >> my size is 4, and rank is 3 >> my size is 4, and rank is 1 >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [1]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [2]PETSC ERROR: Floating point exception >> [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 >> [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown >> [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 >> > From bsmith at mcs.anl.gov Tue Feb 3 10:11:53 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Feb 2015 10:11:53 -0600 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: References: <, > <54D0DE60.3070908@iue.tuwien.ac.at> Message-ID: <98CE9AE5-E2FD-4D3D-8D44-FE376F0F772A@mcs.anl.gov> Do you get more valgrind errors or is that the only one? That one is likely harmless. Barry > On Feb 3, 2015, at 8:48 AM, Sanjay Kharche wrote: > > > Hi Karl > > You are right - the code is not valgrind clean even on single processor. The Valgrind output below shows the line number of the TSSolve in my code. > > valgrind ./sk2d > ==7907== Memcheck, a memory error detector > ==7907== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. > ==7907== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info > ==7907== Command: ./sk2d > ==7907== > ==7907== Invalid read of size 4 > ==7907== at 0x55985C6: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ==7907== Address 0x580e9f4 is 68 bytes inside a block of size 71 alloc'd > ==7907== at 0x4006D69: malloc (vg_replace_malloc.c:236) > ==7907== by 0x5598542: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? 
(in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ________________________________________ > From: Karl Rupp [rupp at iue.tuwien.ac.at] > Sent: 03 February 2015 14:42 > To: Sanjay Kharche; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] calloc with MPI and PetSc > > Hi Sanjay, > > this sounds a lot like a memory corruption somewhere in the code. Could > you please verify first that the code is valgrind-clean? Does the same > problem show up with one MPI rank? > > Best regards, > Karli > > > On 02/03/2015 03:21 PM, Sanjay Kharche wrote: >> >> Dear All >> >> I have a code in C that uses Petsc and MPI. My code is an extension of ex15.c in the ts tutorials. >> >> I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. >> >> thanks >> Sanjay >> >> Relevant code in main(): >> >> PetscInt size = 0; /* Petsc/MPI */ >> PetscInt rank = 0; >> >> int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. >> int *usr_myend; >> int *usr_myblocksize; >> int R_base, transit; >> MPI_Status status; >> MPI_Request request; >> /*********************************end of declarations in main************************/ >> PetscInitialize(&argc,&argv,(char*)0,help); >> /* Initialize user application context */ >> user.da = NULL; >> user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ >> user.viewJacobian = PETSC_FALSE; >> >> MPI_Comm_size(PETSC_COMM_WORLD, &size); >> MPI_Comm_rank(PETSC_COMM_WORLD, &rank); >> >> printf("my size is %d, and rank is %d\n",size, rank); >> >> usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. >> // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. >> // usr_myblocksize = (int*) calloc (size,sizeof(int)); >> . >> . >> . >> TSSolve(ts,u); // has a problem when I use 2 callocs. >> >> >> The output and error message: >> >> mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution >> my size is 4, and rank is 2 >> my size is 4, and rank is 0 >> my size is 4, and rank is 3 >> my size is 4, and rank is 1 >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [1]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [2]PETSC ERROR: Floating point exception >> [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 >> [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown >> [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 >> > From Sanjay.Kharche at manchester.ac.uk Tue Feb 3 10:20:08 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Tue, 3 Feb 2015 16:20:08 +0000 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: <98CE9AE5-E2FD-4D3D-8D44-FE376F0F772A@mcs.anl.gov> References: <,> <54D0DE60.3070908@iue.tuwien.ac.at> , <98CE9AE5-E2FD-4D3D-8D44-FE376F0F772A@mcs.anl.gov> Message-ID: Hi Karli, Barry The valgrind error I got was only that one. In any case, the calloc errors have now completely vanished. I will work on reproducing the errors and have a version of my simple program so that I can understand what was causing it. But that is for another time - as of now, my program is working as I want it to. thanks Sanjay ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: 03 February 2015 16:11 To: Sanjay Kharche Cc: Karl Rupp; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] calloc with MPI and PetSc Do you get more valgrind errors or is that the only one? That one is likely harmless. Barry > On Feb 3, 2015, at 8:48 AM, Sanjay Kharche wrote: > > > Hi Karl > > You are right - the code is not valgrind clean even on single processor. The Valgrind output below shows the line number of the TSSolve in my code. > > valgrind ./sk2d > ==7907== Memcheck, a memory error detector > ==7907== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. > ==7907== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info > ==7907== Command: ./sk2d > ==7907== > ==7907== Invalid read of size 4 > ==7907== at 0x55985C6: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ==7907== Address 0x580e9f4 is 68 bytes inside a block of size 71 alloc'd > ==7907== at 0x4006D69: malloc (vg_replace_malloc.c:236) > ==7907== by 0x5598542: opal_os_dirpath_create (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x553A2C7: orte_session_dir (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x545B584: ??? (in /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > ==7907== by 0x552C213: orte_init (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54E4FBB: ??? 
(in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x54FE30F: PMPI_Init_thread (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > ==7907== by 0x8049448: main (sk2d.c:109) > ________________________________________ > From: Karl Rupp [rupp at iue.tuwien.ac.at] > Sent: 03 February 2015 14:42 > To: Sanjay Kharche; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] calloc with MPI and PetSc > > Hi Sanjay, > > this sounds a lot like a memory corruption somewhere in the code. Could > you please verify first that the code is valgrind-clean? Does the same > problem show up with one MPI rank? > > Best regards, > Karli > > > On 02/03/2015 03:21 PM, Sanjay Kharche wrote: >> >> Dear All >> >> I have a code in C that uses Petsc and MPI. My code is an extension of ex15.c in the ts tutorials. >> >> I am trying to allocate memory for 3 int arrays, for which I have already declared int pointers. These arrays are not intended for use by the petsc functions. I am allocating memory using calloc. The use of 1 calloc call is fine, however when I try to allocate memory for 2 or more arrays, the TSSolve(ts,u) gives an error. I found this by including and excluding the TSSolve call. I have tried making the array pointers PetscInt but with same result. The first few lines of the error message are also pasted after the relevant code snippet. Can you let me know how I can allocate memory for 3 arrays. These arrays are not relevant to any petsc functions. >> >> thanks >> Sanjay >> >> Relevant code in main(): >> >> PetscInt size = 0; /* Petsc/MPI */ >> PetscInt rank = 0; >> >> int *usr_mybase; // mybase, myend, myblocksize are to be used in non-petsc part of code. >> int *usr_myend; >> int *usr_myblocksize; >> int R_base, transit; >> MPI_Status status; >> MPI_Request request; >> /*********************************end of declarations in main************************/ >> PetscInitialize(&argc,&argv,(char*)0,help); >> /* Initialize user application context */ >> user.da = NULL; >> user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC */ >> user.viewJacobian = PETSC_FALSE; >> >> MPI_Comm_size(PETSC_COMM_WORLD, &size); >> MPI_Comm_rank(PETSC_COMM_WORLD, &rank); >> >> printf("my size is %d, and rank is %d\n",size, rank); >> >> usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to calloc is ok. >> // usr_myend = (int*) calloc (size,sizeof(int)); // when I uncomment this call to calloc, TSSolve fails. error below. >> // usr_myblocksize = (int*) calloc (size,sizeof(int)); >> . >> . >> . >> TSSolve(ts,u); // has a problem when I use 2 callocs. >> >> >> The output and error message: >> >> mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution >> my size is 4, and rank is 2 >> my size is 4, and rank is 0 >> my size is 4, and rank is 3 >> my size is 4, and rank is 1 >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [1]PETSC ERROR: Floating point exception >> [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or infinite at beginning of function: Parameter number 2 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [2]PETSC ERROR: Floating point exception >> [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or infinite at beginning of function: Parameter number 2 >> [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown >> [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is not-a-number or infinite at beginning of function: Parameter number 2 >> > From knepley at gmail.com Tue Feb 3 10:32:38 2015 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Feb 2015 10:32:38 -0600 Subject: [petsc-users] calloc with MPI and PetSc In-Reply-To: References: <54D0DE60.3070908@iue.tuwien.ac.at> <54D0F041.2090804@iue.tuwien.ac.at> Message-ID: On Tue, Feb 3, 2015 at 10:11 AM, Kharche, Sanjay wrote: > > Hi Karli > > The OpenMPI errors may be a consequence of the curropt memory that I > cannot identify. > > I tried all combinations of memory allocation: > > (int *) calloc(size,sizeof(int)); // typecasting > > and > > calloc(size, sizeof(int)) > > and also tried it by replacing with malloc. None of them work. In > addition, I have now added some very simple non-petsc part to my code - a > for loop with some additions and substractions. This loop does not use the > arrarys that I am trying to allocate memory, neither do they use Petsc. > Now, even the first calloc of the 3 that I would like to use does not work! > I will appreciate knowing the reason for this. > Go to an example. If this does not happen, there is a bug in your code. So cd src/snes/examples/tutorials make ex5 ./ex5 -snes_monitor ./ex5 -snes_monitor If that is fine, you have a bug. Usually valgrind can find them. Thanks, Matt > thanks for your time. > Sanjay > > > ________________________________________ > From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] > on behalf of Karl Rupp [rupp at iue.tuwien.ac.at] > Sent: 03 February 2015 15:58 > To: Sanjay Kharche; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] calloc with MPI and PetSc > > Hi Sanjay, > > is this the full output? The errors/warnings is due to OpenMPI, so they > may not be harmful. You may try building and running with mpich instead > to get rid of these. If these are the only errors reported by valgrind, > can you also try to use malloc instead of calloc? > > Best regards, > Karli > > > > On 02/03/2015 03:48 PM, Sanjay Kharche wrote: > > > > Hi Karl > > > > You are right - the code is not valgrind clean even on single processor. > The Valgrind output below shows the line number of the TSSolve in my code. > > > > valgrind ./sk2d > > ==7907== Memcheck, a memory error detector > > ==7907== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. > > ==7907== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright > info > > ==7907== Command: ./sk2d > > ==7907== > > ==7907== Invalid read of size 4 > > ==7907== at 0x55985C6: opal_os_dirpath_create (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x553A2C7: orte_session_dir (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x545B584: ??? (in > /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > > ==7907== by 0x552C213: orte_init (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x54E4FBB: ??? 
(in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x54FE30F: PMPI_Init_thread (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > > ==7907== by 0x8049448: main (sk2d.c:109) > > ==7907== Address 0x580e9f4 is 68 bytes inside a block of size 71 alloc'd > > ==7907== at 0x4006D69: malloc (vg_replace_malloc.c:236) > > ==7907== by 0x5598542: opal_os_dirpath_create (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x553A2C7: orte_session_dir (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x554DAD1: orte_ess_base_app_setup (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x545B584: ??? (in > /usr/lib/openmpi/lib/openmpi/mca_ess_singleton.so) > > ==7907== by 0x552C213: orte_init (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x54E4FBB: ??? (in /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x54FE30F: PMPI_Init_thread (in > /usr/lib/openmpi/lib/libmpi.so.1.0.2) > > ==7907== by 0x4136FAA: PetscInitialize (pinit.c:781) > > ==7907== by 0x8049448: main (sk2d.c:109) > > ________________________________________ > > From: Karl Rupp [rupp at iue.tuwien.ac.at] > > Sent: 03 February 2015 14:42 > > To: Sanjay Kharche; petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] calloc with MPI and PetSc > > > > Hi Sanjay, > > > > this sounds a lot like a memory corruption somewhere in the code. Could > > you please verify first that the code is valgrind-clean? Does the same > > problem show up with one MPI rank? > > > > Best regards, > > Karli > > > > > > On 02/03/2015 03:21 PM, Sanjay Kharche wrote: > >> > >> Dear All > >> > >> I have a code in C that uses Petsc and MPI. My code is an extension of > ex15.c in the ts tutorials. > >> > >> I am trying to allocate memory for 3 int arrays, for which I have > already declared int pointers. These arrays are not intended for use by the > petsc functions. I am allocating memory using calloc. The use of 1 calloc > call is fine, however when I try to allocate memory for 2 or more arrays, > the TSSolve(ts,u) gives an error. I found this by including and excluding > the TSSolve call. I have tried making the array pointers PetscInt but with > same result. The first few lines of the error message are also pasted after > the relevant code snippet. Can you let me know how I can allocate memory > for 3 arrays. These arrays are not relevant to any petsc functions. > >> > >> thanks > >> Sanjay > >> > >> Relevant code in main(): > >> > >> PetscInt size = 0; /* Petsc/MPI > */ > >> PetscInt rank = 0; > >> > >> int *usr_mybase; // mybase, myend, myblocksize are to be used in > non-petsc part of code. > >> int *usr_myend; > >> int *usr_myblocksize; > >> int R_base, transit; > >> MPI_Status status; > >> MPI_Request request; > >> /*********************************end of declarations in > main************************/ > >> PetscInitialize(&argc,&argv,(char*)0,help); > >> /* Initialize user application context > */ > >> user.da = NULL; > >> user.boundary = 1; /* 0: Drichlet BC; 1: Neumann BC > */ > >> user.viewJacobian = PETSC_FALSE; > >> > >> MPI_Comm_size(PETSC_COMM_WORLD, &size); > >> MPI_Comm_rank(PETSC_COMM_WORLD, &rank); > >> > >> printf("my size is %d, and rank is %d\n",size, rank); > >> > >> usr_mybase = (int*) calloc (size,sizeof(int)); // 1st call to > calloc is ok. > >> // usr_myend = (int*) calloc (size,sizeof(int)); // when I > uncomment this call to calloc, TSSolve fails. error below. > >> // usr_myblocksize = (int*) calloc (size,sizeof(int)); > >> . > >> . > >> . 
> >> TSSolve(ts,u); // has a problem when I use 2 callocs. > >> > >> > >> The output and error message: > >> > >> mpiexec -n 4 ./sk2d -draw_pause .1 -ts_monitor_draw_solution > >> my size is 4, and rank is 2 > >> my size is 4, and rank is 0 > >> my size is 4, and rank is 3 > >> my size is 4, and rank is 1 > >> [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > >> [0]PETSC ERROR: Floating point exception > >> [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > >> [1]PETSC ERROR: Floating point exception > >> [1]PETSC ERROR: Vec entry at local location 320 is not-a-number or > infinite at beginning of function: Parameter number 2 > >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > >> [2]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > >> [2]PETSC ERROR: Floating point exception > >> [2]PETSC ERROR: Vec entry at local location 10 is not-a-number or > infinite at beginning of function: Parameter number 2 > >> [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > >> [2]PETSC ERROR: Petsc Release Version 3.5.2, unknown > >> [3]PETSC ERROR: [0]PETSC ERROR: Vec entry at local location 293 is > not-a-number or infinite at beginning of function: Parameter number 2 > >> > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From erocha.ssa at gmail.com Tue Feb 3 11:28:20 2015 From: erocha.ssa at gmail.com (Eduardo) Date: Tue, 3 Feb 2015 15:28:20 -0200 Subject: [petsc-users] PETSc and AMPI In-Reply-To: <00827347-603E-4CED-AB47-F871A9D34D77@mcs.anl.gov> References: <030a01d03ca3$a5244da0$ef6ce8e0$@engr.wisc.edu> <87twz65lat.fsf@jedbrown.org> <87oapd5b1g.fsf@jedbrown.org> <26042233-4C7B-4006-9BB2-8AB873E1C935@mcs.anl.gov> <87fvap56o4.fsf@jedbrown.org> <4E5CE853-8950-4607-A7AF-DBE8C6F39712@mcs.anl.gov> <87d25t54pt.fsf@jedbrown.org> <87pp9rh8ef.fsf@jedbrown.org> <7274FA05-A625-4583-88E7-B945FAEB305F@mcs.anl.gov> <00827347-603E-4CED-AB47-F871A9D34D77@mcs.anl.gov> Message-ID: On Tue, Feb 3, 2015 at 2:06 PM, Barry Smith wrote: > The "global" data contains function pointers which can't be trivially > shipped around. > You don't need to copy the function itself, just the pointer, because the binary is the same, and (I believe) AMPI requires this binary to be loaded in the same virtual address space in all physical processors. > The "global" data we are talking about is essentially read only once > it is initialized in PetscInitialize() Even though, "global" date is read-only, you'd have to make sure that the MPI rank (inside a thread) access its appropriate copy when it is running. Again, TLS may do the trick. Eduardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From vijay.m at gmail.com Tue Feb 3 17:38:52 2015 From: vijay.m at gmail.com (Vijay S. Mahadevan) Date: Tue, 3 Feb 2015 17:38:52 -0600 Subject: [petsc-users] AIJ ftn-kernels update Message-ID: When using configure options: --with-fortran-kernels=1 --with-fortran=1, the build fails with the following errors. 
/home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c: In function 'PetscErrorCode MatMultTransposeAdd_SeqAIJ(Mat, Vec, Vec, Vec)': /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1265:52: error: invalid conversion from 'const void*' to 'void*' [-fpermissive] fortranmulttransposeaddaij_(&m,x,a->i,a->j,a->a,y); ^ In file included from /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1240:0: /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/ftn-kernels/fmult.h:14:19: error: initializing argument 2 of 'void fortranmulttransposeaddaij_(PetscInt*, void*, PetscInt*, PetscInt*, void*, void*)' [-fpermissive] PETSC_EXTERN void fortranmulttransposeaddaij_(PetscInt*,void*,PetscInt*,PetscInt*,void*,void*); ^ /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c: In function 'PetscErrorCode MatMultAdd_SeqAIJ(Mat, Vec, Vec, Vec)': /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1601:41: error: invalid conversion from 'const PetscInt* {aka const int*}' to 'PetscInt* {aka int*}' [-fpermissive] fortranmultaddaij_(&m,x,ii,aj,aa,y,z); ^ In file included from /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1562:0: /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/ftn-kernels/fmultadd.h:11:19: error: initializing argument 3 of 'void fortranmultaddaij_(PetscInt*, const void*, PetscInt*, PetscInt*, const MatScalar*, void*, void*)' [-fpermissive] PETSC_EXTERN void fortranmultaddaij_(PetscInt*,const void*,PetscInt*,PetscInt*,const MatScalar*,void*,void*); ^ /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1601:41: error: invalid conversion from 'const PetscInt* {aka const int*}' to 'PetscInt* {aka int*}' [-fpermissive] fortranmultaddaij_(&m,x,ii,aj,aa,y,z); ^ In file included from /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1562:0: /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/ftn-kernels/fmultadd.h:11:19: error: initializing argument 4 of 'void fortranmultaddaij_(PetscInt*, const void*, PetscInt*, PetscInt*, const MatScalar*, void*, void*)' [-fpermissive] PETSC_EXTERN void fortranmultaddaij_(PetscInt*,const void*,PetscInt*,PetscInt*,const MatScalar*,void*,void*); ^ CXX standalone_test/obj/src/mat/impls/aij/seq/cholmod/aijcholmod.o CXX standalone_test/obj/src/mat/impls/aij/seq/ftn-auto/aijf.o CXX standalone_test/obj/src/mat/impls/aij/seq/crl/crl.o gmake[2]: *** [standalone_test/obj/src/mat/impls/aij/seq/aij.o] Error 1 gmake[2]: *** Waiting for unfinished jobs.... /home/vijaysm/code/petsc/src/mat/impls/aij/seq/crl/crl.c: In function 'PetscErrorCode MatMult_AIJCRL(Mat, Vec, Vec)': /home/vijaysm/code/petsc/src/mat/impls/aij/seq/crl/crl.c:133:43: error: invalid conversion from 'const PetscScalar* {aka const double*}' to 'PetscScalar* {aka double*}' [-fpermissive] fortranmultcrl_(&m,&rmax,x,y,icols,acols); ^ In file included from /home/vijaysm/code/petsc/src/mat/impls/aij/seq/crl/crl.c:94:0: /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/crl/ftn-kernels/fmultcrl.h:10:19: error: initializing argument 3 of 'void fortranmultcrl_(PetscInt*, PetscInt*, PetscScalar*, PetscScalar*, PetscInt*, PetscScalar*)' [-fpermissive] PETSC_EXTERN void fortranmultcrl_(PetscInt*,PetscInt*,PetscScalar*,PetscScalar*,PetscInt*,PetscScalar*); ^ gmake[2]: *** [standalone_test/obj/src/mat/impls/aij/seq/crl/crl.o] Error 1 Looks like some const related changes in the API didn't propagate onto the fortran kernels. The attached patch fixes it. If you need any logs, let me know. Vijay -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ftnkernel_const_fix.patch Type: text/x-patch Size: 1764 bytes Desc: not available URL: From sghosh2012 at gatech.edu Tue Feb 3 23:07:29 2015 From: sghosh2012 at gatech.edu (Ghosh, Swarnava) Date: Wed, 4 Feb 2015 00:07:29 -0500 (EST) Subject: [petsc-users] Transpose of rectangular dense parallel matrix In-Reply-To: <608815872.1189742.1423003966840.JavaMail.root@mail.gatech.edu> Message-ID: <489295368.1300073.1423026449537.JavaMail.root@mail.gatech.edu> Hi, I am trying to calculate the transpose of a dense rectangular matrix (pSddft->YOrb, size=Npts x Nstates) and then MatMatMult I am creating the dense matrix first of size (Nstates x Npts) and then doing an inplace transpose. Both the dense rectangular matrices have the same parallel communicator PetscObjectComm((PetscObject)pSddft->da). The following steps are the steps PetscInt rowloc,colloc; MatGetLocalSize(pSddft->YOrb,&rowloc,&colloc); MatCreate(PetscObjectComm((PetscObject)pSddft->da),&pSddft->YOrbTranspose); MatSetSizes(pSddft->YOrbTranspose,colloc,rowloc,PETSC_DETERMINE,PETSC_DETERMINE); MatSetType(pSddft->YOrbTranspose,MATDENSE); MatSetFromOptions(pSddft->YOrbTranspose); MatSetUp(pSddft->YOrbTranspose); MatTranspose(pSddft->YOrb,MAT_INITIAL_MATRIX,&pSddft->YOrbTranspose); MatMatMultNumeric(pSddft->YOrbTranspose,HpsiMat,HsubDense); The matrix HpsiMat has the same parallel communicator as pSddft->YOrb The code works fine on 1 core but I am getting segmentation fault in the MatMatMultNumeric step for more than 1 cores. So I think the problem is due to the way I am setting up the communicator of transpose matrix. Could you please tell me if there is a general way of creating a transpose of a rectangular dense parallel matrix and use it for matrix matrix multiplication? -- Swarnava Ghosh PhD Candidate, Structural Engineering, Mechanics and Materials School of Civil and Environmental Engineering Georgia Institute of Technology Atlanta, GA 30332 From hzhang at mcs.anl.gov Tue Feb 3 23:11:42 2015 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 3 Feb 2015 23:11:42 -0600 Subject: [petsc-users] Transpose of rectangular dense parallel matrix In-Reply-To: <489295368.1300073.1423026449537.JavaMail.root@mail.gatech.edu> References: <608815872.1189742.1423003966840.JavaMail.root@mail.gatech.edu> <489295368.1300073.1423026449537.JavaMail.root@mail.gatech.edu> Message-ID: Ghosh: For parallel dense matrix-matrix operations, suggest using Elemental package http://libelemental.org Hong > > > I am trying to calculate the transpose of a dense rectangular matrix > (pSddft->YOrb, size=Npts x Nstates) and then MatMatMult > I am creating the dense matrix first of size (Nstates x Npts) and then > doing an inplace transpose. > Both the dense rectangular matrices have the same parallel communicator > PetscObjectComm((PetscObject)pSddft->da). 
> > The following steps are the steps > > > PetscInt rowloc,colloc; > MatGetLocalSize(pSddft->YOrb,&rowloc,&colloc); > > > MatCreate(PetscObjectComm((PetscObject)pSddft->da),&pSddft->YOrbTranspose); > > > MatSetSizes(pSddft->YOrbTranspose,colloc,rowloc,PETSC_DETERMINE,PETSC_DETERMINE); > MatSetType(pSddft->YOrbTranspose,MATDENSE); > MatSetFromOptions(pSddft->YOrbTranspose); > MatSetUp(pSddft->YOrbTranspose); > > > MatTranspose(pSddft->YOrb,MAT_INITIAL_MATRIX,&pSddft->YOrbTranspose); > > MatMatMultNumeric(pSddft->YOrbTranspose,HpsiMat,HsubDense); > > The matrix HpsiMat has the same parallel communicator as pSddft->YOrb > The code works fine on 1 core but I am getting segmentation fault in the > MatMatMultNumeric step for more than 1 cores. > So I think the problem is due to the way I am setting up the communicator > of transpose matrix. > > Could you please tell me if there is a general way of creating a transpose > of a rectangular dense parallel matrix and use it for matrix matrix > multiplication? > > > > > > > > > > > -- > Swarnava Ghosh > PhD Candidate, > Structural Engineering, Mechanics and Materials > School of Civil and Environmental Engineering > Georgia Institute of Technology > Atlanta, GA 30332 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bichinhoverde at spwinternet.com.br Wed Feb 4 01:09:50 2015 From: bichinhoverde at spwinternet.com.br (bichinhoverde) Date: Wed, 4 Feb 2015 05:09:50 -0200 Subject: [petsc-users] DMCreateGlobalVector and DMGetGlobalVector Message-ID: Hi. I have some questions. What is the difference between DMCreateGlobalVector and DMGetGlobalVector (and the local counterparts)? What happens when one calls SNESSolve with NULL for the solution vector, as in src/snes/examples/tutorials/ex7.c:158? SNESSolve(snes,NULL,NULL); Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Feb 4 01:16:58 2015 From: jed at jedbrown.org (Jed Brown) Date: Wed, 04 Feb 2015 00:16:58 -0700 Subject: [petsc-users] DMCreateGlobalVector and DMGetGlobalVector In-Reply-To: References: Message-ID: <87egq6du39.fsf@jedbrown.org> bichinhoverde writes: > Hi. I have some questions. > > What is the difference between DMCreateGlobalVector and DMGetGlobalVector > (and the local counterparts)? Create creates a vector that the caller owns. Get merely gets access to a vector from a managed pool (creating it if necessary), to be returned via DMRestoreGlobalVector(). > What happens when one calls SNESSolve with NULL for the solution vector, as > in src/snes/examples/tutorials/ex7.c:158? SNESSolve(snes,NULL,NULL); A vector is created automatically. You can get access to it with SNESGetSolution. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bichinhoverde at spwinternet.com.br Wed Feb 4 01:28:28 2015 From: bichinhoverde at spwinternet.com.br (bichinhoverde) Date: Wed, 4 Feb 2015 05:28:28 -0200 Subject: [petsc-users] DMCreateGlobalVector and DMGetGlobalVector In-Reply-To: <87egq6du39.fsf@jedbrown.org> References: <87egq6du39.fsf@jedbrown.org> Message-ID: Ok, but from a PETSc user perspective, what is the difference between create and get? When should I use get and when should I use create? Can I call create several times to create several vectors? Is it the same as creating one and then duplicating? Can I call get several times to get several vectors? 
Is it the same as getting one and then duplicating? If I replace all gets with creates, or all creates with gets in my code, what will change? On Wed, Feb 4, 2015 at 5:16 AM, Jed Brown wrote: > bichinhoverde writes: > > > Hi. I have some questions. > > > > What is the difference between DMCreateGlobalVector and DMGetGlobalVector > > (and the local counterparts)? > > Create creates a vector that the caller owns. Get merely gets access to > a vector from a managed pool (creating it if necessary), to be returned > via DMRestoreGlobalVector(). > > > What happens when one calls SNESSolve with NULL for the solution vector, > as > > in src/snes/examples/tutorials/ex7.c:158? SNESSolve(snes,NULL,NULL); > > A vector is created automatically. You can get access to it with > SNESGetSolution. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Wed Feb 4 03:25:28 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Wed, 4 Feb 2015 10:25:28 +0100 Subject: [petsc-users] DMCreateGlobalVector and DMGetGlobalVector In-Reply-To: References: <87egq6du39.fsf@jedbrown.org> Message-ID: On Wednesday, 4 February 2015, bichinhoverde < bichinhoverde at spwinternet.com.br> wrote: > Ok, but from a PETSc user perspective, what is the difference between > create and get? > > When should I use get and when should I use create? > It's a memory saving optimization. It's a cache of vectors you can use. It's clever as it lets you reuse data rather always create/destroying objects. > > Can I call create several times to create several vectors? Is it the same > as creating one and then duplicating? > > Yes to both > Can I call get several times to get several vectors? Is it the same as > getting one and then duplicating? > > Functionally, both approaches are equivalent. However duplicating always allocated new memory thus the total memory footprint will increase. > If I replace all gets with creates, or all creates with gets in my code, > what will change? > If you change all creates to gets, probably the most notable difference would be the memory usage. Plus whatever extra time is required for creating which isn't incurred when you re use vectors. Note that the DMGetVec will NOT initialize the entries to zero (unlike the Create variants). The user is responsible for that task. > > > > > On Wed, Feb 4, 2015 at 5:16 AM, Jed Brown > wrote: > >> bichinhoverde > > >> writes: >> >> > Hi. I have some questions. >> > >> > What is the difference between DMCreateGlobalVector and >> DMGetGlobalVector >> > (and the local counterparts)? >> >> Create creates a vector that the caller owns. Get merely gets access to >> a vector from a managed pool (creating it if necessary), to be returned >> via DMRestoreGlobalVector(). >> >> > What happens when one calls SNESSolve with NULL for the solution >> vector, as >> > in src/snes/examples/tutorials/ex7.c:158? SNESSolve(snes,NULL,NULL); >> >> A vector is created automatically. You can get access to it with >> SNESGetSolution. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bichinhoverde at spwinternet.com.br Wed Feb 4 03:35:19 2015 From: bichinhoverde at spwinternet.com.br (bichinhoverde) Date: Wed, 4 Feb 2015 07:35:19 -0200 Subject: [petsc-users] DMCreateGlobalVector and DMGetGlobalVector In-Reply-To: References: <87egq6du39.fsf@jedbrown.org> Message-ID: Got it. Thank you. 
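A minimal sketch of the two usage patterns described above, assuming an existing DM called dm (the name is only for illustration):

    Vec            w, x;
    PetscErrorCode ierr;

    /* Scratch vector: borrow one from the DM's pool and hand it back. */
    ierr = DMGetGlobalVector(dm,&w);CHKERRQ(ierr);      /* reuses a cached vector if one is free */
    ierr = VecSet(w,0.0);CHKERRQ(ierr);                 /* Get does not zero the entries */
    /* ... use w as temporary work space ... */
    ierr = DMRestoreGlobalVector(dm,&w);CHKERRQ(ierr);  /* back to the pool; do not VecDestroy it */

    /* Long-lived vector (e.g. the solution): create it and own it. */
    ierr = DMCreateGlobalVector(dm,&x);CHKERRQ(ierr);   /* caller owns x */
    /* ... keep x for the lifetime of the computation ... */
    ierr = VecDestroy(&x);CHKERRQ(ierr);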
On Wed, Feb 4, 2015 at 7:25 AM, Dave May wrote: > > > On Wednesday, 4 February 2015, bichinhoverde < > bichinhoverde at spwinternet.com.br> wrote: > >> Ok, but from a PETSc user perspective, what is the difference between >> create and get? >> >> When should I use get and when should I use create? >> > > It's a memory saving optimization. > It's a cache of vectors you can use. It's clever as it lets you reuse data > rather always create/destroying objects. > >> >> Can I call create several times to create several vectors? Is it the same >> as creating one and then duplicating? >> >> Yes to both > > >> Can I call get several times to get several vectors? Is it the same as >> getting one and then duplicating? >> >> > Functionally, both approaches are equivalent. However duplicating always > allocated new memory thus the total memory footprint will increase. > > > >> If I replace all gets with creates, or all creates with gets in my code, >> what will change? >> > > If you change all creates to gets, probably the most notable difference > would be the memory usage. Plus whatever extra time is required for > creating which isn't incurred when you re use vectors. > > Note that the DMGetVec will NOT initialize the entries to zero (unlike the > Create variants). The user is responsible for that task. > > >> >> >> >> >> On Wed, Feb 4, 2015 at 5:16 AM, Jed Brown wrote: >> >>> bichinhoverde writes: >>> >>> > Hi. I have some questions. >>> > >>> > What is the difference between DMCreateGlobalVector and >>> DMGetGlobalVector >>> > (and the local counterparts)? >>> >>> Create creates a vector that the caller owns. Get merely gets access to >>> a vector from a managed pool (creating it if necessary), to be returned >>> via DMRestoreGlobalVector(). >>> >>> > What happens when one calls SNESSolve with NULL for the solution >>> vector, as >>> > in src/snes/examples/tutorials/ex7.c:158? SNESSolve(snes,NULL,NULL); >>> >>> A vector is created automatically. You can get access to it with >>> SNESGetSolution. >>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sanjay.Kharche at manchester.ac.uk Wed Feb 4 05:39:14 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Wed, 4 Feb 2015 11:39:14 +0000 Subject: [petsc-users] VTK output Message-ID: Dear All I am working through the various library dependencies to permit me to use the Petsc VTK output functions. Based on errors I saw (pasted below), I think I need a working VTK installation on my fedora 15 before the PetscVTK functions will work - is this correct? My preference is basic binary VTK in 2D and 3D, but ASCII will be good for manually checking also. Do I need to configure and build petsc with vtk? How to do this? I tried the following, and the configure did not download VTK and it did not give any error messages either. The code for the petsc output I am trying to generate is follows the configure line. 
thanks Sanjay my configure: ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --download-sundials --download-scalapack --download-vtk --with-c2html=0 my VTK output code: PetscViewer viewer; PetscViewerVTKOpen(PETSC_COMM_WORLD, "sk.vtk",FILE_MODE_WRITE,&viewer); VecView(u,viewer); PetscViewerDestroy(&viewer); at which point I get errors (these are all the errors I got): [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: No support for format 'ASCII_VTK' [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --download-sundials --download-scalapack --download-vtk --with-c2html=0 [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in /home/sanjay/petsc/src/dm/impls/da/grvtk.c [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: #3 PetscViewerFlush() line 30 in /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: No support for format 'ASCII_VTK' [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --download-sundials --download-scalapack --download-vtk --with-c2html=0 [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in /home/sanjay/petsc/src/dm/impls/da/grvtk.c [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: #3 PetscViewerFlush() line 30 in /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: No support for format 'ASCII_VTK' [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: No support for format 'ASCII_VTK' [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 [0]PETSC ERROR: [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --download-sundials --download-scalapack --download-vtk --with-c2html=0 [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in /home/sanjay/petsc/src/dm/impls/da/grvtk.c [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --download-sundials --download-scalapack --download-vtk --with-c2html=0 [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in /home/sanjay/petsc/src/dm/impls/da/grvtk.c [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: #3 PetscViewerFlush() line 30 in /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c #3 PetscViewerFlush() line 30 in /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c From jed at jedbrown.org Wed Feb 4 08:08:15 2015 From: jed at jedbrown.org (Jed Brown) Date: Wed, 04 Feb 2015 07:08:15 -0700 Subject: [petsc-users] VTK output In-Reply-To: References: Message-ID: <878ugdepm8.fsf@jedbrown.org> Sanjay Kharche writes: > Dear All > > I am working through the various library dependencies to permit me to > use the Petsc VTK output functions. Based on errors I saw (pasted > below), I think I need a working VTK installation on my fedora 15 > before the PetscVTK functions will work - is this correct? No. > My preference is basic binary VTK in 2D and 3D, but ASCII will be good > for manually checking also. The ASCII VTK format has been deprecated by VTK since almost the beginning of time. Use an XML format (*.vts or *.vtr for DMDA). DMPlex supports *.vtk for historical reasons, though only in serial. If you use DMPlex, you should write *.vtu. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From andrew at spott.us Wed Feb 4 12:32:41 2015 From: andrew at spott.us (Andrew Spott) Date: Wed, 04 Feb 2015 10:32:41 -0800 (PST) Subject: [petsc-users] slepc and overall phase of computed eigenvectors Message-ID: <1423074760111.968fbc04@Nodemailer> When I compute the eigenvectors of a real symmetric matrix, I?m getting eigenvectors that are rotated by approximately pi/4 in the complex plane. ?So what could be purely real eigenvectors have some overall phase factor. Why is that? ?And is there a way to prevent this overall phase factor? -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Feb 4 12:49:34 2015 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Wed, 4 Feb 2015 19:49:34 +0100 Subject: [petsc-users] slepc and overall phase of computed eigenvectors In-Reply-To: <1423074760111.968fbc04@Nodemailer> References: <1423074760111.968fbc04@Nodemailer> Message-ID: <0C6CC6E4-D577-40A5-A95C-2072ACF316BF@dsic.upv.es> El 04/02/2015, a las 19:32, Andrew Spott escribi?: > When I compute the eigenvectors of a real symmetric matrix, I?m getting eigenvectors that are rotated by approximately pi/4 in the complex plane. So what could be purely real eigenvectors have some overall phase factor. > > Why is that? And is there a way to prevent this overall phase factor? > > -Andrew > Eigenvectors are normalized to have 2-norm equal to one. Complex eigenvectors may be scaled by any complex scalar of modulus 1. When computing eigenvectors of a real symmetric matrix in complex arithmetic, the solver cannot control this because the matrix is not checked to be real. Since you know it is real, you could do a postprocessing that scales the eigenvectors as you wish. This is done in function FixSign() in this example: http://slepc.upv.es/documentation/current/src/nep/examples/tutorials/ex20.c.html Jose From andrew at spott.us Wed Feb 4 13:12:31 2015 From: andrew at spott.us (Andrew Spott) Date: Wed, 04 Feb 2015 11:12:31 -0800 (PST) Subject: [petsc-users] slepc and overall phase of computed eigenvectors In-Reply-To: <0C6CC6E4-D577-40A5-A95C-2072ACF316BF@dsic.upv.es> References: <0C6CC6E4-D577-40A5-A95C-2072ACF316BF@dsic.upv.es> Message-ID: <1423077150626.135debc@Nodemailer> Thanks. As an aside, another way that seems to work is to set the initial vector to a random REAL vector. ?It seems to also fix this problem. ?Though I don?t know how robust it is. -Andrew On Wed, Feb 4, 2015 at 11:49 AM, Jose E. Roman wrote: > El 04/02/2015, a las 19:32, Andrew Spott escribi?: >> When I compute the eigenvectors of a real symmetric matrix, I?m getting eigenvectors that are rotated by approximately pi/4 in the complex plane. So what could be purely real eigenvectors have some overall phase factor. >> >> Why is that? And is there a way to prevent this overall phase factor? >> >> -Andrew >> > Eigenvectors are normalized to have 2-norm equal to one. Complex eigenvectors may be scaled by any complex scalar of modulus 1. When computing eigenvectors of a real symmetric matrix in complex arithmetic, the solver cannot control this because the matrix is not checked to be real. > Since you know it is real, you could do a postprocessing that scales the eigenvectors as you wish. This is done in function FixSign() in this example: http://slepc.upv.es/documentation/current/src/nep/examples/tutorials/ex20.c.html > Jose > -------------- next part -------------- An HTML attachment was scrubbed... 
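In the spirit of the FixSign() postprocessing Jose points to, a rough sketch (complex scalars assumed; the function name and the choice of the first entry are illustrative, and the first locally owned entry on rank 0 is assumed to be nonzero) that rotates a computed eigenvector so its first entry becomes real and non-negative:

    #include <petscvec.h>

    PetscErrorCode NormalizePhase(Vec x)
    {
      PetscMPIInt       rank;
      PetscScalar       alpha = 1.0;
      const PetscScalar *px;
      PetscErrorCode    ierr;

      PetscFunctionBeginUser;
      ierr = MPI_Comm_rank(PetscObjectComm((PetscObject)x),&rank);CHKERRQ(ierr);
      if (!rank) {
        ierr  = VecGetArrayRead(x,&px);CHKERRQ(ierr);
        alpha = px[0]/PetscAbsScalar(px[0]);            /* unit-modulus phase of the first entry */
        ierr  = VecRestoreArrayRead(x,&px);CHKERRQ(ierr);
      }
      ierr = MPI_Bcast(&alpha,1,MPIU_SCALAR,0,PetscObjectComm((PetscObject)x));CHKERRQ(ierr);
      ierr = VecScale(x,1.0/alpha);CHKERRQ(ierr);       /* first entry is now real and >= 0 */
      PetscFunctionReturn(0);
    }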
URL: From bichinhoverde at spwinternet.com.br Wed Feb 4 13:11:54 2015 From: bichinhoverde at spwinternet.com.br (bichinhoverde) Date: Wed, 4 Feb 2015 17:11:54 -0200 Subject: [petsc-users] VTK output In-Reply-To: References: Message-ID: If you want ASCII VTK, try something like this: PetscViewerASCIIOpen(PETSC_COMM_WORLD, "testing.vtk", &viewer); PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_VTK); DMDASetUniformCoordinates(params.da, 0.0, params.Lx, 0.0, params.Ly, 0.0, 0.0); DMView(params.da, viewer); PetscObjectSetName((PetscObject) params.u_n, "u_n"); VecView(params.u_n, viewer); PetscObjectSetName((PetscObject) params.u_np1, "u_np1"); VecView(params.u_np1, viewer); PetscObjectSetName((PetscObject) params.cell_type, "cell_type"); VecView(params.cell_type, viewer); PetscViewerDestroy(&viewer); On Wed, Feb 4, 2015 at 9:39 AM, Sanjay Kharche < Sanjay.Kharche at manchester.ac.uk> wrote: > > Dear All > > I am working through the various library dependencies to permit me to use > the Petsc VTK output functions. Based on errors I saw (pasted below), I > think I need a working VTK installation on my fedora 15 before the PetscVTK > functions will work - is this correct? My preference is basic binary VTK in > 2D and 3D, but ASCII will be good for manually checking also. > > Do I need to configure and build petsc with vtk? How to do this? I tried > the following, and the configure did not download VTK and it did not give > any error messages either. The code for the petsc output I am trying to > generate is follows the configure line. > > thanks > Sanjay > > my configure: > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > --download-fblaslapack --download-mpich --download-sundials > --download-scalapack --download-vtk --with-c2html=0 > > > my VTK output code: > > PetscViewer viewer; > PetscViewerVTKOpen(PETSC_COMM_WORLD, "sk.vtk",FILE_MODE_WRITE,&viewer); > VecView(u,viewer); > PetscViewerDestroy(&viewer); > > at which point I get errors (these are all the errors I got): > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: No support for format 'ASCII_VTK' > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named > sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack --download-mpich > --download-sundials --download-scalapack --download-vtk --with-c2html=0 > [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in > /home/sanjay/petsc/src/dm/impls/da/grvtk.c > [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in > /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: #3 PetscViewerFlush() line 30 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c > [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: No support for format 'ASCII_VTK' > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named > sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack --download-mpich > --download-sundials --download-scalapack --download-vtk --with-c2html=0 > [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in > /home/sanjay/petsc/src/dm/impls/da/grvtk.c > [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in > /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: #3 PetscViewerFlush() line 30 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c > [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: No support for format 'ASCII_VTK' > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named > sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: No support for format 'ASCII_VTK' > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./sk2d on a linux-gnu-c-debug named > sanjayslaptop.maths.liv.ac.uk by sanjay Wed Feb 4 11:37:26 2015 > [0]PETSC ERROR: [0]PETSC ERROR: Configure options --with-cc=gcc > --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich > --download-sundials --download-scalapack --download-vtk --with-c2html=0 > [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in > /home/sanjay/petsc/src/dm/impls/da/grvtk.c > [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in > /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack --download-mpich > --download-sundials --download-scalapack --download-vtk --with-c2html=0 > [0]PETSC ERROR: #1 DMDAVTKWriteAll() line 487 in > /home/sanjay/petsc/src/dm/impls/da/grvtk.c > [0]PETSC ERROR: #2 PetscViewerFlush_VTK() line 78 in > /home/sanjay/petsc/src/sys/classes/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: #3 PetscViewerFlush() line 30 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c > [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c > #3 PetscViewerFlush() line 30 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/flush.c > [0]PETSC ERROR: #4 PetscViewerDestroy() line 100 in > /home/sanjay/petsc/src/sys/classes/viewer/interface/view.c > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Wed Feb 4 13:43:38 2015 From: jychang48 at gmail.com (Justin Chang) Date: Wed, 4 Feb 2015 13:43:38 -0600 Subject: [petsc-users] Transient FEM diffusion Message-ID: Hi all, Are there any any TS examples or tests for FEM using DMPlex? 
I want to solve a transient diffusion problem of the form below: du/dt = div[grad[u]] + f using backward euler method. I am curious as to how to implement this, namely into the pointwise functions. Working off of SNES ex12, if I understand this correctly, would I do something like the following: 1) Modify the f0 function such that f0[0] = u_t[0] + 2) Add a g0_uu function such that g0[0] = 1.0 3) Setup the TS solver in the main function Am I missing something crucial here, or am I kind of along the right track? Thanks, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 4 13:49:56 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Feb 2015 13:49:56 -0600 Subject: [petsc-users] Transient FEM diffusion In-Reply-To: References: Message-ID: On Wed, Feb 4, 2015 at 1:43 PM, Justin Chang wrote: > Hi all, > > Are there any any TS examples or tests for FEM using DMPlex? I want to > solve a transient diffusion problem of the form below: > Sorry, I have not had time to set these up yet. > du/dt = div[grad[u]] + f > > using backward euler method. I am curious as to how to implement this, > namely into the pointwise functions. Working off of SNES ex12, if I > understand this correctly, would I do something like the following: > > 1) Modify the f0 function such that f0[0] = u_t[0] + > Yes, exactly. > 2) Add a g0_uu function such that g0[0] = 1.0 > Yes. > 3) Setup the TS solver in the main function > Replace SNES with TS and use ierr = DMTSSetIFunctionLocal(dm, DMPlexTSComputeIFunctionFEM, &user);CHKERRQ(ierr); Let me know if anything does not work. Thanks, Matt > Am I missing something crucial here, or am I kind of along the right track? > > Thanks, > Justin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 4 14:04:44 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 4 Feb 2015 14:04:44 -0600 Subject: [petsc-users] AIJ ftn-kernels update In-Reply-To: References: Message-ID: Thanks. Now in master and next > On Feb 3, 2015, at 5:38 PM, Vijay S. Mahadevan wrote: > > When using configure options: --with-fortran-kernels=1 > --with-fortran=1, the build fails with the following errors. 
> > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c: In function > 'PetscErrorCode MatMultTransposeAdd_SeqAIJ(Mat, Vec, Vec, Vec)': > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1265:52: error: > invalid conversion from 'const void*' to 'void*' [-fpermissive] > fortranmulttransposeaddaij_(&m,x,a->i,a->j,a->a,y); > ^ > In file included from > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1240:0: > /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/ftn-kernels/fmult.h:14:19: > error: initializing argument 2 of 'void > fortranmulttransposeaddaij_(PetscInt*, void*, PetscInt*, PetscInt*, > void*, void*)' [-fpermissive] > PETSC_EXTERN void > fortranmulttransposeaddaij_(PetscInt*,void*,PetscInt*,PetscInt*,void*,void*); > ^ > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c: In function > 'PetscErrorCode MatMultAdd_SeqAIJ(Mat, Vec, Vec, Vec)': > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1601:41: error: > invalid conversion from 'const PetscInt* {aka const int*}' to > 'PetscInt* {aka int*}' [-fpermissive] > fortranmultaddaij_(&m,x,ii,aj,aa,y,z); > ^ > In file included from > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1562:0: > /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/ftn-kernels/fmultadd.h:11:19: > error: initializing argument 3 of 'void > fortranmultaddaij_(PetscInt*, const void*, PetscInt*, PetscInt*, const > MatScalar*, void*, void*)' [-fpermissive] > PETSC_EXTERN void fortranmultaddaij_(PetscInt*,const > void*,PetscInt*,PetscInt*,const MatScalar*,void*,void*); > ^ > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1601:41: error: > invalid conversion from 'const PetscInt* {aka const int*}' to > 'PetscInt* {aka int*}' [-fpermissive] > fortranmultaddaij_(&m,x,ii,aj,aa,y,z); > ^ > In file included from > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/aij.c:1562:0: > /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/ftn-kernels/fmultadd.h:11:19: > error: initializing argument 4 of 'void > fortranmultaddaij_(PetscInt*, const void*, PetscInt*, PetscInt*, const > MatScalar*, void*, void*)' [-fpermissive] > PETSC_EXTERN void fortranmultaddaij_(PetscInt*,const > void*,PetscInt*,PetscInt*,const MatScalar*,void*,void*); > ^ > CXX standalone_test/obj/src/mat/impls/aij/seq/cholmod/aijcholmod.o > CXX standalone_test/obj/src/mat/impls/aij/seq/ftn-auto/aijf.o > CXX standalone_test/obj/src/mat/impls/aij/seq/crl/crl.o > gmake[2]: *** [standalone_test/obj/src/mat/impls/aij/seq/aij.o] Error 1 > gmake[2]: *** Waiting for unfinished jobs.... > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/crl/crl.c: In function > 'PetscErrorCode MatMult_AIJCRL(Mat, Vec, Vec)': > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/crl/crl.c:133:43: > error: invalid conversion from 'const PetscScalar* {aka const > double*}' to 'PetscScalar* {aka double*}' [-fpermissive] > fortranmultcrl_(&m,&rmax,x,y,icols,acols); > ^ > In file included from > /home/vijaysm/code/petsc/src/mat/impls/aij/seq/crl/crl.c:94:0: > /home/vijaysm/code/petsc/include/../src/mat/impls/aij/seq/crl/ftn-kernels/fmultcrl.h:10:19: > error: initializing argument 3 of 'void fortranmultcrl_(PetscInt*, > PetscInt*, PetscScalar*, PetscScalar*, PetscInt*, PetscScalar*)' > [-fpermissive] > PETSC_EXTERN void > fortranmultcrl_(PetscInt*,PetscInt*,PetscScalar*,PetscScalar*,PetscInt*,PetscScalar*); > ^ > gmake[2]: *** [standalone_test/obj/src/mat/impls/aij/seq/crl/crl.o] Error 1 > > Looks like some const related changes in the API didn't propagate onto > the fortran kernels. 
> The attached patch fixes it. If you need any
> logs, let me know.
>
> Vijay
>

From gabel.fabian at gmail.com  Wed Feb  4 14:34:41 2015
From: gabel.fabian at gmail.com (Fabian Gabel)
Date: Wed, 04 Feb 2015 21:34:41 +0100
Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm
In-Reply-To: 
References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com>
Message-ID: <1423082081.3096.6.camel@gmail.com>

Thank you for pointing me in the right direction. After some first tests on a
test case with 2e6 cells (4 dof) I could measure a slight improvement (25%)
with respect to wall time, using a nested field split for the velocities:

-coupledsolve_pc_type fieldsplit
-coupledsolve_pc_fieldsplit_0_fields 0,1,2
-coupledsolve_pc_fieldsplit_1_fields 3
-coupledsolve_pc_fieldsplit_type schur
-coupledsolve_pc_fieldsplit_block_size 4
-coupledsolve_fieldsplit_0_ksp_converged_reason
-coupledsolve_fieldsplit_1_ksp_converged_reason
-coupledsolve_fieldsplit_0_ksp_type gmres
-coupledsolve_fieldsplit_0_pc_type fieldsplit
-coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3
-coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml
-coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml
-coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml

Is it normal that I have to explicitly specify the block size for each
fieldsplit?

I attached the results (-converged_reason only, for readability, and another
file solely for the output of -ksp_view). I am not sure if this result could
be improved by modifying any of the solver options. Are there any guidelines
to follow that I could use to avoid taking wild guesses?

> Petsc has some support to generate approximate pressure schur
> complements for you, but these will not be as good as the ones
> specifically constructed for your particular discretization.

I came across a tutorial (/snes/examples/tutorials/ex70.c), which shows 2
different approaches:

1- provide a preconditioner \hat{S}p for the approximation of the true Schur
   complement
2- use another matrix (in this case it is the matrix used for constructing the
   preconditioner in the former approach) as a new approximation of the Schur
   complement.

Speaking in terms of the PETSc manual, p. 87, looking at the factorization of
the Schur field split preconditioner, approach 1 sets \hat{S}p while approach 2
furthermore sets \hat{S}. Is this correct?

>
> [2] If you assembled a different operator for your preconditioner in
> which the B_pp slot contained a pressure schur complement
> approximation, you could use the simpler and likely more robust option
> (assuming you know of a decent schur complement approximation for your
> discretisation and physical problem)
>
> -coupledsolve_pc_type fieldsplit
> -coupledsolve_pc_fieldsplit_type MULTIPLICATIVE
>
> which includes your U-p coupling, or just
>
> -coupledsolve_pc_fieldsplit_type ADDITIVE
>
> which would define the following preconditioner
>
> inv(B) = diag( inv(B_uu) , inv(B_vv) , inv(B_ww) , inv(B_pp) )

What do you refer to with "B_pp slot"? I don't understand this approach
completely. What would I need a Schur complement approximation for, if I don't
use a Schur complement preconditioner?
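For reference, approach 1 above corresponds to handing PCFIELDSPLIT a
user-assembled matrix as the preconditioning matrix for the Schur complement
block, while the Schur complement itself remains the implicitly defined
operator. The sketch below illustrates this; the helper name SetUserSchurPre
and the assumption that Sp has already been assembled with the pressure-block
layout (and that the splits are defined as in the options above, e.g.
-coupledsolve_pc_fieldsplit_0_fields 0,1,2 and _1_fields 3) are illustrative
choices, not code from this thread.

#include <petscksp.h>

/* Sketch: use a user-provided matrix Sp (an approximation \hat{S}p) to build
   the preconditioner for the Schur complement solve, instead of A11.
   Assumes ksp is the coupled solver (prefix coupledsolve_) and the field
   splits have already been defined. */
PetscErrorCode SetUserSchurPre(KSP ksp, Mat Sp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, Sp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The matching runtime switch is -coupledsolve_pc_fieldsplit_schur_precondition
user (other values include a11 and self, and selfp where available); the inner
velocity and Schur solves are still controlled through the
-coupledsolve_fieldsplit_0_* and -coupledsolve_fieldsplit_1_* prefixes shown
above.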
> Option 2 would be better as your operator doesn't have a u_i-u_j, i != j
> coupling and you could use efficient AMG implementations for each scalar
> term associated with the u-u, v-v, w-w coupled terms without having to
> split again.
>
> Also, fieldsplit will not be aware of the fact that the Auu, Avv, Aww
> blocks are all identical - thus it cannot do anything "smart" in order
> to save memory. Accordingly, the KSP defined for each u,v,w split will
> be a unique KSP object. If your A_ii are all identical and you want to
> save memory, you could use MatNest but as Matt will always yell out,
> "MatNest is ONLY a memory optimization and should ONLY be used once
> all solver exploration/testing is performed".

Thanks, I will keep this in mind. Does this mean that I would only have to
assemble one matrix for the velocities instead of three?

> > - A_pp is defined as the matrix resulting from the discretization of the
> > pressure equation that considers only the pressure related terms.
>
> Hmm okay, I assumed for incompressible NS that the pressure equation
> would be just \div(u) = 0.

Indeed, many finite element(!) formulations I found while researching use this
approach, which leads to the block A_pp being zero. I however use a collocated
finite volume formulation and, to avoid checkerboarding of the pressure field,
I deploy a pressure-weighted interpolation method to approximate the velocities
arising from the discretisation of \div(u). This gives me an equation with the
pressure as the dominant variable.
-------------- next part --------------
Sender: LSF System 
Subject: Job 408293: in cluster Done

Job was submitted from host by user in cluster .
Job was executed on host(s) , in queue , as user in cluster .
 was used as the home directory.
 was used as the working directory.
Started at Wed Feb 4 09:41:27 2015
Results reported at Wed Feb 4 11:23:23 2015

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#! /bin/sh
#BSUB -J fieldsplit_128
#BSUB -o /home/gu08vomo/thesis/fieldsplit/cpld_128.out.%J
#BSUB -n 1
#BSUB -W 24:00
#BSUB -x
#BSUB -q test_mpi2
#BSUB -a openmpi

module load openmpi/intel/1.8.2
export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr
export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1/
export OUTPUTDIR=/home/gu08vomo/thesis/coupling

export PETSC_OPS="-options_file ops"
#export PETSC_OPS="-options_file ops.def"

echo "PETSC_DIR="$PETSC_DIR
echo "MYWORKDIR="$MYWORKDIR
echo "PETSC_OPS="$PETSC_OPS

cd $MYWORKDIR
mpirun -n 1 ./caffa3d.cpld.lnx ${PETSC_OPS}

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time :               6115.59 sec.
    Max Memory :             11949 MB
    Average Memory :         11588.18 MB
    Total Requested Memory : -
    Delta Memory :           -
    (Delta: the difference between total requested memory and actual max usage.)
Max Swap : 12826 MB Max Processes : 6 Max Threads : 11 The output (if any) follows: Modules: loading gcc/4.8.3 Modules: loading intel/2015 Modules: loading openmpi/intel/1.8.2 PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1/ PETSC_OPS=-options_file ops ENTER PROBLEM NAME (SIX CHARACTERS): **************************************************** NAME OF PROBLEM SOLVED control **************************************************** *************************************************** CONTROL SETTINGS *************************************************** LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD F F T F F F F F IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD 8 9 8 1 0 2 2 3 1 1 1 SORMAX, SLARGE, ALFA 0.1000E-07 0.1000E+31 0.9200E+00 (URF(I),I=1,6) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 (SOR(I),I=1,6) 0.1000E-02 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-07 (GDS(I),I=1,6) - BLENDING (CDS-UDS) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 LSG 20 *************************************************** START COUPLED ALGORITHM *************************************************** Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 4 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.037040763958e+04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 4 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 1.820301124259e+02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.918291450815e+00 0000001 0.1000E+01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 4 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.023809154509e+04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 4 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 3.435441118311e+01 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 3.845041351098e-01 0000002 0.2240E+00 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.253367530914e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 1.035050845597e+01 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 7.886700674837e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 5.051143145928e-04 0000003 0.4427E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL 
iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.192891689143e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.373785312405e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.508617145525e-02 0000004 0.1864E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.063056228890e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 5.221617643142e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.702817267846e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.094137462022e-04 0000005 0.4104E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.062202020432e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.326846513717e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.531682932145e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.029941882864e-04 0000006 0.1801E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061334126191e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.337871426870e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.534097657692e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.029664077719e-04 0000007 0.4160E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061324173792e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.315857858273e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.537945131376e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031173163530e-04 0000008 0.1848E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061317257655e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314492943454e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538683173899e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031713833017e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.599982126738e-07 0000009 0.4450E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061317147625e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314373274276e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538651291802e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031589080118e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601032727860e-07 0000010 0.1982E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061317055113e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314199309723e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538699611154e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031690161587e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601173420863e-07 0000011 0.5015E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061317068473e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314228017250e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538741285541e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031619248943e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601008760122e-07 0000012 0.2225E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061317075229e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314202201274e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538662628122e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031753252907e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601602035056e-07 0000013 0.6001E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.061317064817e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314207628650e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538714820949e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031624832465e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601058284873e-07 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 5 KSP Residual norm 2.711147491646e-09 0000014 0.2640E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to 
CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.061317065303e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314210822353e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538750779210e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031646930709e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601111939693e-07 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 5 KSP Residual norm 2.710026732371e-09 0000015 0.7712E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to 
CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.061317065606e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314213175941e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538753978218e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031653392986e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601142782441e-07 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 5 KSP Residual norm 2.711212471088e-09 0000016 0.3455E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to 
CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.061317077674e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314208812309e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538784092617e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031663931282e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601165007389e-07 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 5 KSP Residual norm 2.711310183065e-09 0000017 0.1385E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to 
CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.061317064500e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314212836216e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538710308210e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031672844814e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.601238377940e-07 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 5 KSP Residual norm 2.710467667727e-09 
0000018 0.1014E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 1.061317068254e+03 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 1 KSP Residual norm 4.314211913659e+00 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 2 KSP Residual norm 1.538827062974e-02 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 3 KSP Residual norm 1.031590089559e-04 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 4 KSP Residual norm 3.600831497655e-07 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to 
CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 5 KSP Residual norm 2.709837887968e-09 0000019 0.9232E-08 0.0000E+00 TIME FOR CALCULATION: 0.6083E+04 L2-NORM ERROR U VELOCITY 2.808739609933520E-005 L2-NORM ERROR V VELOCITY 2.786225784858970E-005 L2-NORM ERROR W VELOCITY 2.914198563073946E-005 L2-NORM ERROR ABS. VELOCITY 3.154082267224834E-005 L2-NORM ERROR PRESSURE 1.387416651056730E-003 *** CALCULATION FINISHED - SEE RESULTS *** ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./caffa3d.cpld.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0039 with 1 processor, by gu08vomo Wed Feb 4 11:23:23 2015 Using Petsc Release Version 3.5.3, Jan, 31, 2015 Max Max/Min Avg Total Time (sec): 6.113e+03 1.00000 6.113e+03 Objects: 3.286e+03 1.00000 3.286e+03 Flops: 6.812e+12 1.00000 6.812e+12 6.812e+12 Flops/sec: 1.114e+09 1.00000 1.114e+09 1.114e+09 MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Reductions: 0.000e+00 0.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.0866e+02 1.8% 2.6364e+07 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 1: CPLD_SOL: 6.0048e+03 98.2% 6.8122e+12 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage ThreadCommRunKer 5 1.0 5.2452e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNorm 1 1.0 2.4110e-01 1.0 1.76e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 67 0 0 0 73 VecScale 1 1.0 6.9940e-03 1.0 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 1257 VecSet 656 1.0 6.7253e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 VecScatterBegin 698 1.0 2.0170e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 1 1.0 6.9940e-03 1.0 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 1257 MatAssemblyBegin 38 1.0 1.2875e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 38 1.0 1.9623e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 0 0 0 0 0 MatZeroEntries 19 1.0 1.4439e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 PetscBarrier 76 1.0 4.0770e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 --- Event Stage 1: CPLD_SOL VecMDot 5308 1.0 7.4653e+01 1.0 2.25e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 3 0 0 0 1 3 0 0 0 3014 VecNorm 6107 1.0 2.0860e+01 1.0 7.51e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 3600 VecScale 17413 1.0 4.6473e+01 1.0 6.23e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 1340 VecCopy 889 1.0 4.9240e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 135930 1.0 5.3929e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 979 1.0 7.5397e+00 1.0 1.23e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1629 VecAYPX 79905 1.0 4.7291e+01 1.0 3.94e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 834 VecMAXPY 6178 1.0 1.0748e+02 1.0 2.89e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 2 4 0 0 0 2689 VecScatterBegin 32322 1.0 1.7972e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 VecNormalize 6088 1.0 4.9487e+01 1.0 1.12e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 2266 MatMult 42821 1.0 4.6083e+03 1.0 5.58e+12 1.0 0.0e+00 0.0e+00 0.0e+00 75 82 0 0 0 77 82 0 0 0 1211 MatMultAdd 80486 1.0 2.3782e+02 1.0 3.01e+11 1.0 0.0e+00 0.0e+00 0.0e+00 4 4 0 0 0 4 4 0 0 0 1267 MatSolve 16652 1.0 1.9250e+01 1.0 1.82e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 947 MatSOR 159810 1.0 3.7930e+03 1.0 4.01e+12 1.0 0.0e+00 0.0e+00 0.0e+00 62 59 0 0 0 63 59 0 0 0 1056 MatLUFactorSym 57 1.0 3.6383e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 76 1.0 2.0026e+00 1.0 8.68e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 434 MatILUFactorSym 1 1.0 8.4865e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatResidual 79905 1.0 4.5082e+02 1.0 7.28e+11 1.0 0.0e+00 0.0e+00 0.0e+00 7 11 0 0 0 8 11 0 0 0 1615 MatAssemblyBegin 1045 1.0 1.6451e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 1045 1.0 8.9879e+00 1.0 0.00e+00 0.0 
0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 58 1.0 5.9843e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetSubMatrice 190 1.0 4.0555e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatGetOrdering 58 1.0 9.6858e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 54 1.0 3.3162e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPGMRESOrthog 5218 1.0 1.5732e+02 1.0 4.47e+11 1.0 0.0e+00 0.0e+00 0.0e+00 3 7 0 0 0 3 7 0 0 0 2840 KSPSetUp 456 1.0 1.2921e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 19 1.0 5.9995e+03 1.0 6.81e+12 1.0 0.0e+00 0.0e+00 0.0e+00 98100 0 0 0 100100 0 0 0 1134 PCSetUp 114 1.0 2.8368e+02 1.0 1.64e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 58 PCApply 90 1.0 5.9555e+03 1.0 6.77e+12 1.0 0.0e+00 0.0e+00 0.0e+00 97 99 0 0 0 99 99 0 0 0 1137 --- Event Stage 2: Unknown ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Vector 71 227 3758900776 0 Vector Scatter 4 8 5216 0 Index Set 12 30 17599784 0 IS L to G Mapping 4 3 79093788 0 Matrix 2 62 6128226616 0 Krylov Solver 0 25 82528 0 Preconditioner 0 25 28144 0 Distributed Mesh 0 1 4448 0 Star Forest Bipartite Graph 0 2 1632 0 Discrete System 0 1 800 0 --- Event Stage 1: CPLD_SOL Vector 1904 1745 6580253192 0 Vector Scatter 5 0 0 0 Index Set 284 262 207936 0 Matrix 924 864 6427997568 0 Matrix Null Space 19 0 0 0 Krylov Solver 26 1 1328 0 Preconditioner 26 1 1016 0 Viewer 1 0 0 0 Distributed Mesh 1 0 0 0 Star Forest Bipartite Graph 2 0 0 0 Discrete System 1 0 0 0 --- Event Stage 2: Unknown ======================================================================================================================== Average time to get PetscTime(): 0 #PETSc Option Table entries: -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml -coupledsolve_fieldsplit_0_ksp_converged_reason -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 -coupledsolve_fieldsplit_0_pc_type fieldsplit -coupledsolve_fieldsplit_1_ksp_converged_reason -coupledsolve_fieldsplit_1_ksp_rtol 1e-2 -coupledsolve_ksp_monitor -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_pc_fieldsplit_type schur -coupledsolve_pc_type fieldsplit -log_summary -on_error_abort -options_left -options_table #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml ----------------------------------------- Libraries compiled on Sun Feb 1 16:09:22 2015 on hla0003 Machine characteristics: 
Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3 Using PETSc arch: arch-openmpi-opt-intel-hlr-ext ----------------------------------------- Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc -fPIC -wd1572 -O3 -xHost ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 -fPIC -O3 -xHost ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include ----------------------------------------- Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 
-Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl ----------------------------------------- #PETSc Option Table entries: -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml -coupledsolve_fieldsplit_0_ksp_converged_reason -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 -coupledsolve_fieldsplit_0_pc_type fieldsplit -coupledsolve_fieldsplit_1_ksp_converged_reason -coupledsolve_fieldsplit_1_ksp_rtol 1e-2 -coupledsolve_ksp_monitor -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_pc_fieldsplit_type schur -coupledsolve_pc_type fieldsplit -log_summary -on_error_abort -options_left -options_table #End of PETSc Option Table entries There is one unused database option. 
It is: Option left: name:-options_table (no value) -------------- next part -------------- KSP Object:(coupledsolve_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.001, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: fieldsplit FieldSplit with Schur preconditioner, blocksize = 4, factorization FULL Preconditioner for the Schur complement formed from A11 Split info: Split number 0 Fields 0, 1, 2 Split number 1 Fields 3 KSP solver for A00 block KSP Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: fieldsplit FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize = 3 Solver info for each split is in the following KSP objects: Split number 0 Fields 0 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 
total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node 
routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 1 Fields 1 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as 
down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 2 Fields 2 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial 
guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: 
richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: seqaij rows=6591000, cols=6591000 total: nonzeros=4.40448e+07, allocated nonzeros=4.40448e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines KSP solver for S = A11 - A10 inv(A00) A01 KSP Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.01, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=2197000 package used to perform factorization: petsc total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix followed by preconditioner matrix: Mat Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: schurcomplement rows=2197000, cols=2197000 Schur complement A11 - A10 inv(A00) A01 A11 Mat Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during 
MatSetValues calls =0 not using I-node routines A10 Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=6591000 total: nonzeros=4.37453e+07, allocated nonzeros=4.37453e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines KSP of A00 KSP Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: fieldsplit FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize = 3 Solver info for each split is in the following KSP objects: Split number 0 Fields 0 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left 
preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 1 Fields 1 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: preonly maximum iterations=10000, 
initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: sor SOR: type = 
local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 2 Fields 2 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package 
used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: 
nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: seqaij rows=6591000, cols=6591000 total: nonzeros=4.40448e+07, allocated nonzeros=4.40448e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines A01 Mat Object: 1 MPI processes type: seqaij rows=6591000, cols=2197000 total: nonzeros=4.37453e+07, allocated nonzeros=4.37453e+07 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2170168 nodes, limit used is 5 Mat Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines From Sanjay.Kharche at manchester.ac.uk Wed Feb 4 14:59:20 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Wed, 4 Feb 2015 20:59:20 +0000 Subject: [petsc-users] passing information to TSIFunction Message-ID: Hi I started with the ex15.c example from ts. Now I would like to pass a 2D int array I call data2d to the FormIFunction which constructs the udot - RHS. FormIFunction is used in Petsc's TSSetIFunction. My data2d is determined at run time in the initialisation on each rank. data2d is the same size as the solution array and the residual array. I tried adding a Vec to FormIFunction, but Petsc's TSIFunction ( TSSetIFunction(ts,r,FormIFunction,&user); ) expects a set number & type of arguments to FormIFunction. I tried passing data2d as a regular int pointer as well as a Vec. As a Vec, I tried to access the data2d in a similar way as the solution vector, which caused the serial and parallel execution to produce errors. Any ideas on how I can get an array of ints to FormIFunction? thanks Sanjay The function declaration: // petsc functions. 
extern PetscInt FormIFunction(TS,PetscReal,Vec,Vec,Vec,void*, Vec); // last Vec is supposed to be my data2D, which is a duplicate of the u. I duplicate as follows: DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,usr_MX,usr_MY,PETSC_DECIDE,PETSC_DECIDE,1,1,NULL,NULL,&da); user.da = da; DMCreateGlobalVector(da,&u); VecDuplicate(u,&r); VecDuplicate(u,&Data2D); // so my assumption is that data2D is part of da, but I cannot see/set its type anywhere The warnings/notes at build time: > make sk2d /home/sanjay/petsc/linux-gnu-c-debug/bin/mpicc -o sk2d.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/home/sanjay/petsc/include -I/home/sanjay/petsc/linux-gnu-c-debug/include `pwd`/sk2d.c /home/sanjay/petscProgs/Work/twod/sk2d.c: In function ?main?: /home/sanjay/petscProgs/Work/twod/sk2d.c:228:4: warning: passing argument 3 of ?TSSetIFunction? from incompatible pointer type [enabled by default] /home/sanjay/petsc/include/petscts.h:261:29: note: expected ?TSIFunction? but argument is of type ?PetscInt (*)(struct _p_TS *, PetscReal, struct _p_Vec *, struct _p_Vec *, struct _p_Vec *, void *, struct _p_Vec *)? From knepley at gmail.com Wed Feb 4 15:03:09 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Feb 2015 15:03:09 -0600 Subject: [petsc-users] passing information to TSIFunction In-Reply-To: References: Message-ID: On Wed, Feb 4, 2015 at 2:59 PM, Sanjay Kharche < Sanjay.Kharche at manchester.ac.uk> wrote: > > Hi > > I started with the ex15.c example from ts. Now I would like to pass a 2D > int array I call data2d to the FormIFunction which constructs the udot - > RHS. FormIFunction is used in Petsc's TSSetIFunction. My data2d is > determined at run time in the initialisation on each rank. data2d is the > same size as the solution array and the residual array. > > I tried adding a Vec to FormIFunction, but Petsc's TSIFunction ( > TSSetIFunction(ts,r,FormIFunction,&user); ) expects a set number & type of > arguments to FormIFunction. I tried passing data2d as a regular int pointer > as well as a Vec. As a Vec, I tried to access the data2d in a similar way > as the solution vector, which caused the serial and parallel execution to > produce errors. > 1) This is auxiliary data which must come in through the context argument. Many many example use a context 2) You should read the chapter on DAs in the manual. It describes the data layout. In order for your code to work in parallel I suggest you use a Vec and cast to int when you need the value. Thanks, Matt > Any ideas on how I can get an array of ints to FormIFunction? > > thanks > Sanjay > > > The function declaration: > // petsc functions. > extern PetscInt FormIFunction(TS,PetscReal,Vec,Vec,Vec,void*, Vec); // > last Vec is supposed to be my data2D, which is a duplicate of the u. 
> > I duplicate as follows: > DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, > DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,usr_MX,usr_MY,PETSC_DECIDE,PETSC_DECIDE,1,1,NULL,NULL,&da); > user.da = da; > DMCreateGlobalVector(da,&u); > VecDuplicate(u,&r); > VecDuplicate(u,&Data2D); // so my assumption is that data2D is part of > da, but I cannot see/set its type anywhere > > The warnings/notes at build time: > > > make sk2d > /home/sanjay/petsc/linux-gnu-c-debug/bin/mpicc -o sk2d.o -c -fPIC -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 > -I/home/sanjay/petsc/include > -I/home/sanjay/petsc/linux-gnu-c-debug/include `pwd`/sk2d.c > /home/sanjay/petscProgs/Work/twod/sk2d.c: In function ?main?: > /home/sanjay/petscProgs/Work/twod/sk2d.c:228:4: warning: passing argument > 3 of ?TSSetIFunction? from incompatible pointer type [enabled by default] > /home/sanjay/petsc/include/petscts.h:261:29: note: expected ?TSIFunction? > but argument is of type ?PetscInt (*)(struct _p_TS *, PetscReal, struct > _p_Vec *, struct _p_Vec *, struct _p_Vec *, void *, struct _p_Vec *)? > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Wed Feb 4 22:00:20 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Wed, 4 Feb 2015 23:00:20 -0500 Subject: [petsc-users] numerical quadrature Message-ID: Suppose I have a function f at sample points x, with x and f both stored as Vec distributed structures. What I would like to do is compute an estimate of the anti derivative of f, \int_a^x f(s)ds for a<= x <=b. One way I can see how to compute this efficiently is to do the numerical quadrature on each node, and then use standard MPI to send the successive cumulative quantity from processor 0 to 1 to 2, and so on. I am wondering if there is a ?PETSc? way to do this kind of calculation, as opposed to relying on MPI code. -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 4 22:02:56 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Feb 2015 22:02:56 -0600 Subject: [petsc-users] numerical quadrature In-Reply-To: References: Message-ID: On Wed, Feb 4, 2015 at 10:00 PM, Gideon Simpson wrote: > Suppose I have a function f at sample points x, with x and f both stored > as Vec distributed structures. What I would like to do is compute an > estimate of the anti derivative of f, > > \int_a^x f(s)ds > > for a<= x <=b. > > One way I can see how to compute this efficiently is to do the numerical > quadrature on each node, and then use standard MPI to send the successive > cumulative quantity from processor 0 to 1 to 2, and so on. I am wondering > if there is a ?PETSc? way to do this kind of calculation, as opposed to > relying on MPI code. > I would use MPI, but I would use http://www.mpich.org/static/docs/v3.1/www3/MPI_Scan.html which will give you all the partial sums at once. Matt > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
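
To make the MPI_Scan suggestion concrete, here is a minimal sketch in C. It assumes uniformly spaced samples with spacing h and a standard contiguously distributed Vec, and it ignores the trapezoid panel that straddles two neighbouring ranks; the function name is illustrative, not part of the thread.

#include <petscvec.h>

/* Each rank integrates its own chunk of f with the trapezoid rule (uniform
   spacing h assumed); MPI_Scan then turns the per-rank integrals into running
   totals.  The panel between the last point of one rank and the first point of
   the next is omitted to keep the sketch short. */
PetscErrorCode RunningIntegralOffset(Vec f, PetscReal h, PetscReal *offset)
{
  const PetscScalar *fa;
  PetscInt           n, i;
  PetscReal          local = 0.0, inclusive = 0.0;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(f, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(f, &fa);CHKERRQ(ierr);
  for (i = 0; i < n-1; i++) local += 0.5*h*PetscRealPart(fa[i] + fa[i+1]);
  ierr = VecRestoreArrayRead(f, &fa);CHKERRQ(ierr);
  ierr = MPI_Scan(&local, &inclusive, 1, MPIU_REAL, MPI_SUM, PETSC_COMM_WORLD);CHKERRQ(ierr);
  *offset = inclusive - local;  /* integral of f over all ranks before this one */
  PetscFunctionReturn(0);
}

Adding this offset to the local cumulative trapezoid sums then gives the antiderivative at each rank's own sample points.
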
URL: From jed at jedbrown.org Wed Feb 4 22:37:43 2015 From: jed at jedbrown.org (Jed Brown) Date: Wed, 04 Feb 2015 21:37:43 -0700 Subject: [petsc-users] numerical quadrature In-Reply-To: References: Message-ID: <8761bhas88.fsf@jedbrown.org> Matthew Knepley writes: > I would use MPI, but I would use > > http://www.mpich.org/static/docs/v3.1/www3/MPI_Scan.html > > which will give you all the partial sums at once. Yes, MPI_Scan if you want the owner of each x to know ?_a^x f MPI_Allreduce if you want everyone to know ?_a^y f for a single y known globally. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From Sanjay.Kharche at manchester.ac.uk Thu Feb 5 03:28:41 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Thu, 5 Feb 2015 09:28:41 +0000 Subject: [petsc-users] passing information to TSIFunction In-Reply-To: References: , Message-ID: Hi I need to pass a 2D array of ints to user defined functions, especially RHS. Ideally, this 2D array is dynamically created at run time to make my application general. Yesterday's discussion is below. I did this in the application context: /* User-defined data structures and routines */ /* AppCtx: used by FormIFunction() */ typedef struct { DM da; // DM instance in which u, r are placed. PetscInt geometry[usr_MY][usr_MX]; // This is static, so the whole thing is visible to everybody who gets the context. This is my working solution as of now. Vec geom; // I duplicate u for this in calling function. VecDuplicate( u , &user.geom). I cannot pass this to RHS function, I cannot access values in geom in the called function. PetscInt **geomet; // I calloc this in the calling function. I cannot access the data in RHS function int **anotherGeom; // just int. } AppCtx; This static geometry 2D array can be seen in all functions that receive the application context. This is a working solution to my problem, although not ideal. The ideal solution would be if I can pass and receive something like geom or geomet which are dynamically created after the 2D DA is created in the calling function. I am working through the manual and the examples, but some indication of how to solve this specific issue will be great. cheers Sanjay ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: 04 February 2015 21:03 To: Sanjay Kharche Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] passing information to TSIFunction On Wed, Feb 4, 2015 at 2:59 PM, Sanjay Kharche > wrote: Hi I started with the ex15.c example from ts. Now I would like to pass a 2D int array I call data2d to the FormIFunction which constructs the udot - RHS. FormIFunction is used in Petsc's TSSetIFunction. My data2d is determined at run time in the initialisation on each rank. data2d is the same size as the solution array and the residual array. I tried adding a Vec to FormIFunction, but Petsc's TSIFunction ( TSSetIFunction(ts,r,FormIFunction,&user); ) expects a set number & type of arguments to FormIFunction. I tried passing data2d as a regular int pointer as well as a Vec. As a Vec, I tried to access the data2d in a similar way as the solution vector, which caused the serial and parallel execution to produce errors. 1) This is auxiliary data which must come in through the context argument. Many many example use a context 2) You should read the chapter on DAs in the manual. It describes the data layout. 
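
For reference, a minimal sketch of the context-based pattern just described, in the shape used by ex15.c. The AppCtx layout, the Vec named geom and the loop body are assumptions for illustration: the integer flags are stored as PetscScalar in a Vec duplicated from u, and cast back to PetscInt where they are read.

#include <petscts.h>
#include <petscdmda.h>

typedef struct {
  DM  da;    /* same DMDA that u and r were created from */
  Vec geom;  /* integer flags stored as PetscScalar, e.g. VecDuplicate(u,&user.geom) */
} AppCtx;

PetscErrorCode FormIFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ptr)
{
  AppCtx        *user = (AppCtx*)ptr;
  PetscScalar  **g;
  PetscInt       i, j, xs, ys, xm, ym;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMDAGetCorners(user->da, &xs, &ys, NULL, &xm, &ym, NULL);CHKERRQ(ierr);
  ierr = DMDAVecGetArray(user->da, user->geom, &g);CHKERRQ(ierr);
  for (j = ys; j < ys+ym; j++) {
    for (i = xs; i < xs+xm; i++) {
      PetscInt flag = (PetscInt)PetscRealPart(g[j][i]); /* recover the int flag */
      (void)flag; /* ... use flag while filling F from U and Udot ... */
    }
  }
  ierr = DMDAVecRestoreArray(user->da, user->geom, &g);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* in main():  user.da = da;  VecDuplicate(u,&user.geom);  ...fill user.geom...
   TSSetIFunction(ts, r, FormIFunction, &user); */

With this signature the incompatible-pointer warning from the original post should also go away, since FormIFunction now matches the TSIFunction prototype.
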
In order for your code to work in parallel I suggest you use a Vec and cast to int when you need the value. Thanks, Matt Any ideas on how I can get an array of ints to FormIFunction? thanks Sanjay The function declaration: // petsc functions. extern PetscInt FormIFunction(TS,PetscReal,Vec,Vec,Vec,void*, Vec); // last Vec is supposed to be my data2D, which is a duplicate of the u. I duplicate as follows: DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,usr_MX,usr_MY,PETSC_DECIDE,PETSC_DECIDE,1,1,NULL,NULL,&da); user.da = da; DMCreateGlobalVector(da,&u); VecDuplicate(u,&r); VecDuplicate(u,&Data2D); // so my assumption is that data2D is part of da, but I cannot see/set its type anywhere The warnings/notes at build time: > make sk2d /home/sanjay/petsc/linux-gnu-c-debug/bin/mpicc -o sk2d.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/home/sanjay/petsc/include -I/home/sanjay/petsc/linux-gnu-c-debug/include `pwd`/sk2d.c /home/sanjay/petscProgs/Work/twod/sk2d.c: In function ?main?: /home/sanjay/petscProgs/Work/twod/sk2d.c:228:4: warning: passing argument 3 of ?TSSetIFunction? from incompatible pointer type [enabled by default] /home/sanjay/petsc/include/petscts.h:261:29: note: expected ?TSIFunction? but argument is of type ?PetscInt (*)(struct _p_TS *, PetscReal, struct _p_Vec *, struct _p_Vec *, struct _p_Vec *, void *, struct _p_Vec *)? -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From leoni.massimiliano1 at gmail.com Thu Feb 5 04:14:23 2015 From: leoni.massimiliano1 at gmail.com (Massimiliano Leoni) Date: Thu, 05 Feb 2015 11:14:23 +0100 Subject: [petsc-users] KSP not solving correctly Message-ID: <2936161.5A09K4NSlq@debianxps> Hi petsc-users, I stumbled across a curious issue while trying to setup some solver for Navier-Stokes. In one sentence, I tested a solver computing b = Ax with x known, then solving the system with b to find the original x back, but it's not working. In detail, I want to implement a matrix free method for a matrix C2 = D^T M_L^{-1} D, where M_L^{-1} is the inverse lumped of M; here's what I do [I'm sorry for posting code, but I suspect the error could be quite subtle...] 
////// define utility struct typedef struct _C2ctx { Mat D; Vec Mdiag; } C2ctx; ////// define multiplication int C2mult(Mat A,Vec x,Vec y) { C2ctx* ctx; MatShellGetContext(A,&ctx); Mat D = ctx->D; Vec Mdiag = ctx->Mdiag; int Nu; VecGetSize(Mdiag,&Nu); Vec tmp; VecCreate(PETSC_COMM_WORLD,&tmp); VecSetType(tmp,VECSTANDARD); VecSetSizes(tmp,PETSC_DECIDE,Nu); MatMultTranspose(D,x,tmp); VecPointwiseDivide(tmp,tmp,Mdiag); MatMult(D,tmp,y); VecDestroy(&tmp); return 0; } Later on, I assemble D and M, and then ////// lump M into _Mdiag Vec _Mdiag; VecCreate(PETSC_COMM_WORLD,&_Mdiag); VecSetType(_Mdiag,VECSTANDARD); VecSetSizes(_Mdiag,PETSC_DECIDE,Nu); MatGetRowSum(_M,_Mdiag); ////// set context C2ctx ctx; ctx.D = _D; ctx.Mdiag = _Mdiag; ////// create _C2 [= A] Mat _C2; MatCreateShell(PETSC_COMM_WORLD,Np,Np,PETSC_DETERMINE,PETSC_DETERMINE,&ctx,&_C2); MatShellSetOperation(_C2,MATOP_MULT,(void(*)(void))C2mult); ////// create _b2 [= b] Vec _b2; VecCreate(PETSC_COMM_WORLD,&_b2); VecSetType(_b2,VECSTANDARD); VecSetSizes(_b2,PETSC_DECIDE,Np); ////// create _deltaP [= x] Vec _deltaP; VecCreate(PETSC_COMM_WORLD,&_deltaP); VecSetType(_deltaP,VECSTANDARD); VecSetSizes(_deltaP,PETSC_DECIDE,Np); ////// set _deltaP = 1 and _b2 = _C2*_deltaP, then change _deltaP for check VecSet(_deltaP,1.0); MatMult(_C2,_deltaP,_b2); VecSet(_deltaP,2.0); ////// setup KSP KSP _ksp2; KSPCreate(PETSC_COMM_WORLD,&_ksp2); KSPSetOperators(_ksp2,_C2,_C2,DIFFERENT_NONZERO_PATTERN); KSPSetType(_ksp2,"cg"); KSPSolve(_ksp2,_b2,_deltaP); According to my understanding, now _deltaP should be a nice row on 1s, but what happens is that it has some [apparently] random zeros here and there. Additional infos: [*] the problem comes from the cavity problem with navier-stokes. This particular step is to solve for the pressure and the matrix _C2 is singular [it's a laplacian] with nullspace the constants. I was told I can ignore this and avoid setting the nullspace as long as I use a CG or GMRES solver. [*] the number of "random zero entries" is very high with P2-P1 elements, and is significantly lower with P1+Bubble - P1. [*] Matrices M and D are automatically assembled by FEniCS, not by me. Can anybody please advice on what I am doing wrong? What could be causing this? Thanks in advance Massimiliano From bichinhoverde at spwinternet.com.br Thu Feb 5 04:38:16 2015 From: bichinhoverde at spwinternet.com.br (bichinhoverde) Date: Thu, 5 Feb 2015 08:38:16 -0200 Subject: [petsc-users] KSP not solving correctly In-Reply-To: <2936161.5A09K4NSlq@debianxps> References: <2936161.5A09K4NSlq@debianxps> Message-ID: If your linear system comes from a Laplace equation with Neumann boundary conditions (singular matrix) it means for a given b there are infinite possible x. So, you get one x and calculate the corresponding b. When you solve Ax=b, you might not get the exact same x, since there are infinite possible x for the same b. On Thu, Feb 5, 2015 at 8:14 AM, Massimiliano Leoni < leoni.massimiliano1 at gmail.com> wrote: > Hi petsc-users, > I stumbled across a curious issue while trying to setup some solver for > Navier-Stokes. > In one sentence, I tested a solver computing b = Ax with x known, then > solving > the system with b to find the original x back, but it's not working. > > In detail, I want to implement a matrix free method for a matrix C2 = D^T > M_L^{-1} D, where M_L^{-1} is the inverse lumped of M; here's what I do > > [I'm sorry for posting code, but I suspect the error could be quite > subtle...] 
> > ////// define utility struct > typedef struct _C2ctx > { > Mat D; > Vec Mdiag; > } C2ctx; > > ////// define multiplication > int C2mult(Mat A,Vec x,Vec y) > { > C2ctx* ctx; > MatShellGetContext(A,&ctx); > Mat D = ctx->D; > Vec Mdiag = ctx->Mdiag; > int Nu; > VecGetSize(Mdiag,&Nu); > > Vec tmp; > VecCreate(PETSC_COMM_WORLD,&tmp); > VecSetType(tmp,VECSTANDARD); > VecSetSizes(tmp,PETSC_DECIDE,Nu); > > MatMultTranspose(D,x,tmp); > VecPointwiseDivide(tmp,tmp,Mdiag); > MatMult(D,tmp,y); > > VecDestroy(&tmp); > return 0; > } > > Later on, I assemble D and M, and then > > ////// lump M into _Mdiag > Vec _Mdiag; > VecCreate(PETSC_COMM_WORLD,&_Mdiag); > VecSetType(_Mdiag,VECSTANDARD); > VecSetSizes(_Mdiag,PETSC_DECIDE,Nu); > MatGetRowSum(_M,_Mdiag); > > ////// set context > C2ctx ctx; > ctx.D = _D; > ctx.Mdiag = _Mdiag; > > ////// create _C2 [= A] > Mat _C2; > > MatCreateShell(PETSC_COMM_WORLD,Np,Np,PETSC_DETERMINE,PETSC_DETERMINE,&ctx,&_C2); > MatShellSetOperation(_C2,MATOP_MULT,(void(*)(void))C2mult); > > ////// create _b2 [= b] > Vec _b2; > VecCreate(PETSC_COMM_WORLD,&_b2); > VecSetType(_b2,VECSTANDARD); > VecSetSizes(_b2,PETSC_DECIDE,Np); > > ////// create _deltaP [= x] > Vec _deltaP; > VecCreate(PETSC_COMM_WORLD,&_deltaP); > VecSetType(_deltaP,VECSTANDARD); > VecSetSizes(_deltaP,PETSC_DECIDE,Np); > > ////// set _deltaP = 1 and _b2 = _C2*_deltaP, then change _deltaP for check > VecSet(_deltaP,1.0); > MatMult(_C2,_deltaP,_b2); > VecSet(_deltaP,2.0); > > ////// setup KSP > KSP _ksp2; > KSPCreate(PETSC_COMM_WORLD,&_ksp2); > KSPSetOperators(_ksp2,_C2,_C2,DIFFERENT_NONZERO_PATTERN); > KSPSetType(_ksp2,"cg"); > > KSPSolve(_ksp2,_b2,_deltaP); > > According to my understanding, now _deltaP should be a nice row on 1s, but > what happens is that it has some [apparently] random zeros here and there. > > Additional infos: > [*] the problem comes from the cavity problem with navier-stokes. This > particular step is to solve for the pressure and the matrix _C2 is singular > [it's a laplacian] with nullspace the constants. I was told I can ignore > this > and avoid setting the nullspace as long as I use a CG or GMRES solver. > > [*] the number of "random zero entries" is very high with P2-P1 elements, > and > is significantly lower with P1+Bubble - P1. > > [*] Matrices M and D are automatically assembled by FEniCS, not by me. > > Can anybody please advice on what I am doing wrong? What could be causing > this? > > Thanks in advance > Massimiliano > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Feb 5 04:40:52 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Feb 2015 04:40:52 -0600 Subject: [petsc-users] KSP not solving correctly In-Reply-To: <2936161.5A09K4NSlq@debianxps> References: <2936161.5A09K4NSlq@debianxps> Message-ID: On Thu, Feb 5, 2015 at 4:14 AM, Massimiliano Leoni < leoni.massimiliano1 at gmail.com> wrote: > Hi petsc-users, > I stumbled across a curious issue while trying to setup some solver for > Navier-Stokes. > In one sentence, I tested a solver computing b = Ax with x known, then > solving > the system with b to find the original x back, but it's not working. > > In detail, I want to implement a matrix free method for a matrix C2 = D^T > M_L^{-1} D, where M_L^{-1} is the inverse lumped of M; here's what I do > > [I'm sorry for posting code, but I suspect the error could be quite > subtle...] 
> > ////// define utility struct > typedef struct _C2ctx > { > Mat D; > Vec Mdiag; > } C2ctx; > > ////// define multiplication > int C2mult(Mat A,Vec x,Vec y) > { > C2ctx* ctx; > MatShellGetContext(A,&ctx); > Mat D = ctx->D; > Vec Mdiag = ctx->Mdiag; > int Nu; > VecGetSize(Mdiag,&Nu); > > Vec tmp; > VecCreate(PETSC_COMM_WORLD,&tmp); > VecSetType(tmp,VECSTANDARD); > VecSetSizes(tmp,PETSC_DECIDE,Nu); > > MatMultTranspose(D,x,tmp); > VecPointwiseDivide(tmp,tmp,Mdiag); > MatMult(D,tmp,y); > > VecDestroy(&tmp); > return 0; > } > > Later on, I assemble D and M, and then > > ////// lump M into _Mdiag > Vec _Mdiag; > VecCreate(PETSC_COMM_WORLD,&_Mdiag); > VecSetType(_Mdiag,VECSTANDARD); > VecSetSizes(_Mdiag,PETSC_DECIDE,Nu); > MatGetRowSum(_M,_Mdiag); > > ////// set context > C2ctx ctx; > ctx.D = _D; > ctx.Mdiag = _Mdiag; > > ////// create _C2 [= A] > Mat _C2; > > MatCreateShell(PETSC_COMM_WORLD,Np,Np,PETSC_DETERMINE,PETSC_DETERMINE,&ctx,&_C2); > MatShellSetOperation(_C2,MATOP_MULT,(void(*)(void))C2mult); > > ////// create _b2 [= b] > Vec _b2; > VecCreate(PETSC_COMM_WORLD,&_b2); > VecSetType(_b2,VECSTANDARD); > VecSetSizes(_b2,PETSC_DECIDE,Np); > > ////// create _deltaP [= x] > Vec _deltaP; > VecCreate(PETSC_COMM_WORLD,&_deltaP); > VecSetType(_deltaP,VECSTANDARD); > VecSetSizes(_deltaP,PETSC_DECIDE,Np); > > ////// set _deltaP = 1 and _b2 = _C2*_deltaP, then change _deltaP for check > VecSet(_deltaP,1.0); > MatMult(_C2,_deltaP,_b2); > VecSet(_deltaP,2.0); > > ////// setup KSP > KSP _ksp2; > KSPCreate(PETSC_COMM_WORLD,&_ksp2); > KSPSetOperators(_ksp2,_C2,_C2,DIFFERENT_NONZERO_PATTERN); > KSPSetType(_ksp2,"cg"); > > KSPSolve(_ksp2,_b2,_deltaP); > > According to my understanding, now _deltaP should be a nice row on 1s, but > what happens is that it has some [apparently] random zeros here and there. > > Additional infos: > [*] the problem comes from the cavity problem with navier-stokes. This > particular step is to solve for the pressure and the matrix _C2 is singular > [it's a laplacian] with nullspace the constants. I was told I can ignore > this > and avoid setting the nullspace as long as I use a CG or GMRES solver. > What the person who told you this may have meant is that CG/GMRES will not fail with your singular operator. However, you can get any one of the infinite possible solutions depending on your rhs vector. Set the nullspace. Matt > [*] the number of "random zero entries" is very high with P2-P1 elements, > and > is significantly lower with P1+Bubble - P1. > > [*] Matrices M and D are automatically assembled by FEniCS, not by me. > > Can anybody please advice on what I am doing wrong? What could be causing > this? > > Thanks in advance > Massimiliano > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Feb 5 09:32:09 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 5 Feb 2015 16:32:09 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: <1423082081.3096.6.camel@gmail.com> References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> Message-ID: On 4 February 2015 at 21:34, Fabian Gabel wrote: > Thank you for pointing me into the right direction. 
After some first > tests on a test case with 2e6 cells (4dof) I could measure a slight > improvement (25%) with respect to wall time, using a nested > field split for the velocities: > Great. That's a good start. It can be made ever faster. > -coupledsolve_pc_type fieldsplit > -coupledsolve_pc_fieldsplit_0_fields 0,1,2 > -coupledsolve_pc_fieldsplit_1_fields 3 > -coupledsolve_pc_fieldsplit_type schur > -coupledsolve_pc_fieldsplit_block_size 4 > -coupledsolve_fieldsplit_0_ksp_converged_reason > -coupledsolve_fieldsplit_1_ksp_converged_reason > -coupledsolve_fieldsplit_0_ksp_type gmres > -coupledsolve_fieldsplit_0_pc_type fieldsplit > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml > -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml > -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml > > Is it normal, that I have to explicitly specify the block size for each > fieldsplit? > No. You should be able to just specify -coupledsolve_fieldsplit_ksp_converged -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml and same options will be applied to all splits (0,1,2). Does this functionality not work? > I attached the results (-converged_reason only for readability and another > file > solely for the output of -ksp_view). I am not sure if this result could > be improved by modifying any of the solver options. > Yes, I believe they can. > > Are there any guidelines to follow that I could use to avoid taking wild > guesses? > Sure. There are lots of papers published on how to construct robust block preconditioners for saddle point problems arising from Navier Stokes. I would start by looking at this book: Finite Elements and Fast Iterative Solvers Howard Elman, David Silvester and Andy Wathen Oxford University Press See chapters 6 and 8. Your preconditioner is performing a full block LDU factorization with quite accurate inner solves. This is probably overkill - but it will be robust. The most relaxed approach would be : -coupledsolve_fieldsplit_0_ksp_type preonly -coupledsolve_fieldsplit_1_ksp_type preonly -coupledsolve_pc_fieldsplit_schur_fact_type DIAG Something more aggressive (but less stringent that your original test) would be: -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_ksp_rtol 1.0e-2 -coupledsolve_fieldsplit_1_ksp_type preonly -coupledsolve_pc_fieldsplit_schur_fact_type UPPER When building the FS preconditioner, you can start with the absolute most robust choices and then relax those choices to improve speed and hopefully not destroy the convergence, or you can start with a light weight preconditioner and make it stronger to improve convergence. Where the balance lies is very much problem dependent. > > > Petsc has some support to generate approximate pressure schur > > complements for you, but these will not be as good as the ones > > specifically constructed for you particular discretization. > > I came across a tutorial (/snes/examples/tutorials/ex70.c), which shows > 2 different approaches: > > 1- provide a Preconditioner \hat{S}p for the approximation of the true > Schur complement > > 2- use another Matrix (in this case its the Matrix used for constructing > the preconditioner in the former approach) as a new approximation of the > Schur complement. > > Speaking in terms of the PETSc-manual p.87, looking at the factorization > of the Schur field split preconditioner, approach 1 sets \hat{S}p while > approach 2 furthermore sets \hat{S}. Is this correct? > > No this is not correct. 
\hat{S} is always constructed by PETSc as \hat{S} = A11 - A10 KSP(A00) A01 This is the definition of the pressure schur complement used by FieldSplit. Note that it is inexact since the action y = inv(A00) x is replaced by a Krylov solve, e.g. we solve A00 y = x for y You have two choices in how to define the preconditioned, \hat{S_p}: [1] Assemble you own matrix (as is done in ex70) [2] Let PETSc build one. PETSc does this according to \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > > > > [2] If you assembled a different operator for your preconditioner in > > which the B_pp slot contained a pressure schur complement > > approximation, you could use the simpler and likely more robust option > > (assuming you know of a decent schur complement approximation for you > > discretisation and physical problem) > > > -coupledsolve_pc_type fieldsplit > > -coupledsolve_pc_fieldsplit_type MULTIPLICATIVE > > > > which include you U-p coupling, or just > > > > -coupledsolve_pc_fieldsplit_type ADDITIVE > > > > > > which would define the following preconditioner > > > > inv(B) = diag( inv(B_uu,) , inv(B_vv) , inv(B_ww) , inv(B_pp) ) > > > What do you refer to with "B_pp slot"? I don't understand this approach > completely. What would I need a Schur complement approximation for, if I > don't use a Schur complement preconditioner? > > I was referring to constructing an operator which approximates the schur complement and inserting it into the pressure-pressure coupling block (pp slot) > > Option 2 would be better as your operator doesn't have an u_i-u_j, i ! > > = j coupling and you could use efficient AMG implementations for each > > scalar terms associated with u-u, v-v, w-w coupled terms without > > having to split again. > > > > Also, fieldsplit will not be aware of the fact that the Auu, Avv, Aww > > blocks are all identical - thus it cannot do anything "smart" in order > > to save memory. Accordingly, the KSP defined for each u,v,w split will > > be a unique KSP object. If your A_ii are all identical and you want to > > save memory, you could use MatNest but as Matt will always yell out, > > "MatNest is ONLY a memory optimization and should be ONLY be used once > > all solver exploration/testing is performed". > > Thanks, I will keep this in mind. Does this mean that I would only have > to assemble one matrix for the velocities instead of three? > Yes, exactly. > > > > > - A_pp is defined as the matrix resulting from the > > discretization of the > > pressure equation that considers only the pressure related > > terms. > > > > > > Hmm okay, i assumed for incompressible NS the pressure equation > > > > that the pressure equation would be just \div(u) = 0. > > > > Indeed, many finite element(!) formulations I found while researching use > this approach, which leads to the block A_pp being zero. I however use a > collocated finite volume formulation and, to avoid checkerboarding of the > pressure field, I deploy a pressure weighted interpolation method to > approximate the velocities surging from the discretisation of \div{u}. > This gives me an equation with the pressure as the dominant variable. > Ah okay, you stabilize the discretisation. Now I understand why you have entries in the PP block. Cheers Dave -------------- next part -------------- An HTML attachment was scrubbed... 
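
As a concrete sketch of option [1] above, assuming a matrix Sp has already been assembled as an approximation of the pressure Schur complement for this particular discretisation (whether it is a good approximation is problem dependent, and the function name is illustrative):

#include <petscksp.h>

/* Hand PCFIELDSPLIT a user-assembled Sp approximating S = A11 - A10 inv(A00) A01;
   Sp is used only to build the preconditioner for the Schur-complement solve.
   The coupled operator is assumed to be set on ksp elsewhere. */
PetscErrorCode UseUserSchurPre(KSP ksp, Mat Sp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, Sp);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The fields themselves still have to be declared, e.g. with PCFieldSplitSetIS or -pc_fieldsplit_N_fields, before the solve.
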
URL: From knepley at gmail.com Thu Feb 5 09:35:08 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Feb 2015 09:35:08 -0600 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> Message-ID: On Thu, Feb 5, 2015 at 9:32 AM, Dave May wrote: > > > On 4 February 2015 at 21:34, Fabian Gabel wrote: > >> Thank you for pointing me into the right direction. After some first >> tests on a test case with 2e6 cells (4dof) I could measure a slight >> improvement (25%) with respect to wall time, using a nested >> field split for the velocities: >> > > Great. That's a good start. It can be made ever faster. > > > >> -coupledsolve_pc_type fieldsplit >> -coupledsolve_pc_fieldsplit_0_fields 0,1,2 >> -coupledsolve_pc_fieldsplit_1_fields 3 >> -coupledsolve_pc_fieldsplit_type schur >> -coupledsolve_pc_fieldsplit_block_size 4 >> -coupledsolve_fieldsplit_0_ksp_converged_reason >> -coupledsolve_fieldsplit_1_ksp_converged_reason >> -coupledsolve_fieldsplit_0_ksp_type gmres >> -coupledsolve_fieldsplit_0_pc_type fieldsplit >> -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 >> -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml >> -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml >> -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml >> >> Is it normal, that I have to explicitly specify the block size for each >> fieldsplit? >> > > No. You should be able to just specify > > -coupledsolve_fieldsplit_ksp_converged > -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml > > and same options will be applied to all splits (0,1,2). > Does this functionality not work? > > > >> I attached the results (-converged_reason only for readability and >> another file >> solely for the output of -ksp_view). I am not sure if this result could >> be improved by modifying any of the solver options. >> > > Yes, I believe they can. > > > >> >> Are there any guidelines to follow that I could use to avoid taking wild >> guesses? >> > > Sure. There are lots of papers published on how to construct robust block > preconditioners for saddle point problems arising from Navier Stokes. > I would start by looking at this book: > > Finite Elements and Fast Iterative Solvers > Howard Elman, David Silvester and Andy Wathen > Oxford University Press > See chapters 6 and 8. > > Your preconditioner is performing a full block LDU factorization with > quite accurate inner solves. This is probably overkill - but it will be > robust. > > The most relaxed approach would be : > -coupledsolve_fieldsplit_0_ksp_type preonly > -coupledsolve_fieldsplit_1_ksp_type preonly > -coupledsolve_pc_fieldsplit_schur_fact_type DIAG > > Something more aggressive (but less stringent that your original test) > would be: > -coupledsolve_fieldsplit_0_ksp_type gmres > -coupledsolve_fieldsplit_0_ksp_rtol 1.0e-2 > -coupledsolve_fieldsplit_1_ksp_type preonly > -coupledsolve_pc_fieldsplit_schur_fact_type UPPER > > When building the FS preconditioner, you can start with the absolute most > robust choices and then relax those choices to improve speed and hopefully > not destroy the convergence, or you can start with a light weight > preconditioner and make it stronger to improve convergence. > Where the balance lies is very much problem dependent. 
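
Set in code rather than on the command line, the relaxed variant quoted above might look roughly as follows; this assumes pc is the coupled solve's PCFIELDSPLIT with its operators and splits already defined, and the 1e-2 tolerance is illustrative only.

#include <petscksp.h>

PetscErrorCode RelaxSchurFieldSplit(PC pc)
{
  KSP           *subksp;
  PetscInt       nsplits;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_UPPER);CHKERRQ(ierr);
  ierr = PCSetUp(pc);CHKERRQ(ierr);                        /* sub-KSPs exist after setup */
  ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);CHKERRQ(ierr);
  ierr = KSPSetType(subksp[0], KSPGMRES);CHKERRQ(ierr);    /* inexact A00 solve */
  ierr = KSPSetTolerances(subksp[0], 1.0e-2, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = KSPSetType(subksp[1], KSPPREONLY);CHKERRQ(ierr);  /* Schur block: apply preconditioner only */
  ierr = PetscFree(subksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
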
> > > >> >> > Petsc has some support to generate approximate pressure schur >> > complements for you, but these will not be as good as the ones >> > specifically constructed for you particular discretization. >> >> I came across a tutorial (/snes/examples/tutorials/ex70.c), which shows >> 2 different approaches: >> >> 1- provide a Preconditioner \hat{S}p for the approximation of the true >> Schur complement >> >> 2- use another Matrix (in this case its the Matrix used for constructing >> the preconditioner in the former approach) as a new approximation of the >> Schur complement. >> >> Speaking in terms of the PETSc-manual p.87, looking at the factorization >> of the Schur field split preconditioner, approach 1 sets \hat{S}p while >> approach 2 furthermore sets \hat{S}. Is this correct? >> >> > No this is not correct. > \hat{S} is always constructed by PETSc as > \hat{S} = A11 - A10 KSP(A00) A01 > This is the definition of the pressure schur complement used by > FieldSplit. > Note that it is inexact since the action > y = inv(A00) x > is replaced by a Krylov solve, e.g. we solve A00 y = x for y > > You have two choices in how to define the preconditioned, \hat{S_p}: > [1] Assemble you own matrix (as is done in ex70) > [2] Let PETSc build one. PETSc does this according to > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > There are a few options here: http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/PC/PCFieldSplitSetSchurPre.html Thanks, Matt > > > >> > >> > [2] If you assembled a different operator for your preconditioner in >> > which the B_pp slot contained a pressure schur complement >> > approximation, you could use the simpler and likely more robust option >> > (assuming you know of a decent schur complement approximation for you >> > discretisation and physical problem) >> >> > -coupledsolve_pc_type fieldsplit >> > -coupledsolve_pc_fieldsplit_type MULTIPLICATIVE >> > >> > which include you U-p coupling, or just >> > >> > -coupledsolve_pc_fieldsplit_type ADDITIVE >> > >> > >> > which would define the following preconditioner >> > >> > inv(B) = diag( inv(B_uu,) , inv(B_vv) , inv(B_ww) , inv(B_pp) ) >> >> >> What do you refer to with "B_pp slot"? I don't understand this approach >> completely. What would I need a Schur complement approximation for, if I >> don't use a Schur complement preconditioner? >> >> > I was referring to constructing an operator which approximates the schur > complement and inserting it into the pressure-pressure coupling block (pp > slot) > > > >> > Option 2 would be better as your operator doesn't have an u_i-u_j, i ! >> > = j coupling and you could use efficient AMG implementations for each >> > scalar terms associated with u-u, v-v, w-w coupled terms without >> > having to split again. >> > >> > Also, fieldsplit will not be aware of the fact that the Auu, Avv, Aww >> > blocks are all identical - thus it cannot do anything "smart" in order >> > to save memory. Accordingly, the KSP defined for each u,v,w split will >> > be a unique KSP object. If your A_ii are all identical and you want to >> > save memory, you could use MatNest but as Matt will always yell out, >> > "MatNest is ONLY a memory optimization and should be ONLY be used once >> > all solver exploration/testing is performed". >> >> Thanks, I will keep this in mind. Does this mean that I would only have >> to assemble one matrix for the velocities instead of three? >> > > Yes, exactly. 
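
Since the splits in this thread are created with PCFieldSplitSetIS, a short sketch of naming them explicitly; the index sets are assumed to exist and the names are only an example. A named split shows up in the per-split option prefix, e.g. -coupledsolve_fieldsplit_u_pc_type ml.

#include <petscksp.h>

PetscErrorCode DefineVelocityPressureSplits(PC pc, IS isu, IS isv, IS isw, IS isp)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "u", isu);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "v", isv);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "w", isw);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "p", isp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
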
> > >> >> > >> > - A_pp is defined as the matrix resulting from the >> > discretization of the >> > pressure equation that considers only the pressure related >> > terms. >> > >> > >> > Hmm okay, i assumed for incompressible NS the pressure equation >> > >> > that the pressure equation would be just \div(u) = 0. >> > >> >> Indeed, many finite element(!) formulations I found while researching use >> this approach, which leads to the block A_pp being zero. I however use a >> collocated finite volume formulation and, to avoid checkerboarding of the >> pressure field, I deploy a pressure weighted interpolation method to >> approximate the velocities surging from the discretisation of \div{u}. >> This gives me an equation with the pressure as the dominant variable. >> > > Ah okay, you stabilize the discretisation. > Now I understand why you have entries in the PP block. > > Cheers > Dave > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ansp6066 at colorado.edu Thu Feb 5 12:45:34 2015 From: ansp6066 at colorado.edu (Andrew Spott) Date: Thu, 05 Feb 2015 10:45:34 -0800 (PST) Subject: [petsc-users] SLEPc choosing inner product Message-ID: <1423161934325.aaaf3c38@Nodemailer> If I have some inner matrix upon which I want the eigensolver context to check orthogonality and norm, how do I set that? Is it: ? ? BV bv; ? ? EPSGetBV( e, &bv ); ? ? BVSetMatrix( bv, InnerProductMatrix, PETSC_FALSE ); This doesn?t appear to be changing the norm that is being used. ?Is the norm used for the eigensolver context when normalizing the eigenvectors always just the 2norm? -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Feb 5 12:54:24 2015 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 5 Feb 2015 19:54:24 +0100 Subject: [petsc-users] SLEPc choosing inner product In-Reply-To: <1423161934325.aaaf3c38@Nodemailer> References: <1423161934325.aaaf3c38@Nodemailer> Message-ID: <295C9900-82F7-4C76-9787-13F21AE4366A@dsic.upv.es> El 05/02/2015, a las 19:45, Andrew Spott escribi?: > If I have some inner matrix upon which I want the eigensolver context to check orthogonality and norm, how do I set that? > > Is it: > > BV bv; > EPSGetBV( e, &bv ); > BVSetMatrix( bv, InnerProductMatrix, PETSC_FALSE ); > > This doesn?t appear to be changing the norm that is being used. Is the norm used for the eigensolver context when normalizing the eigenvectors always just the 2norm? > > -Andrew > In normal usage, the user should not mess with the BV object. If you set the EPS problem type to GHEP then the solver (at least the default one) will use B-innerproducts so that eigenvectors with satisfy the B-orthogonality condition. Also, in this case EPS should provide eigenvectors with unit B-norm. So InnerProductMatrix should be the B matrix of your problem Ax=\lambda Bx. Is this what you need to do? If you want to check orthogonality a posteriori, call SlepcCheckOrthogonality with the B matrix. Jose From chaw0023 at umn.edu Thu Feb 5 15:33:29 2015 From: chaw0023 at umn.edu (Saurabh Chawdhary) Date: Thu, 05 Feb 2015 15:33:29 -0600 Subject: [petsc-users] How to load IS from file Message-ID: <54D3E1A9.9020508@umn.edu> Hi guys, With ISView we can dump the IS into a viewer and file and save it. But how can we load the IS back from file into the code. 
ISLoad is a function in current petsc version 3.5 but it doesn't exist in version 3.4.5. Is there an alternate way to di the job? All we want is the ability to save an IS in current program and read it back later. How can this be done in Petsc 3.4.5? Help! Help! Thanks, Saurabh From knepley at gmail.com Thu Feb 5 15:38:27 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Feb 2015 15:38:27 -0600 Subject: [petsc-users] How to load IS from file In-Reply-To: <54D3E1A9.9020508@umn.edu> References: <54D3E1A9.9020508@umn.edu> Message-ID: On Thu, Feb 5, 2015 at 3:33 PM, Saurabh Chawdhary wrote: > Hi guys, > With ISView we can dump the IS into a viewer and file and save it. But how > can we load the IS back from file into the code. > ISLoad is a function in current petsc version 3.5 but it doesn't exist in > version 3.4.5. Is there an alternate way to di the job? > All we want is the ability to save an IS in current program and read it > back later. How can this be done in Petsc 3.4.5? > Help! Help! > You should really upgrade. You could use ISGetIndices() and PetscBinaryWrite() yourself to write the data, but you will have to manage the parallelism yourself. Matt > Thanks, > Saurabh > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabel.fabian at gmail.com Thu Feb 5 16:15:13 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Thu, 05 Feb 2015 23:15:13 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> Message-ID: <1423174513.3627.1.camel@gmail.com> Thank you for your feedback. > -coupledsolve_pc_type fieldsplit > -coupledsolve_pc_fieldsplit_0_fields 0,1,2 > -coupledsolve_pc_fieldsplit_1_fields 3 > -coupledsolve_pc_fieldsplit_type schur > -coupledsolve_pc_fieldsplit_block_size 4 > -coupledsolve_fieldsplit_0_ksp_converged_reason > -coupledsolve_fieldsplit_1_ksp_converged_reason > -coupledsolve_fieldsplit_0_ksp_type gmres > -coupledsolve_fieldsplit_0_pc_type fieldsplit > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml > -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml > -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml > > Is it normal, that I have to explicitly specify the block size > for each > fieldsplit? > > > No. You should be able to just specify > > > -coupledsolve_fieldsplit_ksp_converged > -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml > > > and same options will be applied to all splits (0,1,2). > > Does this functionality not work? > > It does work indeed, but what I actually was referring to, was the use of -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 Without them, I get the error message [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468 in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c Unhandled case, must have at least two fields, not 1 I thought PETSc would already know, what I want to do, since I initialized the fieldsplit with CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) etc. > > > Are there any guidelines to follow that I could use to avoid > taking wild > guesses? > > > Sure. 
There are lots of papers published on how to construct robust > block preconditioners for saddle point problems arising from Navier > Stokes. > I would start by looking at this book: > > > Finite Elements and Fast Iterative Solvers > > Howard Elman, David Silvester and Andy Wathen > > Oxford University Press > > See chapters 6 and 8. > As a matter of fact I spent the last days digging through papers on the regard of preconditioners or approximate Schur complements and the names Elman and Silvester have come up quite often. The problem I experience is, that, except for one publication, all the other ones I checked deal with finite element formulations. Only Klaij, C. and Vuik, C. SIMPLE-type preconditioners for cell-centered, colocated finite volume discretization of incompressible Reynolds-averaged Navier?Stokes equations presented an approach for finite volume methods. Furthermore, a lot of literature is found on saddle point problems, since the linear system from stable finite element formulations comes with a 0 block as pressure matrix. I'm not sure how I can benefit from the work that has already been done for finite element methods, since I neither use finite elements nor I am trying to solve a saddle point problem (?). > > > Petsc has some support to generate approximate pressure > schur > > complements for you, but these will not be as good as the > ones > > specifically constructed for you particular discretization. > > I came across a tutorial (/snes/examples/tutorials/ex70.c), > which shows > 2 different approaches: > > 1- provide a Preconditioner \hat{S}p for the approximation of > the true > Schur complement > > 2- use another Matrix (in this case its the Matrix used for > constructing > the preconditioner in the former approach) as a new > approximation of the > Schur complement. > > Speaking in terms of the PETSc-manual p.87, looking at the > factorization > of the Schur field split preconditioner, approach 1 sets > \hat{S}p while > approach 2 furthermore sets \hat{S}. Is this correct? > > > > No this is not correct. > \hat{S} is always constructed by PETSc as > \hat{S} = A11 - A10 KSP(A00) A01 But then what happens in this line from the tutorial /snes/examples/tutorials/ex70.c ierr = KSPSetOperators(subksp[1], s->myS, s->myS);CHKERRQ(ierr); It think the approximate Schur complement a (Matrix of type Schur) gets replaced by an explicitely formed Matrix (myS, of type MPIAIJ). > > You have two choices in how to define the preconditioned, \hat{S_p}: > > [1] Assemble you own matrix (as is done in ex70) > > [2] Let PETSc build one. PETSc does this according to > > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > Regards, Fabian > > From knepley at gmail.com Thu Feb 5 16:45:48 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Feb 2015 16:45:48 -0600 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: <1423174513.3627.1.camel@gmail.com> References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> <1423174513.3627.1.camel@gmail.com> Message-ID: On Thu, Feb 5, 2015 at 4:15 PM, Fabian Gabel wrote: > Thank you for your feedback. 
> > > -coupledsolve_pc_type fieldsplit > > -coupledsolve_pc_fieldsplit_0_fields 0,1,2 > > -coupledsolve_pc_fieldsplit_1_fields 3 > > -coupledsolve_pc_fieldsplit_type schur > > -coupledsolve_pc_fieldsplit_block_size 4 > > -coupledsolve_fieldsplit_0_ksp_converged_reason > > -coupledsolve_fieldsplit_1_ksp_converged_reason > > -coupledsolve_fieldsplit_0_ksp_type gmres > > -coupledsolve_fieldsplit_0_pc_type fieldsplit > > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > > -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml > > -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml > > -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml > > > > Is it normal, that I have to explicitly specify the block size > > for each > > fieldsplit? > > > > > > No. You should be able to just specify > > > > > > -coupledsolve_fieldsplit_ksp_converged > > -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml > > > > > > and same options will be applied to all splits (0,1,2). > > > > Does this functionality not work? > > > > > It does work indeed, but what I actually was referring to, was the use > of > > -coupledsolve_pc_fieldsplit_block_size 4 > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > > Without them, I get the error message > > [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468 > in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c Unhandled > case, must have at least two fields, not 1 > > I thought PETSc would already know, what I want to do, since I > initialized the fieldsplit with > > CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) > > etc. > > > > > > > Are there any guidelines to follow that I could use to avoid > > taking wild > > guesses? > > > > > > Sure. There are lots of papers published on how to construct robust > > block preconditioners for saddle point problems arising from Navier > > Stokes. > > I would start by looking at this book: > > > > > > Finite Elements and Fast Iterative Solvers > > > > Howard Elman, David Silvester and Andy Wathen > > > > Oxford University Press > > > > See chapters 6 and 8. > > > As a matter of fact I spent the last days digging through papers on the > regard of preconditioners or approximate Schur complements and the names > Elman and Silvester have come up quite often. > > The problem I experience is, that, except for one publication, all the > other ones I checked deal with finite element formulations. Only > > Klaij, C. and Vuik, C. SIMPLE-type preconditioners for cell-centered, > colocated finite volume discretization of incompressible > Reynolds-averaged Navier?Stokes equations > > presented an approach for finite volume methods. Furthermore, a lot of > literature is found on saddle point problems, since the linear system > from stable finite element formulations comes with a 0 block as pressure > matrix. I'm not sure how I can benefit from the work that has already > been done for finite element methods, since I neither use finite > elements nor I am trying to solve a saddle point problem (?). > I believe the operator estimates for FV are very similar to first order FEM, and I believe that you do have a saddle-point system in that there are both positive and negative eigenvalues. Thanks, Matt > > > > > Petsc has some support to generate approximate pressure > > schur > > > complements for you, but these will not be as good as the > > ones > > > specifically constructed for you particular discretization. 
> > > > I came across a tutorial (/snes/examples/tutorials/ex70.c), > > which shows > > 2 different approaches: > > > > 1- provide a Preconditioner \hat{S}p for the approximation of > > the true > > Schur complement > > > > 2- use another Matrix (in this case its the Matrix used for > > constructing > > the preconditioner in the former approach) as a new > > approximation of the > > Schur complement. > > > > Speaking in terms of the PETSc-manual p.87, looking at the > > factorization > > of the Schur field split preconditioner, approach 1 sets > > \hat{S}p while > > approach 2 furthermore sets \hat{S}. Is this correct? > > > > > > > > No this is not correct. > > \hat{S} is always constructed by PETSc as > > \hat{S} = A11 - A10 KSP(A00) A01 > > But then what happens in this line from the > tutorial /snes/examples/tutorials/ex70.c > > ierr = KSPSetOperators(subksp[1], s->myS, s->myS);CHKERRQ(ierr); > > It think the approximate Schur complement a (Matrix of type Schur) gets > replaced by an explicitely formed Matrix (myS, of type MPIAIJ). > > > > You have two choices in how to define the preconditioned, \hat{S_p}: > > > > [1] Assemble you own matrix (as is done in ex70) > > > > [2] Let PETSc build one. PETSc does this according to > > > > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > > > Regards, > Fabian > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sghosh2012 at gatech.edu Thu Feb 5 17:11:44 2015 From: sghosh2012 at gatech.edu (Ghosh, Swarnava) Date: Thu, 5 Feb 2015 18:11:44 -0500 (EST) Subject: [petsc-users] Large rectangular Dense Transpose multiplication with sparse In-Reply-To: <1016504175.155142.1423176136966.JavaMail.root@mail.gatech.edu> Message-ID: <1631884364.163440.1423177904614.JavaMail.root@mail.gatech.edu> Dear all, I am trying to compute matrices A = transpose(R)*H*R and M = transpose(R)*R where H is a sparse (banded) matrix in MATMPIAIJ format (5 million x 5 million total size) R is a MPI dense matrix of size 5 million x 2000. I tried 1) MatPtAP - Failed, realized this only works for pairs of AIJ matrices 2) First multiplying H*R and storing in 5 million x 2000 MPI dense. Then MatTranspose of R and multiplying the transposed R with 5 million x 2000 dense. This multiplication fails. Could someone please suggest a way of doing this. Regards, Swarnava -- Swarnava Ghosh From bhatiamanav at gmail.com Thu Feb 5 17:47:39 2015 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Thu, 5 Feb 2015 17:47:39 -0600 Subject: [petsc-users] Direct solvers Message-ID: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com> Hi, I am trying to use an lu decomposition method for a relatively large matrix (~775,000 dofs) coming from a thermoelasticity problem. For the past few weeks, LU solver in 3.5.1 has been solving it just fine. I just upgraded to 3.5.2 from macports (running on Mac OS 10.10.2), and am getting the following ?out of memory" error sab_old_mast_structural_analysis(378,0x7fff75f6e300) malloc: *** mach_vm_map(size=18446744066373115904) failed (error code=3) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Out of memory. 
This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. [0]PETSC ERROR: Memory allocated 3649788624 Memory used by process 3943817216 [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [0]PETSC ERROR: Memory requested 18446744066373113856 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.5.2, unknown [0]PETSC ERROR: ./sab_old_mast_structural_analysis on a arch-macports named ws243-49.walker.dynamic.msstate.edu by manav Thu Feb 5 17:30:18 2015 [0]PETSC ERROR: Configure options --prefix=/opt/local --prefix=/opt/local/lib/petsc --with-valgrind=0 --with-shared-libraries --with-c2html-dir=/opt/local --with-x=0 --with-blas-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate --with-hwloc-dir=/opt/local --with-suitesparse-dir=/opt/local --with-superlu-dir=/opt/local --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local CC=/opt/local/bin/mpicc-openmpi-mp CXX=/opt/local/bin/mpicxx-openmpi-mp FC=/opt/local/bin/mpif90-openmpi-mp F77=/opt/local/bin/mpif90-openmpi-mp F90=/opt/local/bin/mpif90-openmpi-mp COPTFLAGS=-Os CXXOPTFLAGS=-Os FOPTFLAGS=-Os LDFLAGS="-L/opt/local/lib -Wl,-headerpad_max_install_names" CPPFLAGS=-I/opt/local/include CFLAGS="-Os -arch x86_64" CXXFLAGS=-Os FFLAGS=-Os FCFLAGS=-Os F90FLAGS=-Os PETSC_ARCH=arch-macports --with-mpiexec=mpiexec-openmpi-mp [0]PETSC ERROR: #1 PetscMallocAlign() line 46 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/sys/memory/mal.c [0]PETSC ERROR: #2 PetscTrMallocDefault() line 184 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/sys/memory/mtr.c [0]PETSC ERROR: #3 PetscFreeSpaceGet() line 13 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/utils/freespace.c [0]PETSC ERROR: #4 MatLUFactorSymbolic_SeqAIJ() line 362 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: #5 MatLUFactorSymbolic() line 2842 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/interface/matrix.c [0]PETSC ERROR: #6 PCSetUp_LU() line 127 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: #7 PCSetUp() line 902 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #8 KSPSetUp() line 305 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #9 KSPSolve() line 417 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 232 in 
/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/snes/impls/ls/ls.c
[0]PETSC ERROR: #11 SNESSolve() line 3743 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/snes/interface/snes.c
[0]PETSC ERROR: #12 solve() line 559 in src/solvers/petsc_nonlinear_solver.C
--------------------------------------------------------------------------

A few questions:

- Has something changed between 3.5.1 and 3.5.2 that might lead to this behavior?

- So far I have tried the following iterative solver option: -pc_type ilu -pc_factor_levels 1 (and 2) with very slow convergence. Is there a better preconditioner recommended for this problem? This is a solid mechanics problem with thermal load (not a coupled thermal-structural problem).

- I tried using MUMPS through the option -pc_factor_mat_solver_package mumps -mat_mumps_icntl_ 22 1 -mat_mumps_icntl_ 23 8000 to try to get it to use the disk I/O and limit the memory to 8GB, but that too returned with an out of memory error. Is this the correct format to specify the options? If so, is the write to disk option expected to work with MUMPS called via petsc?

I would greatly appreciate your inputs.

Thanks,
Manav
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Thu Feb  5 17:55:02 2015
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 5 Feb 2015 17:55:02 -0600
Subject: Re: [petsc-users] Direct solvers
In-Reply-To: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com>
References: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com>
Message-ID: 

On Thu, Feb 5, 2015 at 5:47 PM, Manav Bhatia wrote:

> Hi,
>
> I am trying to use an lu decomposition method for a relatively large
> matrix (~775,000 dofs) coming from a thermoelasticity problem.
>
> For the past few weeks, LU solver in 3.5.1 has been solving it just
> fine. I just upgraded to 3.5.2 from macports (running on Mac OS 10.10.2),
> and am getting the following "out of memory" error
>
> sab_old_mast_structural_analysis(378,0x7fff75f6e300) malloc: ***
> mach_vm_map(size=18446744066373115904) failed (error code=3)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> [0]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [0]PETSC ERROR: Out of memory. This could be due to allocating
> [0]PETSC ERROR: too large an object or bleeding by not properly
> [0]PETSC ERROR: destroying unneeded objects.
> [0]PETSC ERROR: Memory allocated 3649788624 Memory used by process
> 3943817216
> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
> [0]PETSC ERROR: Memory requested 18446744066373113856
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.5.2, unknown > [0]PETSC ERROR: ./sab_old_mast_structural_analysis on a arch-macports > named ws243-49.walker.dynamic.msstate.edu by manav Thu Feb 5 17:30:18 > 2015 > [0]PETSC ERROR: Configure options --prefix=/opt/local > --prefix=/opt/local/lib/petsc --with-valgrind=0 --with-shared-libraries > --with-c2html-dir=/opt/local --with-x=0 > --with-blas-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate > --with-hwloc-dir=/opt/local --with-suitesparse-dir=/opt/local > --with-superlu-dir=/opt/local --with-metis-dir=/opt/local > --with-parmetis-dir=/opt/local --with-scalapack-dir=/opt/local > --with-mumps-dir=/opt/local CC=/opt/local/bin/mpicc-openmpi-mp > CXX=/opt/local/bin/mpicxx-openmpi-mp FC=/opt/local/bin/mpif90-openmpi-mp > F77=/opt/local/bin/mpif90-openmpi-mp F90=/opt/local/bin/mpif90-openmpi-mp > COPTFLAGS=-Os CXXOPTFLAGS=-Os FOPTFLAGS=-Os LDFLAGS="-L/opt/local/lib > -Wl,-headerpad_max_install_names" CPPFLAGS=-I/opt/local/include CFLAGS="-Os > -arch x86_64" CXXFLAGS=-Os FFLAGS=-Os FCFLAGS=-Os F90FLAGS=-Os > PETSC_ARCH=arch-macports --with-mpiexec=mpiexec-openmpi-mp > [0]PETSC ERROR: #1 PetscMallocAlign() line 46 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/sys/memory/mal.c > [0]PETSC ERROR: #2 PetscTrMallocDefault() line 184 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/sys/memory/mtr.c > [0]PETSC ERROR: #3 PetscFreeSpaceGet() line 13 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/utils/freespace.c > [0]PETSC ERROR: #4 MatLUFactorSymbolic_SeqAIJ() line 362 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: #5 MatLUFactorSymbolic() line 2842 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/interface/matrix.c > [0]PETSC ERROR: #6 PCSetUp_LU() line 127 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: #7 PCSetUp() line 902 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #8 KSPSetUp() line 305 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #9 KSPSolve() line 417 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 232 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #11 SNESSolve() line 3743 in > /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/snes/interface/snes.c > [0]PETSC ERROR: #12 solve() line 559 in > 
src/solvers/petsc_nonlinear_solver.C > -------------------------------------------------------------------------- > > > A few questions: > > ? has something changed between 3.5.1 and 3.5.2 that might lead to this > behavior? > I do not see anything: http://www.mcs.anl.gov/petsc/documentation/changes/32.html You should upgrade to the latest release. Then we can start improving it. > ? So far I have tried the following iterative solver option: -pc_type ilu > -pc_factor_levels 1 (and 2) with very slow convergence. Is there a better > preconditioner recommended for this problem? This is a solid mechanics > problem with thermal load (not a coupled thermal-structural probelm). > With the latest release, you should try -pc_type gamg. > ? I tried using MUMPS through the option -pc_factor_mat_solver_package > mumps -mat_mumps_icntl_ 22 1 -mat_mumps_icntl_ 23 8000 to try to get it to > use the disk I/O and limit the memory to 8GB, but that too returned with an > out of memory error. Is this the correct format to specify the options? If > so, is the write to disk option expected to work with MUMPS called via > petsc? > Send the output of -ksp_view so we can see exactly what it is doing. Also I would also try SuperLU. Thanks, Matt > I would greatly appreciate your inputs. > > Thanks, > Manav > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabel.fabian at gmail.com Thu Feb 5 18:15:37 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Fri, 06 Feb 2015 01:15:37 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> <1423174513.3627.1.camel@gmail.com> Message-ID: <1423181737.3627.3.camel@gmail.com> On Do, 2015-02-05 at 16:45 -0600, Matthew Knepley wrote: > On Thu, Feb 5, 2015 at 4:15 PM, Fabian Gabel > wrote: > Thank you for your feedback. > > > -coupledsolve_pc_type fieldsplit > > -coupledsolve_pc_fieldsplit_0_fields 0,1,2 > > -coupledsolve_pc_fieldsplit_1_fields 3 > > -coupledsolve_pc_fieldsplit_type schur > > -coupledsolve_pc_fieldsplit_block_size 4 > > -coupledsolve_fieldsplit_0_ksp_converged_reason > > -coupledsolve_fieldsplit_1_ksp_converged_reason > > -coupledsolve_fieldsplit_0_ksp_type gmres > > -coupledsolve_fieldsplit_0_pc_type fieldsplit > > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size > 3 > > -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml > > -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml > > -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml > > > > Is it normal, that I have to explicitly specify the > block size > > for each > > fieldsplit? > > > > > > No. You should be able to just specify > > > > > > -coupledsolve_fieldsplit_ksp_converged > > -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml > > > > > > and same options will be applied to all splits (0,1,2). > > > > Does this functionality not work? 
> > > > > It does work indeed, but what I actually was referring to, was > the use > of > > -coupledsolve_pc_fieldsplit_block_size 4 > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > > Without them, I get the error message > > [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468 > in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c > Unhandled > case, must have at least two fields, not 1 > > I thought PETSc would already know, what I want to do, since I > initialized the fieldsplit with > > CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) > > etc. > > > > > > > Are there any guidelines to follow that I could use > to avoid > > taking wild > > guesses? > > > > > > Sure. There are lots of papers published on how to construct > robust > > block preconditioners for saddle point problems arising from > Navier > > Stokes. > > I would start by looking at this book: > > > > > > Finite Elements and Fast Iterative Solvers > > > > Howard Elman, David Silvester and Andy Wathen > > > > Oxford University Press > > > > See chapters 6 and 8. > > > As a matter of fact I spent the last days digging through > papers on the > regard of preconditioners or approximate Schur complements and > the names > Elman and Silvester have come up quite often. > > The problem I experience is, that, except for one publication, > all the > other ones I checked deal with finite element formulations. > Only > > Klaij, C. and Vuik, C. SIMPLE-type preconditioners for > cell-centered, > colocated finite volume discretization of incompressible > Reynolds-averaged Navier?Stokes equations > > presented an approach for finite volume methods. Furthermore, > a lot of > literature is found on saddle point problems, since the linear > system > from stable finite element formulations comes with a 0 block > as pressure > matrix. I'm not sure how I can benefit from the work that has > already > been done for finite element methods, since I neither use > finite > elements nor I am trying to solve a saddle point problem (?). > > > I believe the operator estimates for FV are very similar to first > order FEM, Ok, so you would suggest to just discretize the operators differently (FVM instead of FEM discretization)? > and > I believe that you do have a saddle-point system in that there are > both positive > and negative eigenvalues. A first test on a small system in Matlab shows, that my system matrix is positive semi-definite but I am not sure how this result could be derived in general form from the discretization approach I used. > > > Thanks, > > > Matt > > > > > > Petsc has some support to generate approximate > pressure > > schur > > > complements for you, but these will not be as good > as the > > ones > > > specifically constructed for you particular > discretization. > > > > I came across a tutorial > (/snes/examples/tutorials/ex70.c), > > which shows > > 2 different approaches: > > > > 1- provide a Preconditioner \hat{S}p for the > approximation of > > the true > > Schur complement > > > > 2- use another Matrix (in this case its the Matrix > used for > > constructing > > the preconditioner in the former approach) as a new > > approximation of the > > Schur complement. > > > > Speaking in terms of the PETSc-manual p.87, looking > at the > > factorization > > of the Schur field split preconditioner, approach 1 > sets > > \hat{S}p while > > approach 2 furthermore sets \hat{S}. Is this > correct? > > > > > > > > No this is not correct. 
> > \hat{S} is always constructed by PETSc as > > \hat{S} = A11 - A10 KSP(A00) A01 > > But then what happens in this line from the > tutorial /snes/examples/tutorials/ex70.c > > ierr = KSPSetOperators(subksp[1], s->myS, > s->myS);CHKERRQ(ierr); > > It think the approximate Schur complement a (Matrix of type > Schur) gets > replaced by an explicitely formed Matrix (myS, of type > MPIAIJ). > > > > You have two choices in how to define the preconditioned, > \hat{S_p}: > > > > [1] Assemble you own matrix (as is done in ex70) > > > > [2] Let PETSc build one. PETSc does this according to > > > > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > > > Regards, > Fabian > > > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener From knepley at gmail.com Thu Feb 5 18:53:58 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Feb 2015 18:53:58 -0600 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: <1423181737.3627.3.camel@gmail.com> References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> <1423174513.3627.1.camel@gmail.com> <1423181737.3627.3.camel@gmail.com> Message-ID: On Thu, Feb 5, 2015 at 6:15 PM, Fabian Gabel wrote: > On Do, 2015-02-05 at 16:45 -0600, Matthew Knepley wrote: > > On Thu, Feb 5, 2015 at 4:15 PM, Fabian Gabel > > wrote: > > Thank you for your feedback. > > > > > -coupledsolve_pc_type fieldsplit > > > -coupledsolve_pc_fieldsplit_0_fields 0,1,2 > > > -coupledsolve_pc_fieldsplit_1_fields 3 > > > -coupledsolve_pc_fieldsplit_type schur > > > -coupledsolve_pc_fieldsplit_block_size 4 > > > -coupledsolve_fieldsplit_0_ksp_converged_reason > > > -coupledsolve_fieldsplit_1_ksp_converged_reason > > > -coupledsolve_fieldsplit_0_ksp_type gmres > > > -coupledsolve_fieldsplit_0_pc_type fieldsplit > > > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size > > 3 > > > -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml > > > -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml > > > -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml > > > > > > Is it normal, that I have to explicitly specify the > > block size > > > for each > > > fieldsplit? > > > > > > > > > No. You should be able to just specify > > > > > > > > > -coupledsolve_fieldsplit_ksp_converged > > > -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml > > > > > > > > > and same options will be applied to all splits (0,1,2). > > > > > > Does this functionality not work? > > > > > > > > It does work indeed, but what I actually was referring to, was > > the use > > of > > > > -coupledsolve_pc_fieldsplit_block_size 4 > > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > > > > Without them, I get the error message > > > > [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468 > > in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c > > Unhandled > > case, must have at least two fields, not 1 > > > > I thought PETSc would already know, what I want to do, since I > > initialized the fieldsplit with > > > > CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) > > > > etc. > > > > > > > > > > > Are there any guidelines to follow that I could use > > to avoid > > > taking wild > > > guesses? > > > > > > > > > Sure. 
There are lots of papers published on how to construct > > robust > > > block preconditioners for saddle point problems arising from > > Navier > > > Stokes. > > > I would start by looking at this book: > > > > > > > > > Finite Elements and Fast Iterative Solvers > > > > > > Howard Elman, David Silvester and Andy Wathen > > > > > > Oxford University Press > > > > > > See chapters 6 and 8. > > > > > As a matter of fact I spent the last days digging through > > papers on the > > regard of preconditioners or approximate Schur complements and > > the names > > Elman and Silvester have come up quite often. > > > > The problem I experience is, that, except for one publication, > > all the > > other ones I checked deal with finite element formulations. > > Only > > > > Klaij, C. and Vuik, C. SIMPLE-type preconditioners for > > cell-centered, > > colocated finite volume discretization of incompressible > > Reynolds-averaged Navier?Stokes equations > > > > presented an approach for finite volume methods. Furthermore, > > a lot of > > literature is found on saddle point problems, since the linear > > system > > from stable finite element formulations comes with a 0 block > > as pressure > > matrix. I'm not sure how I can benefit from the work that has > > already > > been done for finite element methods, since I neither use > > finite > > elements nor I am trying to solve a saddle point problem (?). > > > > > > I believe the operator estimates for FV are very similar to first > > order FEM, > > Ok, so you would suggest to just discretize the operators differently > (FVM instead of FEM discretization)? > I thought you were using FV. > > and > > I believe that you do have a saddle-point system in that there are > > both positive > > and negative eigenvalues. > > A first test on a small system in Matlab shows, that my system matrix is > positive semi-definite but I am not sure how this result could be > derived in general form from the discretization approach I used. > You can always make it definite by adding a large enough A_pp. I thought the penalization would be small. Thanks, Matt > > > > Thanks, > > > > > > Matt > > > > > > > > > Petsc has some support to generate approximate > > pressure > > > schur > > > > complements for you, but these will not be as good > > as the > > > ones > > > > specifically constructed for you particular > > discretization. > > > > > > I came across a tutorial > > (/snes/examples/tutorials/ex70.c), > > > which shows > > > 2 different approaches: > > > > > > 1- provide a Preconditioner \hat{S}p for the > > approximation of > > > the true > > > Schur complement > > > > > > 2- use another Matrix (in this case its the Matrix > > used for > > > constructing > > > the preconditioner in the former approach) as a new > > > approximation of the > > > Schur complement. > > > > > > Speaking in terms of the PETSc-manual p.87, looking > > at the > > > factorization > > > of the Schur field split preconditioner, approach 1 > > sets > > > \hat{S}p while > > > approach 2 furthermore sets \hat{S}. Is this > > correct? > > > > > > > > > > > > No this is not correct. > > > \hat{S} is always constructed by PETSc as > > > \hat{S} = A11 - A10 KSP(A00) A01 > > > > But then what happens in this line from the > > tutorial /snes/examples/tutorials/ex70.c > > > > ierr = KSPSetOperators(subksp[1], s->myS, > > s->myS);CHKERRQ(ierr); > > > > It think the approximate Schur complement a (Matrix of type > > Schur) gets > > replaced by an explicitely formed Matrix (myS, of type > > MPIAIJ). 
> > > > > > You have two choices in how to define the preconditioned, > > \hat{S_p}: > > > > > > [1] Assemble you own matrix (as is done in ex70) > > > > > > [2] Let PETSc build one. PETSc does this according to > > > > > > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > > > > > Regards, > > Fabian > > > > > > > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Feb 5 19:22:13 2015 From: hzhang at mcs.anl.gov (Hong) Date: Thu, 5 Feb 2015 19:22:13 -0600 Subject: [petsc-users] Large rectangular Dense Transpose multiplication with sparse In-Reply-To: <1631884364.163440.1423177904614.JavaMail.root@mail.gatech.edu> References: <1016504175.155142.1423176136966.JavaMail.root@mail.gatech.edu> <1631884364.163440.1423177904614.JavaMail.root@mail.gatech.edu> Message-ID: Swarnava: The matrix product A will be a dense matrix. You may consider using Elemental package for such matrix product. Hong > > Dear all, > > I am trying to compute matrices A = transpose(R)*H*R and M = > transpose(R)*R where > H is a sparse (banded) matrix in MATMPIAIJ format (5 million x 5 million > total size) > R is a MPI dense matrix of size 5 million x 2000. > > I tried 1) MatPtAP - Failed, realized this only works for pairs of AIJ > matrices > 2) First multiplying H*R and storing in 5 million x 2000 MPI > dense. Then MatTranspose of R and multiplying the transposed R with 5 > million x 2000 dense. This multiplication fails. > > Could someone please suggest a way of doing this. > > Regards, > Swarnava > -- > Swarnava Ghosh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sghosh2012 at gatech.edu Thu Feb 5 23:05:15 2015 From: sghosh2012 at gatech.edu (Ghosh, Swarnava) Date: Fri, 6 Feb 2015 00:05:15 -0500 (EST) Subject: [petsc-users] Large rectangular Dense Transpose multiplication with sparse In-Reply-To: Message-ID: <1800358459.252311.1423199115535.JavaMail.root@mail.gatech.edu> Hong, Thanks for the suggestion of Elemental. However I need to have the dense matrix have the same parallel communicator as the sparse matrix. If I use elemental, then the numbering changes, will I be able to multiply with the sparse matrix which has a different numbering scheme and communicator? In any case I would want the resultant matrices A and M to be elemental type since I need to solve a dense eigenvalue problem. Regards, Swarnava ----- Original Message ----- From: "Hong" To: "Swarnava Ghosh" Cc: "PETSc users list" Sent: Thursday, February 5, 2015 8:22:13 PM Subject: Re: [petsc-users] Large rectangular Dense Transpose multiplication with sparse Swarnava: The matrix product A will be a dense matrix. You may consider using Elemental package for such matrix product. Hong Dear all, I am trying to compute matrices A = transpose(R)*H*R and M = transpose(R)*R where H is a sparse (banded) matrix in MATMPIAIJ format (5 million x 5 million total size) R is a MPI dense matrix of size 5 million x 2000. I tried 1) MatPtAP - Failed, realized this only works for pairs of AIJ matrices 2) First multiplying H*R and storing in 5 million x 2000 MPI dense. 
Then MatTranspose of R and multiplying the transposed R with 5 million x 2000 dense. This multiplication fails.

Could someone please suggest a way of doing this.

Regards,
Swarnava
--
Swarnava Ghosh

-- 
Swarnava Ghosh
PhD Candidate,
Structural Engineering, Mechanics and Materials
School of Civil and Environmental Engineering
Georgia Institute of Technology
Atlanta, GA 30332
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov  Thu Feb  5 23:18:54 2015
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 5 Feb 2015 23:18:54 -0600
Subject: Re: [petsc-users] Direct solvers
In-Reply-To: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com>
References: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com>
Message-ID: 

> On Feb 5, 2015, at 5:47 PM, Manav Bhatia wrote:
>
> Hi,
>
> I am trying to use an lu decomposition method for a relatively large matrix (~775,000 dofs) coming from a thermoelasticity problem.
>
> For the past few weeks, LU solver in 3.5.1 has been solving it just fine. I just upgraded to 3.5.2 from macports (running on Mac OS 10.10.2), and am getting the following "out of memory" error

This is surprising. The changes from 3.5.1 to 3.5.2 are supposed to be only minor bug fixes. Is the code otherwise __exactly__ the same with the same options? Are all the external libraries exactly the same in both cases?

> Memory allocated 3649788624 Memory used by process 3943817216
> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
> [0]PETSC ERROR: Memory requested 18446744066373113856
                  ^^^^^^^^^^^^^^^^^^^
This memory size here is truly absurd. If you have access to a Linux system I suggest running the program with valgrind to see if there is memory corruption messing things up: http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

You can also try installing 3.5.2 directly from our tarball instead of macports and see if the same thing happens.

Barry

>
> sab_old_mast_structural_analysis(378,0x7fff75f6e300) malloc: *** mach_vm_map(size=18446744066373115904) failed (error code=3)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Out of memory. This could be due to allocating
> [0]PETSC ERROR: too large an object or bleeding by not properly
> [0]PETSC ERROR: destroying unneeded objects.
> [0]PETSC ERROR: Memory allocated 3649788624 Memory used by process 3943817216
> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
> [0]PETSC ERROR: Memory requested 18446744066373113856
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.5.2, unknown > [0]PETSC ERROR: ./sab_old_mast_structural_analysis on a arch-macports named ws243-49.walker.dynamic.msstate.edu by manav Thu Feb 5 17:30:18 2015 > [0]PETSC ERROR: Configure options --prefix=/opt/local --prefix=/opt/local/lib/petsc --with-valgrind=0 --with-shared-libraries --with-c2html-dir=/opt/local --with-x=0 --with-blas-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate --with-hwloc-dir=/opt/local --with-suitesparse-dir=/opt/local --with-superlu-dir=/opt/local --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local CC=/opt/local/bin/mpicc-openmpi-mp CXX=/opt/local/bin/mpicxx-openmpi-mp FC=/opt/local/bin/mpif90-openmpi-mp F77=/opt/local/bin/mpif90-openmpi-mp F90=/opt/local/bin/mpif90-openmpi-mp COPTFLAGS=-Os CXXOPTFLAGS=-Os FOPTFLAGS=-Os LDFLAGS="-L/opt/local/lib -Wl,-headerpad_max_install_names" CPPFLAGS=-I/opt/local/include CFLAGS="-Os -arch x86_64" CXXFLAGS=-Os FFLAGS=-Os FCFLAGS=-Os F90FLAGS=-Os PETSC_ARCH=arch-macports --with-mpiexec=mpiexec-openmpi-mp > [0]PETSC ERROR: #1 PetscMallocAlign() line 46 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/sys/memory/mal.c > [0]PETSC ERROR: #2 PetscTrMallocDefault() line 184 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/sys/memory/mtr.c > [0]PETSC ERROR: #3 PetscFreeSpaceGet() line 13 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/utils/freespace.c > [0]PETSC ERROR: #4 MatLUFactorSymbolic_SeqAIJ() line 362 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: #5 MatLUFactorSymbolic() line 2842 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/mat/interface/matrix.c > [0]PETSC ERROR: #6 PCSetUp_LU() line 127 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: #7 PCSetUp() line 902 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #8 KSPSetUp() line 305 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #9 KSPSolve() line 417 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 232 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #11 SNESSolve() line 3743 in /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.2/src/snes/interface/snes.c > [0]PETSC ERROR: #12 solve() line 559 in src/solvers/petsc_nonlinear_solver.C > 
-------------------------------------------------------------------------- > > > A few questions: > > ? has something changed between 3.5.1 and 3.5.2 that might lead to this behavior? > > ? So far I have tried the following iterative solver option: -pc_type ilu -pc_factor_levels 1 (and 2) with very slow convergence. Is there a better preconditioner recommended for this problem? This is a solid mechanics problem with thermal load (not a coupled thermal-structural probelm). > > ? I tried using MUMPS through the option -pc_factor_mat_solver_package mumps -mat_mumps_icntl_ 22 1 -mat_mumps_icntl_ 23 8000 to try to get it to use the disk I/O and limit the memory to 8GB, but that too returned with an out of memory error. Is this the correct format to specify the options? If so, is the write to disk option expected to work with MUMPS called via petsc? > > I would greatly appreciate your inputs. > > Thanks, > Manav > > > > From jed at jedbrown.org Thu Feb 5 23:23:43 2015 From: jed at jedbrown.org (Jed Brown) Date: Thu, 05 Feb 2015 22:23:43 -0700 Subject: [petsc-users] Direct solvers In-Reply-To: References: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com> Message-ID: <87ioff8vfk.fsf@jedbrown.org> Barry Smith writes: >> Memory allocated 3649788624 Memory used by process 3943817216 >> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >> [0]PETSC ERROR: Memory requested 18446744066373113856 > ^^^^^^^^^^^^^^^^^^^ > This memory size here is truly absurd. If you have access to a linux > system I suggest running the program with valgrind to see if there is > memory corruption messing things up > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind I've heard that Valgrind works on OSX 10.10. http://ranf.tl/2014/11/28/valgrind-on-mac-os-x-10-10-yosemite/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From dave.mayhem23 at gmail.com Fri Feb 6 01:35:20 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 6 Feb 2015 08:35:20 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: <1423174513.3627.1.camel@gmail.com> References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> <1423174513.3627.1.camel@gmail.com> Message-ID: > -coupledsolve_pc_fieldsplit_block_size 4 > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > > Without them, I get the error message > > [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468 > in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c Unhandled > case, must have at least two fields, not 1 > > I thought PETSc would already know, what I want to do, since I > initialized the fieldsplit with > > CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) > > Huh. If you called PCFieldSplitSetIS(), then certainly the FS knows you have four fields and you shouldn't need to set the block size 4 option. I know this is true as I use it all the time. When you start grouping new splits (in your case u,v,w) I would have also thought that the block size 3 option would also be redundant - however I use this option less frequently. > As a matter of fact I spent the last days digging through papers on the > regard of preconditioners or approximate Schur complements and the names > Elman and Silvester have come up quite often. 
> > The problem I experience is, that, except for one publication, all the > other ones I checked deal with finite element formulations. Only > > Klaij, C. and Vuik, C. SIMPLE-type preconditioners for cell-centered, > colocated finite volume discretization of incompressible > Reynolds-averaged Navier?Stokes equations > > presented an approach for finite volume methods. The exact same analysis applies directly to any stable mixed u-p discretization (I agree with all Matt's comments as well). If your stablilization term is doing what is supposed to, then your discretization should be stable. I don't know of any papers which show results using your exact discretization, but here are some preconditioning papers for Stokes which employ FV discretizations: @article{olshanskii1999iterative, title={An iterative solver for the Oseen problem and numerical solution of incompressible Navier--Stokes equations}, author={Olshanskii, Maxim A}, journal={Numerical linear algebra with applications}, volume={6}, number={5}, pages={353--378}, year={1999} } @article{olshanskii2004grad, title={Grad-div stablilization for Stokes equations}, author={Olshanskii, Maxim and Reusken, Arnold}, journal={Mathematics of Computation}, volume={73}, number={248}, pages={1699--1718}, year={2004} } @article{griffith2009accurate, title={An accurate and efficient method for the incompressible Navier--Stokes equations using the projection method as a preconditioner}, author={Griffith, Boyce E}, journal={Journal of Computational Physics}, volume={228}, number={20}, pages={7565--7595}, year={2009}, publisher={Elsevier} } @article{furuichi2011development, title={Development of a Stokes flow solver robust to large viscosity jumps using a Schur complement approach with mixed precision arithmetic}, author={Furuichi, Mikito and May, Dave A and Tackley, Paul J}, journal={Journal of Computational Physics}, volume={230}, number={24}, pages={8835--8851}, year={2011}, publisher={Elsevier} } @article{cai2013efficient, title={Efficient variable-coefficient finite-volume Stokes solvers}, author={Cai, Mingchao and Nonaka, AJ and Bell, John B and Griffith, Boyce E and Donev, Aleksandar}, journal={arXiv preprint arXiv:1308.4605}, year={2013} } Furthermore, a lot of > literature is found on saddle point problems, since the linear system > from stable finite element formulations comes with a 0 block as pressure > matrix. I'm not sure how I can benefit from the work that has already > been done for finite element methods, since I neither use finite > elements nor I am trying to solve a saddle point problem (?). > I would say you are trying to solve a saddle point system, only one which has been stabilized. I expect your stabilization term should vanish in the limit of h -> 0. The block preconditioners are directly applicable to what you are doing, as are all the issues associated with building preconditioners for schur complement approximations. I have used FS based preconditioners for stablized Q1-Q1 finite element discretizations for Stokes problems. Despite the stabilization term in the p-p coupling term, saddle point preconditioning techniques are appropriate. There are examples of this in src/ksp/ksp/examples/tutorials - see ex43.c ex42.c > But then what happens in this line from the > tutorial /snes/examples/tutorials/ex70.c > > ierr = KSPSetOperators(subksp[1], s->myS, s->myS);CHKERRQ(ierr); > > It think the approximate Schur complement a (Matrix of type Schur) gets > replaced by an explicitely formed Matrix (myS, of type MPIAIJ). 
> Oh yes, you are right, that is what is done in this example. But you should note that this is not the default way petsc's fieldsplit preconditioner will define the schur complement \hat{S}. This particular piece of code lives in the users example. If you really wanted to do this, the same thing could be configured on the command line: -XXX_ksp_type preonly -XXX_pc_type ksp > > > > You have two choices in how to define the preconditioned, \hat{S_p}: > > > > [1] Assemble you own matrix (as is done in ex70) > > > > [2] Let PETSc build one. PETSc does this according to > > > > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > > > Regards, > Fabian > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Fri Feb 6 02:15:46 2015 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 6 Feb 2015 08:15:46 +0000 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm Message-ID: <1423210546767.74937@marin.nl> Hi Fabian, After reading your thread, I think we have the same discretization. Some thoughts: - The stabilization by pressure-weighted interpolation should lead to a diagonally dominant system that can be solved by algebraic multigrid (done for example in CFX), see http://dx.doi.org/10.1080/10407790.2014.894448 http://dx.doi.org/10.1016/j.jcp.2008.08.027 - If you go for fieldsplit with mild tolerances, the preconditioner will become variable and you might need FGMRES or GCR instead. - As you point out, most preconditioners are designed for stable FEM. My experience is that those conclusions sometimes (but not always) hold for FVM. Due to the specific form of the stabilization, other choices might be feasible. More research is needed :-) Chris dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From C.Klaij at marin.nl Fri Feb 6 07:14:14 2015 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 6 Feb 2015 13:14:14 +0000 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm Message-ID: <1423228454190.81360@marin.nl> Hi Dave, My understanding is that stabilization by pressure weighted interpolation in FVM is in essence a fourth order pressure derivative with proper scaling. But in FVM it is usually implemented in defect correction form: only part of the stabilization is in the A11 block of the coupled matrix, the rest is in de rhs vector. This makes the coupled system diagonally dominant and solvable by AMG. Early work was done at Waterloo in the late eighties and early nineties, see http://dx.doi.org/10.2514/6.1996-297 Chris dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From hzhang at mcs.anl.gov Fri Feb 6 10:04:40 2015 From: hzhang at mcs.anl.gov (Hong) Date: Fri, 6 Feb 2015 10:04:40 -0600 Subject: [petsc-users] Large rectangular Dense Transpose multiplication with sparse In-Reply-To: <1800358459.252311.1423199115535.JavaMail.root@mail.gatech.edu> References: <1800358459.252311.1423199115535.JavaMail.root@mail.gatech.edu> Message-ID: Ghosh: > > > Thanks for the suggestion of Elemental. 
However I need to have the > dense matrix have the same parallel communicator as the sparse matrix. > If I use elemental, then the numbering changes, will I be able to multiply > with the sparse matrix which has a different numbering scheme and > communicator? > > In any case I would want the resultant matrices A and M to be elemental > type since I need to solve a dense eigenvalue problem. > If your final goal is solving a dense eigenvalue problem, and the only sparse matrix is a banded matrix H, Elemental is a suitable package. I just added HermitianGenDefiniteEig() into petsc(master)-elemental(v0.84) interface and tested on some large matrices - seems quite efficient. You may take a look at ~petsc(master)/src/mat/examples/tests/ex174.cxx I'm not sure if the latest Elemental supports banded dense matrix - if not, ask Jack to added :-) Hong > > ------------------------------ > *From: *"Hong" > *To: *"Swarnava Ghosh" > *Cc: *"PETSc users list" > *Sent: *Thursday, February 5, 2015 8:22:13 PM > *Subject: *Re: [petsc-users] Large rectangular Dense Transpose > multiplication with sparse > > > Swarnava: > The matrix product A will be a dense matrix. You may consider using > Elemental package for such matrix product. > > Hong > >> >> Dear all, >> >> I am trying to compute matrices A = transpose(R)*H*R and M = >> transpose(R)*R where >> H is a sparse (banded) matrix in MATMPIAIJ format (5 million x 5 >> million total size) >> R is a MPI dense matrix of size 5 million x 2000. >> >> I tried 1) MatPtAP - Failed, realized this only works for pairs of AIJ >> matrices >> 2) First multiplying H*R and storing in 5 million x 2000 MPI >> dense. Then MatTranspose of R and multiplying the transposed R with 5 >> million x 2000 dense. This multiplication fails. >> >> Could someone please suggest a way of doing this. >> >> Regards, >> Swarnava >> -- >> Swarnava Ghosh >> > > > > > -- > Swarnava Ghosh > PhD Candidate, > Structural Engineering, Mechanics and Materials > School of Civil and Environmental Engineering > Georgia Institute of Technology > Atlanta, GA 30332 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Fri Feb 6 11:52:02 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 6 Feb 2015 17:52:02 +0000 Subject: [petsc-users] Multiple in-flight communications with PetscSFs Message-ID: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> Hi all, is it possible to have multiple rounds of communication in flight simultaneously on a single SF? I'd like to be able to do something like: PetscSFBcastBegin(sf, dtype, dataA, dataA_out); PetscSFBcastBegin(sf, dtype2, dataB, dataB_out); ... PetscSFBcastEnd(sf, dtype2, dataB, dataB_out); PetscSFBcastEnd(sf, dtype, dataA, dataA_out); This seems to work unless dtype2 and dtype are identical (and the sf_type is basic), in which case dataA_out ends up with the data I expect in dataB_out. 
Look at the sfbasic implementation, I wonder if it is as simple as checking the link key when looking for an in use pack: diff --git a/src/vec/is/sf/impls/basic/sfbasic.c b/src/vec/is/sf/impls/basic/sfbasic.c index 2ef9849..9020e9c 100644 --- a/src/vec/is/sf/impls/basic/sfbasic.c +++ b/src/vec/is/sf/impls/basic/sfbasic.c @@ -801,6 +801,7 @@ static PetscErrorCode PetscSFBasicGetPackInUse(PetscSF sf,MPI_Datatype unit,cons for (p=&bas->inuse; (link=*p); p=&link->next) { PetscBool match; ierr = MPIPetsc_Type_compare(unit,link->unit,&match);CHKERRQ(ierr); + match = match && (link->key == key); if (match) { switch (cmode) { case PETSC_OWN_POINTER: *p = link->next; break; /* Remove from inuse list */ Or is this not something that is supposed to work at all and I'm just lucky. Cheers, Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jed at jedbrown.org Fri Feb 6 12:09:38 2015 From: jed at jedbrown.org (Jed Brown) Date: Fri, 06 Feb 2015 11:09:38 -0700 Subject: [petsc-users] Multiple in-flight communications with PetscSFs In-Reply-To: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> References: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> Message-ID: <871tm27vz1.fsf@jedbrown.org> Lawrence Mitchell writes: > Hi all, > > is it possible to have multiple rounds of communication in flight simultaneously on a single SF? > > I'd like to be able to do something like: > > PetscSFBcastBegin(sf, dtype, dataA, dataA_out); > PetscSFBcastBegin(sf, dtype2, dataB, dataB_out); > > ... > > PetscSFBcastEnd(sf, dtype2, dataB, dataB_out); > PetscSFBcastEnd(sf, dtype, dataA, dataA_out); > > This seems to work unless dtype2 and dtype are identical (and the sf_type is basic), in which case dataA_out ends up with the data I expect in dataB_out. > > Look at the sfbasic implementation, I wonder if it is as simple as checking the link key when looking for an in use pack: > > diff --git a/src/vec/is/sf/impls/basic/sfbasic.c b/src/vec/is/sf/impls/basic/sfbasic.c > index 2ef9849..9020e9c 100644 > --- a/src/vec/is/sf/impls/basic/sfbasic.c > +++ b/src/vec/is/sf/impls/basic/sfbasic.c > @@ -801,6 +801,7 @@ static PetscErrorCode PetscSFBasicGetPackInUse(PetscSF sf,MPI_Datatype unit,cons > for (p=&bas->inuse; (link=*p); p=&link->next) { > PetscBool match; > ierr = MPIPetsc_Type_compare(unit,link->unit,&match);CHKERRQ(ierr); > + match = match && (link->key == key); > if (match) { > switch (cmode) { > case PETSC_OWN_POINTER: *p = link->next; break; /* Remove from inuse list */ > > Or is this not something that is supposed to work at all and I'm just lucky. It's supposed to work, but I think not tested. Can you test the above (have the conditional check the key instead of changing "match"; there is no implicit conversion from bool to PetscBool) and submit as a patch? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bhatiamanav at gmail.com Fri Feb 6 22:39:31 2015 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Fri, 6 Feb 2015 22:39:31 -0600 Subject: [petsc-users] MUMPS causing segmentation fault Message-ID: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> Hi, I am trying to run MUMPS with the following command line options: -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 4 -info This works fine with ex2.c. My code, which runs fine with -ksp_type preonly -pc_type lu throws a segmentation fault error with the command line options listed above. The output of the code is attached. Is there anything that seems obviously wrong here? Thanks, Manav -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: t.txt URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Feb 6 22:56:56 2015 From: jed at jedbrown.org (Jed Brown) Date: Fri, 06 Feb 2015 21:56:56 -0700 Subject: [petsc-users] MUMPS causing segmentation fault In-Reply-To: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> References: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> Message-ID: <878uga48vb.fsf@jedbrown.org> Manav Bhatia writes: > Hi, > > I am trying to run MUMPS with the following command line options: > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 4 -info > > This works fine with ex2.c. > > My code, which runs fine with -ksp_type preonly -pc_type lu throws > a segmentation fault error with the command line options listed > above. The output of the code is attached. You should run a small problem size in valgrind and/or a debugger. SEGV is usually caused by memory corruption and you have to track that down. It could be a bug in your code or a bug in MUMPS. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From balay at mcs.anl.gov Fri Feb 6 22:57:49 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 6 Feb 2015 22:57:49 -0600 Subject: [petsc-users] MUMPS causing segmentation fault In-Reply-To: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> References: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> Message-ID: >>>>>> Configure options --prefix=/Users/manav/Documents/codes/numerical_lib/petsc/petsc_with_external_libs/petsc-3.5.3/../ --CC=mpicc-openmpi-mp --CXX=mpicxx-openmpi-mp --FC=mpif90-openmpi-mp --with-clanguage=c++ --with-fortran=0 --with-mpi-include=/opt/local/include/openmpi-mp --with-mpi-lib="[/opt/local/lib/openmpi-mp/libmpi_cxx.dylib,/opt/local/lib/openmpi-mp/libmpi.dylib]" --with-mpiexec=/opt/local/bin/mpiexec-openmpi-mp --with-x=0 --with-debugging=0 --with-lapack-lib=/usr/lib/liblapack.dylib --with-blas-lib=/usr/lib/libblas.dylib --download-superlu=yes --download-superlu_dist=yes --download-suitesparse=yes --download-mumps=yes --download-scalapack=yes --with-parmetis-dir=/opt/local/ --with-metis-dir=/opt/local --with-scalapack-dir=/opt/local <<<< I'm not sure which version of metis/parmetis you have - but mumps has issues with the latest version. Suggest using --download-metis --download-parmetis instead. Satish On Fri, 6 Feb 2015, Manav Bhatia wrote: > Hi,? > ? ?I am trying to run MUMPS with the following command line options:? 
> ?-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 4 -info > > ? This works fine with ex2.c. > > ? ?My code, which runs fine with -ksp_type preonly -pc_type lu throws a segmentation fault error with the command line > options listed above. The output of the code is attached.? > > ? ?Is there anything that seems obviously wrong here?? > > Thanks, > Manav > > > From bhatiamanav at gmail.com Fri Feb 6 23:35:41 2015 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Fri, 6 Feb 2015 23:35:41 -0600 Subject: [petsc-users] MUMPS causing segmentation fault In-Reply-To: References: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> Message-ID: <5483DB82-2F84-4F51-86A0-435B18BBB889@gmail.com> I am using metis and permetis from macports: versions 5.1.0_3 and 4.0.3_3, respectively. They both seem to be the latest versions. -Manav > On Feb 6, 2015, at 10:57 PM, Satish Balay wrote: > >>>>>>> > Configure options --prefix=/Users/manav/Documents/codes/numerical_lib/petsc/petsc_with_external_libs/petsc-3.5.3/../ --CC=mpicc-openmpi-mp --CXX=mpicxx-openmpi-mp --FC=mpif90-openmpi-mp --with-clanguage=c++ --with-fortran=0 --with-mpi-include=/opt/local/include/openmpi-mp --with-mpi-lib="[/opt/local/lib/openmpi-mp/libmpi_cxx.dylib,/opt/local/lib/openmpi-mp/libmpi.dylib]" --with-mpiexec=/opt/local/bin/mpiexec-openmpi-mp --with-x=0 --with-debugging=0 --with-lapack-lib=/usr/lib/liblapack.dylib --with-blas-lib=/usr/lib/libblas.dylib --download-superlu=yes --download-superlu_dist=yes --download-suitesparse=yes --download-mumps=yes --download-scalapack=yes --with-parmetis-dir=/opt/local/ --with-metis-dir=/opt/local --with-scalapack-dir=/opt/local > <<<< > > I'm not sure which version of metis/parmetis you have - but mumps has > issues with the latest version. > > Suggest using --download-metis --download-parmetis instead. > > Satish > > > On Fri, 6 Feb 2015, Manav Bhatia wrote: > >> Hi, >> I am trying to run MUMPS with the following command line options: >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 4 -info >> >> This works fine with ex2.c. >> >> My code, which runs fine with -ksp_type preonly -pc_type lu throws a segmentation fault error with the command line >> options listed above. The output of the code is attached. >> >> Is there anything that seems obviously wrong here? >> >> Thanks, >> Manav >> >> >> From lawrence.mitchell at imperial.ac.uk Sat Feb 7 05:13:06 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Sat, 7 Feb 2015 11:13:06 +0000 Subject: [petsc-users] Multiple in-flight communications with PetscSFs In-Reply-To: <871tm27vz1.fsf@jedbrown.org> References: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> <871tm27vz1.fsf@jedbrown.org> Message-ID: On 6 Feb 2015, at 18:09, Jed Brown wrote: > Lawrence Mitchell writes: > ... >> >> Or is this not something that is supposed to work at all and I'm just lucky. > > It's supposed to work, but I think not tested. Can you test the above > (have the conditional check the key instead of changing "match"; there > is no implicit conversion from bool to PetscBool) and submit as a patch? Thanks, patch (plus simple test) here https://bitbucket.org/petsc/petsc/pull-request/255/sf-fix-multiple-in-flight-comm-rounds-for/diff Before this change the communication completion into A get's B's data and vice versa. This doesn't occur for -sf_type window. 
While I'm here, is there any reason that DMs don't call setfromoptions on their SFs: it looks like one can't select an an sf type except programmatically. Maybe the following: diff --git a/src/dm/interface/dm.c b/src/dm/interface/dm.c index 324b101..9e9b130 100644 --- a/src/dm/interface/dm.c +++ b/src/dm/interface/dm.c @@ -49,6 +49,8 @@ PetscErrorCode DMCreate(MPI_Comm comm,DM *dm) v->coloringtype = IS_COLORING_GLOBAL; ierr = PetscSFCreate(comm, &v->sf);CHKERRQ(ierr); ierr = PetscSFCreate(comm, &v->defaultSF);CHKERRQ(ierr); + ierr = PetscSFSetFromOptions(v->sf); CHKERRQ(ierr); + ierr = PetscSFSetFromOptions(v->defaultSF); CHKERRQ(ierr); v->defaultSection = NULL; v->defaultGlobalSection = NULL; v->defaultConstraintSection = NULL; Cheers, Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jed at jedbrown.org Sat Feb 7 07:57:43 2015 From: jed at jedbrown.org (Jed Brown) Date: Sat, 07 Feb 2015 06:57:43 -0700 Subject: [petsc-users] Multiple in-flight communications with PetscSFs In-Reply-To: References: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> <871tm27vz1.fsf@jedbrown.org> Message-ID: <87r3u13ju0.fsf@jedbrown.org> Lawrence Mitchell writes: > Thanks, patch (plus simple test) here https://bitbucket.org/petsc/petsc/pull-request/255/sf-fix-multiple-in-flight-comm-rounds-for/diff Thanks; I'll test here. > Before this change the communication completion into A get's B's data and vice versa. This doesn't occur for -sf_type window. > > While I'm here, is there any reason that DMs don't call setfromoptions on their SFs: it looks like one can't select an an sf type except programmatically. > > Maybe the following: > > diff --git a/src/dm/interface/dm.c b/src/dm/interface/dm.c > index 324b101..9e9b130 100644 > --- a/src/dm/interface/dm.c > +++ b/src/dm/interface/dm.c > @@ -49,6 +49,8 @@ PetscErrorCode DMCreate(MPI_Comm comm,DM *dm) > v->coloringtype = IS_COLORING_GLOBAL; > ierr = PetscSFCreate(comm, &v->sf);CHKERRQ(ierr); > ierr = PetscSFCreate(comm, &v->defaultSF);CHKERRQ(ierr); > + ierr = PetscSFSetFromOptions(v->sf); CHKERRQ(ierr); > + ierr = PetscSFSetFromOptions(v->defaultSF); CHKERRQ(ierr); Please use PetscObjectSetOptionsPrefix on the SFs and call PetscSFSetFromOptions in DMSetFromOptions. > v->defaultSection = NULL; > v->defaultGlobalSection = NULL; > v->defaultConstraintSection = NULL; -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From qince168 at gmail.com Sat Feb 7 08:06:54 2015 From: qince168 at gmail.com (Ce Qin) Date: Sat, 7 Feb 2015 22:06:54 +0800 Subject: [petsc-users] KSP with Nested Matrix and Vector Message-ID: Dear all, I'm trying to use the KSP solver with nested matrix and vector. If I do not call KSPSetup it goes just fine. When I call the KSPSetUp I am getting the following error [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Nest vector arguments 1 and 2 have different numbers of blocks. [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown [0]PETSC ERROR: ./test on a linux-gnu-cxx-real-opt named eden by adam Sat Feb 7 21:48:02 2015 [0]PETSC ERROR: Configure options PETSC_DIR=/home/adam/libs/petsc PETSC_ARCH=linux-gnu-cxx-real-opt --with-x=0 --with-mpi=1 --with-debu gging=0 --with-clanguage=cxx --with-c-support=0 --with-scalar-type=real --download-sowing=linux/packages/sowing-1.1.16g.tar.gz --download-c2html=linux/packages/c2html.tar.gz --with-blas-lapack-lib="-L/home/adam/libs/lapack/lib -lopenblas" --download-scalapack=linux/packages/scalapack-2.0.2.tgz --download-metis=linux/packages/metis-5.0.2-p3.tar.gz --download-ptscotch=linux/packages/scotch_6.0.0_esmumps.tar.gz --download-parmetis=linux/packages/parmetis-4.0.2-p5.tar.gz --download-mumps=linux/packages/MUMPS_4.10.0-p3.tar.gz --download-pastix=linux/packages/pastix_5.2.2.20.tar.bz2 --download-superlu_dist=linux/packages/superlu_dist_3.3.tar.gz --download-hypre=linux/packages/hypre-2.10.0b.tar.gz [0]PETSC ERROR: #1 VecCopy_Nest() line 76 in /home/adam/libs/petsc/src/vec/vec/impls/nest/vecnest.c [0]PETSC ERROR: #2 VecCopy() line 1675 in /home/adam/libs/petsc/src/vec/vec/interface/vector.c [0]PETSC ERROR: #3 KSPInitialResidual() line 59 in /home/adam/libs/petsc/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #4 KSPSolve_GMRES() line 234 in /home/adam/libs/petsc/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #5 KSPSolve() line 547 in /home/adam/libs/petsc/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #6 main() line 36 in /home/adam/develop/test/src/test.cc [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -ksp_monitor [0]PETSC ERROR: -ksp_view [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- I have attached a simple example to show this problem. Best regards, Ce Qin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test.cc Type: text/x-c++src Size: 1440 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sat Feb 7 10:52:55 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 7 Feb 2015 10:52:55 -0600 Subject: [petsc-users] MUMPS causing segmentation fault In-Reply-To: <5483DB82-2F84-4F51-86A0-435B18BBB889@gmail.com> References: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> <5483DB82-2F84-4F51-86A0-435B18BBB889@gmail.com> Message-ID: <93AA1499-8020-4D7F-A71C-C6BA817AB12A@mcs.anl.gov> > On Feb 6, 2015, at 11:35 PM, Manav Bhatia wrote: > > I am using metis and permetis from macports: versions 5.1.0_3 and 4.0.3_3, respectively. They both seem to be the latest versions. Do as Satish said "use --download-metis --download-parmetis instead." Use this for all packages. Each version of external packages generally work with a very limited (often one) version of other external packages, hence using --download-xxx for everything is the way to go since we have tested them in this combination. "Hoping" that different versions randomly put together is just a waste of time. 
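Concretely, starting from the configure line quoted above, that means dropping the MacPorts --with-metis-dir / --with-parmetis-dir / --with-scalapack-dir entries and letting configure fetch versions that are tested together, roughly (all other options unchanged):

  ./configure [other options as before] \
    --download-metis --download-parmetis --download-scalapack \
    --download-mumps --download-superlu --download-superlu_dist --download-suitesparse

Then rebuild PETSc and relink the application against the new install.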
Barry > > -Manav > > >> On Feb 6, 2015, at 10:57 PM, Satish Balay wrote: >> >>>>>>>> >> Configure options --prefix=/Users/manav/Documents/codes/numerical_lib/petsc/petsc_with_external_libs/petsc-3.5.3/../ --CC=mpicc-openmpi-mp --CXX=mpicxx-openmpi-mp --FC=mpif90-openmpi-mp --with-clanguage=c++ --with-fortran=0 --with-mpi-include=/opt/local/include/openmpi-mp --with-mpi-lib="[/opt/local/lib/openmpi-mp/libmpi_cxx.dylib,/opt/local/lib/openmpi-mp/libmpi.dylib]" --with-mpiexec=/opt/local/bin/mpiexec-openmpi-mp --with-x=0 --with-debugging=0 --with-lapack-lib=/usr/lib/liblapack.dylib --with-blas-lib=/usr/lib/libblas.dylib --download-superlu=yes --download-superlu_dist=yes --download-suitesparse=yes --download-mumps=yes --download-scalapack=yes --with-parmetis-dir=/opt/local/ --with-metis-dir=/opt/local --with-scalapack-dir=/opt/local >> <<<< >> >> I'm not sure which version of metis/parmetis you have - but mumps has >> issues with the latest version. >> >> Suggest using --download-metis --download-parmetis instead. >> >> Satish >> >> >> On Fri, 6 Feb 2015, Manav Bhatia wrote: >> >>> Hi, >>> I am trying to run MUMPS with the following command line options: >>> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 4 -info >>> >>> This works fine with ex2.c. >>> >>> My code, which runs fine with -ksp_type preonly -pc_type lu throws a segmentation fault error with the command line >>> options listed above. The output of the code is attached. >>> >>> Is there anything that seems obviously wrong here? >>> >>> Thanks, >>> Manav >>> >>> >>> > From bsmith at mcs.anl.gov Sat Feb 7 11:04:11 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 7 Feb 2015 11:04:11 -0600 Subject: [petsc-users] Multiple in-flight communications with PetscSFs In-Reply-To: <87r3u13ju0.fsf@jedbrown.org> References: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> <871tm27vz1.fsf@jedbrown.org> <87r3u13ju0.fsf@jedbrown.org> Message-ID: Lawrence, In general we do not want XXXSetFromOptions() to be randomly called deep within constructors or other places. Ideally we want them called either when I user calls them directly or from another YYYSetFromOptions() that the user called. We do violate this rule occasionally but we don't want new XXXSetFromOptions() put into the code randomly. Barry > On Feb 7, 2015, at 7:57 AM, Jed Brown wrote: > > Lawrence Mitchell writes: >> >> + ierr = PetscSFSetFromOptions(v->sf); CHKERRQ(ierr); >> + ierr = PetscSFSetFromOptions(v->defaultSF); CHKERRQ(ierr); > > Please use PetscObjectSetOptionsPrefix on the SFs and call > PetscSFSetFromOptions in DMSetFromOptions. > >> v->defaultSection = NULL; >> v->defaultGlobalSection = NULL; >> v->defaultConstraintSection = NULL; > From jed at jedbrown.org Sat Feb 7 12:43:14 2015 From: jed at jedbrown.org (Jed Brown) Date: Sat, 07 Feb 2015 11:43:14 -0700 Subject: [petsc-users] MUMPS causing segmentation fault In-Reply-To: <93AA1499-8020-4D7F-A71C-C6BA817AB12A@mcs.anl.gov> References: <6A1C6E01-5323-4653-9948-13E6EAA0A507@gmail.com> <5483DB82-2F84-4F51-86A0-435B18BBB889@gmail.com> <93AA1499-8020-4D7F-A71C-C6BA817AB12A@mcs.anl.gov> Message-ID: <87fvah36m5.fsf@jedbrown.org> Barry Smith writes: > Each version of external packages generally work with a very > limited (often one) version of other external packages, hence using > --download-xxx for everything is the way to go since we have tested > them in this combination. "Hoping" that different versions randomly > put together is just a waste of time. 
In this particular case, upstream makes releases that don't fix known bugs for which test cases and patches have been submitted. Consequently, some projects find it necessary to maintain patched versions, and those patches need to be rebased onto each upstream release. Meanwhile, the features going into upstream are rarely interesting, so it may not be a first priority. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From gideon.simpson at gmail.com Sun Feb 8 11:43:45 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sun, 8 Feb 2015 12:43:45 -0500 Subject: [petsc-users] warning query Message-ID: <8E094605-BF03-4B6F-BEB0-2F27D533517C@gmail.com> What should I make of the following warning, where I have used an MPI_Allreduce on a quantity computed on each processor. include -I/opt/local/include -I/opt/local/include/mpich-gcc48 -I/opt/local/include `pwd`/petsc_trapz1.c In file included from /opt/local/lib/petsc/include/petscsys.h:1794:0, from /opt/local/lib/petsc/include/petscis.h:7, from /opt/local/lib/petsc/include/petscvec.h:9, from /Users/gideon/code/trapz/petsc_trapz1.c:3: /Users/gideon/code/trapz/petsc_trapz1.c: In function 'main': /opt/local/lib/petsc/include/petsclog.h:370:57: warning: value computed is not used [-Wunused-value] ((petsc_allreduce_ct += PetscMPIParallelComm(comm),0) || MPI_Allreduce(sendbuf,recvbuf,count,datatype,op,comm)) ^ /Users/gideon/code/trapz/petsc_trapz1.c:110:3: note: in expansion of macro 'MPI_Allreduce' MPI_Allreduce(&local_trapz, &global_trapz, 1, MPI_DOUBLE, MPI_SUM, PETSC_COMM_WORLD); -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Feb 8 11:51:22 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 8 Feb 2015 11:51:22 -0600 Subject: [petsc-users] warning query In-Reply-To: <8E094605-BF03-4B6F-BEB0-2F27D533517C@gmail.com> References: <8E094605-BF03-4B6F-BEB0-2F27D533517C@gmail.com> Message-ID: On Sun, Feb 8, 2015 at 11:43 AM, Gideon Simpson wrote: > What should I make of the following warning, where I have used an > MPI_Allreduce on a quantity computed on each processor. > > include -I/opt/local/include -I/opt/local/include/mpich-gcc48 > -I/opt/local/include `pwd`/petsc_trapz1.c > In file included from /opt/local/lib/petsc/include/petscsys.h:1794:0, > from /opt/local/lib/petsc/include/petscis.h:7, > from /opt/local/lib/petsc/include/petscvec.h:9, > from /Users/gideon/code/trapz/petsc_trapz1.c:3: > /Users/gideon/code/trapz/petsc_trapz1.c: In function 'main': > /opt/local/lib/petsc/include/petsclog.h:370:57: warning: value computed is > not used [-Wunused-value] > ((petsc_allreduce_ct += PetscMPIParallelComm(comm),0) || > MPI_Allreduce(sendbuf,recvbuf,count,datatype,op,comm)) > ^ > /Users/gideon/code/trapz/petsc_trapz1.c:110:3: note: in expansion of macro > 'MPI_Allreduce' > MPI_Allreduce(&local_trapz, &global_trapz, 1, MPI_DOUBLE, MPI_SUM, > PETSC_COMM_WORLD); > It looks like you are not checking the return code: ierr = MPI_Allreduce(&local_trapz, &global_trapz, 1, MPI_DOUBLE, MPI_SUM, PETSC_COMM_WORLD);CHKERRQ(ierr); Matt > > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
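(For the archives: the warning itself comes from PETSc's logging layer. With logging enabled, petsclog.h redefines MPI_Allreduce as an expression that first bumps the petsc_allreduce_ct counter and then calls the real MPI_Allreduce, so gcc's -Wunused-value fires whenever that expression's result is discarded. Assigning the result and checking it, e.g.

  PetscErrorCode ierr;
  double         local_trapz = 0.0, global_trapz = 0.0;
  /* ... compute local_trapz on this process ... */
  ierr = MPI_Allreduce(&local_trapz,&global_trapz,1,MPI_DOUBLE,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);

both uses the value and catches MPI failures.)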
URL: From gideon.simpson at gmail.com Sun Feb 8 11:57:57 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sun, 8 Feb 2015 12:57:57 -0500 Subject: [petsc-users] warning query In-Reply-To: References: <8E094605-BF03-4B6F-BEB0-2F27D533517C@gmail.com> Message-ID: <83C24D6A-E697-4680-B890-A6241A20A165@gmail.com> Yup, that cleared it. Thanks. -gideon > On Feb 8, 2015, at 12:51 PM, Matthew Knepley wrote: > > On Sun, Feb 8, 2015 at 11:43 AM, Gideon Simpson > wrote: > What should I make of the following warning, where I have used an MPI_Allreduce on a quantity computed on each processor. > > include -I/opt/local/include -I/opt/local/include/mpich-gcc48 -I/opt/local/include `pwd`/petsc_trapz1.c > In file included from /opt/local/lib/petsc/include/petscsys.h:1794:0, > from /opt/local/lib/petsc/include/petscis.h:7, > from /opt/local/lib/petsc/include/petscvec.h:9, > from /Users/gideon/code/trapz/petsc_trapz1.c:3: > /Users/gideon/code/trapz/petsc_trapz1.c: In function 'main': > /opt/local/lib/petsc/include/petsclog.h:370:57: warning: value computed is not used [-Wunused-value] > ((petsc_allreduce_ct += PetscMPIParallelComm(comm),0) || MPI_Allreduce(sendbuf,recvbuf,count,datatype,op,comm)) > ^ > /Users/gideon/code/trapz/petsc_trapz1.c:110:3: note: in expansion of macro 'MPI_Allreduce' > MPI_Allreduce(&local_trapz, &global_trapz, 1, MPI_DOUBLE, MPI_SUM, PETSC_COMM_WORLD); > > It looks like you are not checking the return code: > > ierr = MPI_Allreduce(&local_trapz, &global_trapz, 1, MPI_DOUBLE, MPI_SUM, PETSC_COMM_WORLD);CHKERRQ(ierr); > > Matt > > > -gideon > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Sun Feb 8 12:15:16 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sun, 8 Feb 2015 13:15:16 -0500 Subject: [petsc-users] DMDAVecGetArray vs VecGetArray Message-ID: If i want to get the array associated with a vector associated with DA, is there anything wrong with using VecGetArray instead of DMDAVecGetArray if I want local indexing (i.e., 0,?N-1), as opposed tot the global indexing? Is there anything ?unsafe? about it? -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Feb 8 12:33:44 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 8 Feb 2015 12:33:44 -0600 Subject: [petsc-users] DMDAVecGetArray vs VecGetArray In-Reply-To: References: Message-ID: On Sun, Feb 8, 2015 at 12:15 PM, Gideon Simpson wrote: > If i want to get the array associated with a vector associated with DA, is > there anything wrong with using VecGetArray instead of DMDAVecGetArray if I > want local indexing (i.e., 0,?N-1), as opposed tot the global indexing? Is > there anything ?unsafe? about it? > Nope. Matt > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronalcelayavzla at gmail.com Sun Feb 8 17:41:50 2015 From: ronalcelayavzla at gmail.com (Ronal Celaya) Date: Sun, 8 Feb 2015 19:11:50 -0430 Subject: [petsc-users] MatMult inside a for loop Message-ID: Hello If I have a MatMult operation inside a for loop (e. g. 
CG algorithm), and the matrix A is MPIAIJ, vector x is gathered to local process in every loop? I'm sorry for my English. Regards, -- Ronal Celaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 8 17:47:46 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 8 Feb 2015 17:47:46 -0600 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: References: Message-ID: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> > On Feb 8, 2015, at 5:41 PM, Ronal Celaya wrote: > > Hello > If I have a MatMult operation inside a for loop (e. g. CG algorithm), and the matrix A is MPIAIJ, vector x is gathered to local process in every loop? Yes, internal to MatMult() it calls MatMult_MPIAIJ() which is in src/mat/impls/aij/mpi/mpiaij,c which has the following code: PetscErrorCode MatMult_MPIAIJ(Mat A,Vec xx,Vec yy) { Mat_MPIAIJ *a = (Mat_MPIAIJ*)A->data; PetscErrorCode ierr; PetscInt nt; PetscFunctionBegin; ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr); if (nt != A->cmap->n) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->cmap->n,nt); ierr = VecScatterBegin(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); ierr = (*a->A->ops->mult)(a->A,xx,yy);CHKERRQ(ierr); ierr = VecScatterEnd(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); ierr = (*a->B->ops->multadd)(a->B,a->lvec,yy,yy);CHKERRQ(ierr); PetscFunctionReturn(0); The needed values of x are communicated in the VecScatterBegin() to VecScatterEnd(). Note only exactly those values needed by each process are communicated in the scatter so not all values are communicated to all processes. Since the matrix is very sparse (normally) only a small percentage of the values need to be communicated. Barry > > I'm sorry for my English. > > Regards, > > -- > Ronal Celaya From ronalcelayavzla at gmail.com Sun Feb 8 18:14:03 2015 From: ronalcelayavzla at gmail.com (Ronal Celaya) Date: Sun, 8 Feb 2015 19:44:03 -0430 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> Message-ID: Thank you Barry. Is there a way to reuse the vector x? I don't want to gather the vector in each iteration, I'd rather replicate the vector x in each process. Thanks in advance. On Sun, Feb 8, 2015 at 7:17 PM, Barry Smith wrote: > > > On Feb 8, 2015, at 5:41 PM, Ronal Celaya > wrote: > > > > Hello > > If I have a MatMult operation inside a for loop (e. g. CG algorithm), > and the matrix A is MPIAIJ, vector x is gathered to local process in every > loop? 
> > Yes, internal to MatMult() it calls MatMult_MPIAIJ() which is in > src/mat/impls/aij/mpi/mpiaij,c which has the following code: > > PetscErrorCode MatMult_MPIAIJ(Mat A,Vec xx,Vec yy) > { > Mat_MPIAIJ *a = (Mat_MPIAIJ*)A->data; > PetscErrorCode ierr; > PetscInt nt; > > PetscFunctionBegin; > ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr); > if (nt != A->cmap->n) > SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A > (%D) and xx (%D)",A->cmap->n,nt); > ierr = > VecScatterBegin(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > ierr = (*a->A->ops->mult)(a->A,xx,yy);CHKERRQ(ierr); > ierr = > VecScatterEnd(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > ierr = (*a->B->ops->multadd)(a->B,a->lvec,yy,yy);CHKERRQ(ierr); > PetscFunctionReturn(0); > > The needed values of x are communicated in the VecScatterBegin() to > VecScatterEnd(). Note only exactly those values needed by each process are > communicated in the scatter so not all values are communicated to all > processes. Since the matrix is very sparse (normally) only a small > percentage of the values need to be communicated. > > Barry > > > > > I'm sorry for my English. > > > > Regards, > > > > -- > > Ronal Celaya > > -- Ronal Celaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 8 18:22:14 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 8 Feb 2015 18:22:14 -0600 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> Message-ID: > On Feb 8, 2015, at 6:14 PM, Ronal Celaya wrote: > > Thank you Barry. > Is there a way to reuse the vector x? I don't want to gather the vector in each iteration, I'd rather replicate the vector x in each process. I don't understand. With each new matrix vector product there are new values in x (which are updated from other parts of the CG algorithm), these new values need to be communicated in the MatMult() to where they are needed; you can't just reuse the old values in x. Barry > > Thanks in advance. > > On Sun, Feb 8, 2015 at 7:17 PM, Barry Smith wrote: > > > On Feb 8, 2015, at 5:41 PM, Ronal Celaya wrote: > > > > Hello > > If I have a MatMult operation inside a for loop (e. g. CG algorithm), and the matrix A is MPIAIJ, vector x is gathered to local process in every loop? > > Yes, internal to MatMult() it calls MatMult_MPIAIJ() which is in src/mat/impls/aij/mpi/mpiaij,c which has the following code: > > PetscErrorCode MatMult_MPIAIJ(Mat A,Vec xx,Vec yy) > { > Mat_MPIAIJ *a = (Mat_MPIAIJ*)A->data; > PetscErrorCode ierr; > PetscInt nt; > > PetscFunctionBegin; > ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr); > if (nt != A->cmap->n) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->cmap->n,nt); > ierr = VecScatterBegin(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > ierr = (*a->A->ops->mult)(a->A,xx,yy);CHKERRQ(ierr); > ierr = VecScatterEnd(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > ierr = (*a->B->ops->multadd)(a->B,a->lvec,yy,yy);CHKERRQ(ierr); > PetscFunctionReturn(0); > > The needed values of x are communicated in the VecScatterBegin() to VecScatterEnd(). Note only exactly those values needed by each process are communicated in the scatter so not all values are communicated to all processes. Since the matrix is very sparse (normally) only a small percentage of the values need to be communicated. 
> > Barry > > > > > I'm sorry for my English. > > > > Regards, > > > > -- > > Ronal Celaya > > > > > -- > Ronal Celaya From ronalcelayavzla at gmail.com Sun Feb 8 19:20:51 2015 From: ronalcelayavzla at gmail.com (Ronal Celaya) Date: Sun, 8 Feb 2015 20:50:51 -0430 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> Message-ID: I know that. I want to have all the vector x replicated in all processes and update it in each iteration, so I don't need to communicate the vector x each time MatMult() is called. I'm not sure I'm making myself clear, sorry On Sun, Feb 8, 2015 at 7:52 PM, Barry Smith wrote: > > > On Feb 8, 2015, at 6:14 PM, Ronal Celaya > wrote: > > > > Thank you Barry. > > Is there a way to reuse the vector x? I don't want to gather the vector > in each iteration, I'd rather replicate the vector x in each process. > > I don't understand. With each new matrix vector product there are new > values in x (which are updated from other parts of the CG algorithm), these > new values need to be communicated in the MatMult() to where they are > needed; you can't just reuse the old values in x. > > Barry > > > > > Thanks in advance. > > > > On Sun, Feb 8, 2015 at 7:17 PM, Barry Smith wrote: > > > > > On Feb 8, 2015, at 5:41 PM, Ronal Celaya > wrote: > > > > > > Hello > > > If I have a MatMult operation inside a for loop (e. g. CG algorithm), > and the matrix A is MPIAIJ, vector x is gathered to local process in every > loop? > > > > Yes, internal to MatMult() it calls MatMult_MPIAIJ() which is in > src/mat/impls/aij/mpi/mpiaij,c which has the following code: > > > > PetscErrorCode MatMult_MPIAIJ(Mat A,Vec xx,Vec yy) > > { > > Mat_MPIAIJ *a = (Mat_MPIAIJ*)A->data; > > PetscErrorCode ierr; > > PetscInt nt; > > > > PetscFunctionBegin; > > ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr); > > if (nt != A->cmap->n) > SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A > (%D) and xx (%D)",A->cmap->n,nt); > > ierr = > VecScatterBegin(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > > ierr = (*a->A->ops->mult)(a->A,xx,yy);CHKERRQ(ierr); > > ierr = > VecScatterEnd(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > > ierr = (*a->B->ops->multadd)(a->B,a->lvec,yy,yy);CHKERRQ(ierr); > > PetscFunctionReturn(0); > > > > The needed values of x are communicated in the VecScatterBegin() to > VecScatterEnd(). Note only exactly those values needed by each process are > communicated in the scatter so not all values are communicated to all > processes. Since the matrix is very sparse (normally) only a small > percentage of the values need to be communicated. > > > > Barry > > > > > > > > I'm sorry for my English. > > > > > > Regards, > > > > > > -- > > > Ronal Celaya > > > > > > > > > > -- > > Ronal Celaya > > -- Ronal Celaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Feb 8 19:28:38 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 8 Feb 2015 19:28:38 -0600 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> Message-ID: On Sun, Feb 8, 2015 at 7:20 PM, Ronal Celaya wrote: > I know that. I want to have all the vector x replicated in all processes > and update it in each iteration, so I don't need to communicate the vector > x each time MatMult() is called. > I'm not sure I'm making myself clear, sorry > 1) This is not a scalable strategy. 
2) If you know A and all the updates to x locally, why don't you just compute y directly? Thanks, Matt > On Sun, Feb 8, 2015 at 7:52 PM, Barry Smith wrote: > >> >> > On Feb 8, 2015, at 6:14 PM, Ronal Celaya >> wrote: >> > >> > Thank you Barry. >> > Is there a way to reuse the vector x? I don't want to gather the vector >> in each iteration, I'd rather replicate the vector x in each process. >> >> I don't understand. With each new matrix vector product there are new >> values in x (which are updated from other parts of the CG algorithm), these >> new values need to be communicated in the MatMult() to where they are >> needed; you can't just reuse the old values in x. >> >> Barry >> >> > >> > Thanks in advance. >> > >> > On Sun, Feb 8, 2015 at 7:17 PM, Barry Smith wrote: >> > >> > > On Feb 8, 2015, at 5:41 PM, Ronal Celaya >> wrote: >> > > >> > > Hello >> > > If I have a MatMult operation inside a for loop (e. g. CG algorithm), >> and the matrix A is MPIAIJ, vector x is gathered to local process in every >> loop? >> > >> > Yes, internal to MatMult() it calls MatMult_MPIAIJ() which is in >> src/mat/impls/aij/mpi/mpiaij,c which has the following code: >> > >> > PetscErrorCode MatMult_MPIAIJ(Mat A,Vec xx,Vec yy) >> > { >> > Mat_MPIAIJ *a = (Mat_MPIAIJ*)A->data; >> > PetscErrorCode ierr; >> > PetscInt nt; >> > >> > PetscFunctionBegin; >> > ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr); >> > if (nt != A->cmap->n) >> SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A >> (%D) and xx (%D)",A->cmap->n,nt); >> > ierr = >> VecScatterBegin(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); >> > ierr = (*a->A->ops->mult)(a->A,xx,yy);CHKERRQ(ierr); >> > ierr = >> VecScatterEnd(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); >> > ierr = (*a->B->ops->multadd)(a->B,a->lvec,yy,yy);CHKERRQ(ierr); >> > PetscFunctionReturn(0); >> > >> > The needed values of x are communicated in the VecScatterBegin() to >> VecScatterEnd(). Note only exactly those values needed by each process are >> communicated in the scatter so not all values are communicated to all >> processes. Since the matrix is very sparse (normally) only a small >> percentage of the values need to be communicated. >> > >> > Barry >> > >> > > >> > > I'm sorry for my English. >> > > >> > > Regards, >> > > >> > > -- >> > > Ronal Celaya >> > >> > >> > >> > >> > -- >> > Ronal Celaya >> >> > > > -- > Ronal Celaya > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 8 19:37:28 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 8 Feb 2015 19:37:28 -0600 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> Message-ID: <4D3F5C5C-850F-4575-AFC6-CC520A8A1297@mcs.anl.gov> > On Feb 8, 2015, at 7:20 PM, Ronal Celaya wrote: > > I know that. I want to have all the vector x replicated in all processes and update it in each iteration You will have to do communication when you "update it each iteration", no less communication then is done for the MatMult() so you will save no communication. Believe me, people have been doing parallel Krylov methods for 25 years, there is no savings on communication that can be had; it can only be shifted around to different points in the algorithm. 
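One way to see this concretely: even if every process kept a full copy of x, that copy still has to be refreshed after the distributed update in each iteration, and refreshing it means gathering the entire vector to every process, which for a sparse matrix is far more data movement than the few ghost values MatMult actually scatters. A rough sketch of what the replicated-x version would look like (x and ierr as in a normal PETSc CG driver; this is only to illustrate the cost, not a recommendation):

  Vec        xrep;   /* sequential, full-length copy of x on every process */
  VecScatter toall;

  ierr = VecScatterCreateToAll(x,&toall,&xrep);CHKERRQ(ierr);
  /* every iteration, after x has been updated: */
  ierr = VecScatterBegin(toall,x,xrep,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(toall,x,xrep,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  /* xrep now holds all of x everywhere, at the price of moving the whole
     vector each iteration instead of only the needed ghost entries */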
Barry > , so I don't need to communicate the vector x each time MatMult() is called. > I'm not sure I'm making myself clear, sorry > > On Sun, Feb 8, 2015 at 7:52 PM, Barry Smith wrote: > > > On Feb 8, 2015, at 6:14 PM, Ronal Celaya wrote: > > > > Thank you Barry. > > Is there a way to reuse the vector x? I don't want to gather the vector in each iteration, I'd rather replicate the vector x in each process. > > I don't understand. With each new matrix vector product there are new values in x (which are updated from other parts of the CG algorithm), these new values need to be communicated in the MatMult() to where they are needed; you can't just reuse the old values in x. > > Barry > > > > > Thanks in advance. > > > > On Sun, Feb 8, 2015 at 7:17 PM, Barry Smith wrote: > > > > > On Feb 8, 2015, at 5:41 PM, Ronal Celaya wrote: > > > > > > Hello > > > If I have a MatMult operation inside a for loop (e. g. CG algorithm), and the matrix A is MPIAIJ, vector x is gathered to local process in every loop? > > > > Yes, internal to MatMult() it calls MatMult_MPIAIJ() which is in src/mat/impls/aij/mpi/mpiaij,c which has the following code: > > > > PetscErrorCode MatMult_MPIAIJ(Mat A,Vec xx,Vec yy) > > { > > Mat_MPIAIJ *a = (Mat_MPIAIJ*)A->data; > > PetscErrorCode ierr; > > PetscInt nt; > > > > PetscFunctionBegin; > > ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr); > > if (nt != A->cmap->n) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->cmap->n,nt); > > ierr = VecScatterBegin(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > > ierr = (*a->A->ops->mult)(a->A,xx,yy);CHKERRQ(ierr); > > ierr = VecScatterEnd(a->Mvctx,xx,a->lvec,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > > ierr = (*a->B->ops->multadd)(a->B,a->lvec,yy,yy);CHKERRQ(ierr); > > PetscFunctionReturn(0); > > > > The needed values of x are communicated in the VecScatterBegin() to VecScatterEnd(). Note only exactly those values needed by each process are communicated in the scatter so not all values are communicated to all processes. Since the matrix is very sparse (normally) only a small percentage of the values need to be communicated. > > > > Barry > > > > > > > > I'm sorry for my English. > > > > > > Regards, > > > > > > -- > > > Ronal Celaya > > > > > > > > > > -- > > Ronal Celaya > > > > > -- > Ronal Celaya From jed at jedbrown.org Sun Feb 8 19:39:05 2015 From: jed at jedbrown.org (Jed Brown) Date: Sun, 08 Feb 2015 18:39:05 -0700 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> Message-ID: <87y4o7279i.fsf@jedbrown.org> Matthew Knepley writes: > On Sun, Feb 8, 2015 at 7:20 PM, Ronal Celaya > wrote: > >> I know that. I want to have all the vector x replicated in all processes >> and update it in each iteration, so I don't need to communicate the vector >> x each time MatMult() is called. >> I'm not sure I'm making myself clear, sorry >> > > 1) This is not a scalable strategy. Also note that you have to communicate many layers of overlap of A to run CG without neighbor communication. The overhead is especially large if you have small subdomains (the case where communication latency is more important than bandwidth). > 2) If you know A and all the updates to x locally, why don't you just > compute y directly? -------------- next part -------------- A non-text attachment was scrubbed... 
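A practical way to put numbers on this is PETSc's logging: running the solve with -log_summary prints, for each logged event such as MatMult and VecScatterBegin/End, the number of messages and the average message length, so the communication done inside MatMult can be read straight off the table, e.g.

  mpiexec -n 4 ./ex2 -ksp_type cg -log_summary

(./ex2 just stands in for whatever CG test program is being compared; the options work the same with any PETSc executable.)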
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From jed at jedbrown.org Sun Feb 8 22:19:32 2015 From: jed at jedbrown.org (Jed Brown) Date: Sun, 08 Feb 2015 21:19:32 -0700 Subject: [petsc-users] passing information to TSIFunction In-Reply-To: References: Message-ID: <87k2zrzpgr.fsf@jedbrown.org> Sanjay Kharche writes: > Hi > > I need to pass a 2D array of ints to user defined functions, especially RHS. Ideally, this 2D array is dynamically created at run time to make my application general. Yesterday's discussion is below. > > I did this in the application context: > > /* User-defined data structures and routines */ > /* AppCtx: used by FormIFunction() */ > typedef struct { > DM da; // DM instance in which u, r are placed. > PetscInt geometry[usr_MY][usr_MX]; // This is static, so the whole thing is visible to everybody who gets the context. This is my working solution as of now. > Vec geom; // I duplicate u for this in calling function. VecDuplicate( u , &user.geom). I cannot pass this to RHS function, I cannot access values in geom in the called function. > PetscInt **geomet; // I calloc this in the calling function. I cannot access the data in RHS function > int **anotherGeom; // just int. > } AppCtx; > > This static geometry 2D array can be seen in all functions that receive the application context. This is a working solution to my problem, although not ideal. The ideal solution would be if I can pass and receive something like geom or geomet which are dynamically created after the 2D DA is created in the calling function. Why not create a DMDA (serial and redundant if you prefer) for the geometry, store the geometry in a Vec, and use DMDAVecGetArray() to access it? I like to compose the geometry with the state DM and use DMCoarsenHooks if using rediscretized geometric multigrid (thus keeping the application context resolution-independent), but if you're putting resolution-dependent stuff in the context, you can just add the dmgeom and vector. > I am working through the manual and the examples, but some indication of how to solve this specific issue will be great. > > cheers > Sanjay > > > > > ________________________________ > From: Matthew Knepley [knepley at gmail.com] > Sent: 04 February 2015 21:03 > To: Sanjay Kharche > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] passing information to TSIFunction > > On Wed, Feb 4, 2015 at 2:59 PM, Sanjay Kharche > wrote: > > Hi > > I started with the ex15.c example from ts. Now I would like to pass a 2D int array I call data2d to the FormIFunction which constructs the udot - RHS. FormIFunction is used in Petsc's TSSetIFunction. My data2d is determined at run time in the initialisation on each rank. data2d is the same size as the solution array and the residual array. > > I tried adding a Vec to FormIFunction, but Petsc's TSIFunction ( TSSetIFunction(ts,r,FormIFunction,&user); ) expects a set number & type of arguments to FormIFunction. I tried passing data2d as a regular int pointer as well as a Vec. As a Vec, I tried to access the data2d in a similar way as the solution vector, which caused the serial and parallel execution to produce errors. > > 1) This is auxiliary data which must come in through the context argument. Many many example use a context > > 2) You should read the chapter on DAs in the manual. It describes the data layout. In order for your code to > work in parallel I suggest you use a Vec and cast to int when you need the value. 
> > Thanks, > > Matt > > Any ideas on how I can get an array of ints to FormIFunction? > > thanks > Sanjay > > > The function declaration: > // petsc functions. > extern PetscInt FormIFunction(TS,PetscReal,Vec,Vec,Vec,void*, Vec); // last Vec is supposed to be my data2D, which is a duplicate of the u. > > I duplicate as follows: > DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,usr_MX,usr_MY,PETSC_DECIDE,PETSC_DECIDE,1,1,NULL,NULL,&da); > user.da = da; > DMCreateGlobalVector(da,&u); > VecDuplicate(u,&r); > VecDuplicate(u,&Data2D); // so my assumption is that data2D is part of da, but I cannot see/set its type anywhere > > The warnings/notes at build time: > >> make sk2d > /home/sanjay/petsc/linux-gnu-c-debug/bin/mpicc -o sk2d.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/home/sanjay/petsc/include -I/home/sanjay/petsc/linux-gnu-c-debug/include `pwd`/sk2d.c > /home/sanjay/petscProgs/Work/twod/sk2d.c: In function ?main?: > /home/sanjay/petscProgs/Work/twod/sk2d.c:228:4: warning: passing argument 3 of ?TSSetIFunction? from incompatible pointer type [enabled by default] > /home/sanjay/petsc/include/petscts.h:261:29: note: expected ?TSIFunction? but argument is of type ?PetscInt (*)(struct _p_TS *, PetscReal, struct _p_Vec *, struct _p_Vec *, struct _p_Vec *, void *, struct _p_Vec *)? > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From Sanjay.Kharche at manchester.ac.uk Mon Feb 9 03:04:58 2015 From: Sanjay.Kharche at manchester.ac.uk (Sanjay Kharche) Date: Mon, 9 Feb 2015 09:04:58 +0000 Subject: [petsc-users] passing information to TSIFunction In-Reply-To: <87k2zrzpgr.fsf@jedbrown.org> References: , <87k2zrzpgr.fsf@jedbrown.org> Message-ID: Hi Thanks for that. I have since resolved my issue for purpose. I will however implement this suggestion by Jed as it seems more efficient and more consistent with the Petsc paradigm. cheers Sanjay ________________________________________ From: Jed Brown [jed at jedbrown.org] Sent: 09 February 2015 04:19 To: Sanjay Kharche; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] passing information to TSIFunction Sanjay Kharche writes: > Hi > > I need to pass a 2D array of ints to user defined functions, especially RHS. Ideally, this 2D array is dynamically created at run time to make my application general. Yesterday's discussion is below. > > I did this in the application context: > > /* User-defined data structures and routines */ > /* AppCtx: used by FormIFunction() */ > typedef struct { > DM da; // DM instance in which u, r are placed. > PetscInt geometry[usr_MY][usr_MX]; // This is static, so the whole thing is visible to everybody who gets the context. This is my working solution as of now. > Vec geom; // I duplicate u for this in calling function. VecDuplicate( u , &user.geom). I cannot pass this to RHS function, I cannot access values in geom in the called function. > PetscInt **geomet; // I calloc this in the calling function. I cannot access the data in RHS function > int **anotherGeom; // just int. > } AppCtx; > > This static geometry 2D array can be seen in all functions that receive the application context. 
This is a working solution to my problem, although not ideal. The ideal solution would be if I can pass and receive something like geom or geomet which are dynamically created after the 2D DA is created in the calling function. Why not create a DMDA (serial and redundant if you prefer) for the geometry, store the geometry in a Vec, and use DMDAVecGetArray() to access it? I like to compose the geometry with the state DM and use DMCoarsenHooks if using rediscretized geometric multigrid (thus keeping the application context resolution-independent), but if you're putting resolution-dependent stuff in the context, you can just add the dmgeom and vector. > I am working through the manual and the examples, but some indication of how to solve this specific issue will be great. > > cheers > Sanjay > > > > > ________________________________ > From: Matthew Knepley [knepley at gmail.com] > Sent: 04 February 2015 21:03 > To: Sanjay Kharche > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] passing information to TSIFunction > > On Wed, Feb 4, 2015 at 2:59 PM, Sanjay Kharche > wrote: > > Hi > > I started with the ex15.c example from ts. Now I would like to pass a 2D int array I call data2d to the FormIFunction which constructs the udot - RHS. FormIFunction is used in Petsc's TSSetIFunction. My data2d is determined at run time in the initialisation on each rank. data2d is the same size as the solution array and the residual array. > > I tried adding a Vec to FormIFunction, but Petsc's TSIFunction ( TSSetIFunction(ts,r,FormIFunction,&user); ) expects a set number & type of arguments to FormIFunction. I tried passing data2d as a regular int pointer as well as a Vec. As a Vec, I tried to access the data2d in a similar way as the solution vector, which caused the serial and parallel execution to produce errors. > > 1) This is auxiliary data which must come in through the context argument. Many many example use a context > > 2) You should read the chapter on DAs in the manual. It describes the data layout. In order for your code to > work in parallel I suggest you use a Vec and cast to int when you need the value. > > Thanks, > > Matt > > Any ideas on how I can get an array of ints to FormIFunction? > > thanks > Sanjay > > > The function declaration: > // petsc functions. > extern PetscInt FormIFunction(TS,PetscReal,Vec,Vec,Vec,void*, Vec); // last Vec is supposed to be my data2D, which is a duplicate of the u. > > I duplicate as follows: > DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,usr_MX,usr_MY,PETSC_DECIDE,PETSC_DECIDE,1,1,NULL,NULL,&da); > user.da = da; > DMCreateGlobalVector(da,&u); > VecDuplicate(u,&r); > VecDuplicate(u,&Data2D); // so my assumption is that data2D is part of da, but I cannot see/set its type anywhere > > The warnings/notes at build time: > >> make sk2d > /home/sanjay/petsc/linux-gnu-c-debug/bin/mpicc -o sk2d.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/home/sanjay/petsc/include -I/home/sanjay/petsc/linux-gnu-c-debug/include `pwd`/sk2d.c > /home/sanjay/petscProgs/Work/twod/sk2d.c: In function ?main?: > /home/sanjay/petscProgs/Work/twod/sk2d.c:228:4: warning: passing argument 3 of ?TSSetIFunction? from incompatible pointer type [enabled by default] > /home/sanjay/petsc/include/petscts.h:261:29: note: expected ?TSIFunction? but argument is of type ?PetscInt (*)(struct _p_TS *, PetscReal, struct _p_Vec *, struct _p_Vec *, struct _p_Vec *, void *, struct _p_Vec *)? 
> > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From ronalcelayavzla at gmail.com Mon Feb 9 03:21:43 2015 From: ronalcelayavzla at gmail.com (Ronal Celaya) Date: Mon, 9 Feb 2015 04:51:43 -0430 Subject: [petsc-users] MatMult inside a for loop In-Reply-To: <87y4o7279i.fsf@jedbrown.org> References: <4C707414-A85A-443E-846B-691599B6A82B@mcs.anl.gov> <87y4o7279i.fsf@jedbrown.org> Message-ID: On Sun, Feb 8, 2015 at 9:09 PM, Jed Brown wrote: > Matthew Knepley writes: > > > On Sun, Feb 8, 2015 at 7:20 PM, Ronal Celaya > > wrote: > > > >> I know that. I want to have all the vector x replicated in all processes > >> and update it in each iteration, so I don't need to communicate the > vector > >> x each time MatMult() is called. > >> I'm not sure I'm making myself clear, sorry > >> > > > > 1) This is not a scalable strategy. > > Also note that you have to communicate many layers of overlap of A to > run CG without neighbor communication. The overhead is especially large > if you have small subdomains (the case where communication latency is > more important than bandwidth). > I need to explain this to my partners. Thank you > > > 2) If you know A and all the updates to x locally, why don't you just > > compute y directly? > This is exactly what I'm doing in C implementation Many thanks for your replies. They are very useful. I am doing some tests with CG, comparing C and PETSc implementations and I need to explain the communication used in MatMult(). Once again, thanks for your help. Best regards, -- Ronal Celaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at barbierdereuille.net Mon Feb 9 03:57:12 2015 From: pierre at barbierdereuille.net (Pierre Barbier de Reuille) Date: Mon, 09 Feb 2015 09:57:12 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module Message-ID: Hello, Looking for methods to ensure negative values are rejected, I found this in the archives: http://lists.mcs.anl.gov/pipermail/petsc-users/2014-June/021978.html The answer gives two options: 1 - Set a function for the step acceptance criteria 2 - Set a domain violation for the function However, I cannot find any information on how to do either things. For (1), I tried to use TSSetPostStep, but I couldn't figure out how to retrieve the current solution (it seems the vector in TSGetSolution is not yet set). For (2), I am not even sure where to start. Thanks, Pierre Barbier de Reuille -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.barbierdereuille at gmail.com Mon Feb 9 04:00:04 2015 From: pierre.barbierdereuille at gmail.com (Pierre Barbier de Reuille) Date: Mon, 09 Feb 2015 10:00:04 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module Message-ID: Hello, Looking for methods to ensure negative values are rejected, I found this in the archives: http://lists.mcs.anl.gov/pipermail/petsc-users/2014-June/021978.html The answer gives two options: 1 - Set a function for the step acceptance criteria 2 - Set a domain violation for the function However, I cannot find any information on how to do either things. For (1), I tried to use TSSetPostStep, but I couldn't figure out how to retrieve the current solution (it seems TSGetSolution returns the last valid solution). For (2), I am not even sure where to start. 
Thanks, Pierre Barbier de Reuille -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabien.raphel at etu.univ-nantes.fr Mon Feb 9 09:47:43 2015 From: fabien.raphel at etu.univ-nantes.fr (Fabien RAPHEL) Date: Mon, 9 Feb 2015 16:47:43 +0100 (CET) Subject: [petsc-users] Full blas-lapack on Windows Message-ID: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> Hello, I can configure, compile and use PETSc on Windows with Visual Studio 2008 (in serial and parallel). But I would like to use a LU factorization in parallel (for example, using Superlu_dist or MUMPS library). I don't have FORTRAN compiler on my machine, so I can't compile the full version of BLAS/LAPACK (with the slamch() routine for example). I found a precompiled version of the full libraries (I can run an sample in VS2008). But I have a PETSc configure error: "--with-blas-lapack-lib..... cannot be used" and the configure.log returns a lot of undefined references whereas I set the libraries and the include file during configuration. Do I forgot something in the command line? Or is the error comes from the library? Or, is there a C/C++ library who can do that? Thanks in advance, Fabien -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log.bkp Type: application/octet-stream Size: 2613361 bytes Desc: not available URL: From yongle.du at gmail.com Mon Feb 9 10:09:30 2015 From: yongle.du at gmail.com (DU Yongle) Date: Mon, 9 Feb 2015 11:09:30 -0500 Subject: [petsc-users] PCMG and DMMG Message-ID: Good morning, everyone: I have an existing general CFD solver with multigrid implemented. All functions (initialization, restriction, prolong/interpolation, coarse/fine grids solver......) are working correctly. Now I am trying to rewrite it with PETSc. I found that the manual provides very little information about this and is difficult to follow. I found another lecture notes by Barry Smith on web, which is: http://www.mcs.anl.gov/petsc/documentation/tutorials/Columbia04/DDandMultigrid.pdf However, it is still not clear the difference and connection between PCMG and DMMG. Some questions are: 1. Should DMMG be used to initialize the coarse grids (and boundary conditions) before PCMG could be used? If not, how does PCMG know all information on coarse grids? 2. Due to the customized boundary conditions, indices of the grids, boundary conditions, grid dimensions on each coarse grid levels are required for particular computations. How to extract these information in either DMMG or PCMG? I have not found a function for this purpose. I have set up all information myself, should I pass these information to the coarse grid levels to the coarse levels? How and again how to extract these information? 3. I have the restriction, interpolation, coarse grid solver ... implemented. How could these be integrated with PETSc functions? It appears that some functions like PCMGGetSmoother/UP/Down, PCMGSetInterpolation .... should be used, but how? The online manual simply repeat the name, and provides no other information. Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Mon Feb 9 10:47:27 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 9 Feb 2015 10:47:27 -0600 Subject: [petsc-users] Full blas-lapack on Windows In-Reply-To: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> References: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> Message-ID: I think we had this conversation before. --download-f2cblaslapack will give you a full blas/lapack. And you can't use MUMPS without a fortran compiler [as far as I know] You should be able to use superlu_dist Satish On Mon, 9 Feb 2015, Fabien RAPHEL wrote: > Hello, > > I can configure, compile and use PETSc on Windows with Visual Studio 2008 > (in serial and parallel). > But I would like to use a LU factorization in parallel (for example, using > Superlu_dist or MUMPS library). > I don't have FORTRAN compiler on my machine, so I can't compile the full > version of BLAS/LAPACK (with the slamch() routine for example). > > I found a precompiled version of the full libraries (I can run an sample > in VS2008). > But I have a PETSc configure error: "--with-blas-lapack-lib..... cannot be > used" and the configure.log returns a lot of undefined references whereas > I set the libraries and the include file during configuration. > Do I forgot something in the command line? Or is the error comes from the > library? > > Or, is there a C/C++ library who can do that? > > Thanks in advance, > > Fabien > From sghosh2012 at gatech.edu Mon Feb 9 10:57:10 2015 From: sghosh2012 at gatech.edu (Ghosh, Swarnava) Date: Mon, 9 Feb 2015 11:57:10 -0500 (EST) Subject: [petsc-users] AllReduce function for mat Message-ID: <1779306439.1711891.1423501030035.JavaMail.root@mail.gatech.edu> Hi, I have sequential matrix on each process. I wanted to do something like MPI_Allreduce to this matrix to add entries from all processes. The resulting matrix is sequential. I wanted to know if there is a PETSc function to do this? -- SG From bsmith at mcs.anl.gov Mon Feb 9 11:09:16 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Feb 2015 11:09:16 -0600 Subject: [petsc-users] PCMG and DMMG In-Reply-To: References: Message-ID: <1215E602-63B5-4856-8261-ED3B7A0C0D45@mcs.anl.gov> Looks like you are looking at a very old PETSc. You should be using version 3.5.3 and nothing earlier. DMMG has been gone from PETSc for a long time. Here is the easiest way to provide the information: Loop over the levels yourself and provide the matrices, function pointers etc. See for example src/ksp/ksp/examples/tests/ex19.c This example only sets up MG for two levels but you can see the pattern from the code. 
It creates the matrix operator for each level and vectors, and sets it for the level, ierr = FormJacobian_Grid(&user,&user.coarse,&user.coarse.J);CHKERRQ(ierr); ierr = FormJacobian_Grid(&user,&user.fine,&user.fine.J);CHKERRQ(ierr); /* Create coarse level */ ierr = PCMGGetCoarseSolve(pc,&user.ksp_coarse);CHKERRQ(ierr); ierr = KSPSetOptionsPrefix(user.ksp_coarse,"coarse_");CHKERRQ(ierr); ierr = KSPSetFromOptions(user.ksp_coarse);CHKERRQ(ierr); ierr = KSPSetOperators(user.ksp_coarse,user.coarse.J,user.coarse.J);CHKERRQ(ierr); ierr = PCMGSetX(pc,COARSE_LEVEL,user.coarse.x);CHKERRQ(ierr); ierr = PCMGSetRhs(pc,COARSE_LEVEL,user.coarse.b);CHKERRQ(ierr); /* Create fine level */ ierr = PCMGGetSmoother(pc,FINE_LEVEL,&ksp_fine);CHKERRQ(ierr); ierr = KSPSetOptionsPrefix(ksp_fine,"fine_");CHKERRQ(ierr); ierr = KSPSetFromOptions(ksp_fine);CHKERRQ(ierr); ierr = KSPSetOperators(ksp_fine,user.fine.J,user.fine.J);CHKERRQ(ierr); ierr = PCMGSetR(pc,FINE_LEVEL,user.fine.r);CHKERRQ(ierr); and it creates the interpolation and sets it /* Create interpolation between the levels */ ierr = DMCreateInterpolation(user.coarse.da,user.fine.da,&user.Ii,NULL);CHKERRQ(ierr); ierr = PCMGSetInterpolation(pc,FINE_LEVEL,user.Ii);CHKERRQ(ierr); ierr = PCMGSetRestriction(pc,FINE_LEVEL,user.Ii);CHKERRQ(ierr); Note that PETSc by default uses the transpose of the interpolation for the restriction so even though it looks strange to set the same operator for both PETSc automatically uses the transpose when needed. Barry > On Feb 9, 2015, at 10:09 AM, DU Yongle wrote: > > Good morning, everyone: > > I have an existing general CFD solver with multigrid implemented. All functions (initialization, restriction, prolong/interpolation, coarse/fine grids solver......) are working correctly. Now I am trying to rewrite it with PETSc. I found that the manual provides very little information about this and is difficult to follow. I found another lecture notes by Barry Smith on web, which is: > http://www.mcs.anl.gov/petsc/documentation/tutorials/Columbia04/DDandMultigrid.pdf > > However, it is still not clear the difference and connection between PCMG and DMMG. Some questions are: > > 1. Should DMMG be used to initialize the coarse grids (and boundary conditions) before PCMG could be used? If not, how does PCMG know all information on coarse grids? > > 2. Due to the customized boundary conditions, indices of the grids, boundary conditions, grid dimensions on each coarse grid levels are required for particular computations. How to extract these information in either DMMG or PCMG? I have not found a function for this purpose. I have set up all information myself, should I pass these information to the coarse grid levels to the coarse levels? How and again how to extract these information? > > 3. I have the restriction, interpolation, coarse grid solver ... implemented. How could these be integrated with PETSc functions? It appears that some functions like PCMGGetSmoother/UP/Down, PCMGSetInterpolation .... should be used, but how? The online manual simply repeat the name, and provides no other information. > > Thanks a lot. 
> > > > From bsmith at mcs.anl.gov Mon Feb 9 11:11:07 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Feb 2015 11:11:07 -0600 Subject: [petsc-users] AllReduce function for mat In-Reply-To: <1779306439.1711891.1423501030035.JavaMail.root@mail.gatech.edu> References: <1779306439.1711891.1423501030035.JavaMail.root@mail.gatech.edu> Message-ID: <731B0281-2B27-468A-AF1F-C51F6C828079@mcs.anl.gov> > On Feb 9, 2015, at 10:57 AM, Ghosh, Swarnava wrote: > > Hi, > > I have sequential matrix on each process. I wanted to do something like MPI_Allreduce to this matrix to add entries from all processes. The resulting matrix is sequential. Is the matrix dense or sparse? If sparse is each process providing a different set of nonzero values? Or are all processes providing ALL nonzeros in the matrix (to be added together). > I wanted to know if there is a PETSc function to do this? > > -- > SG > From sghosh2012 at gatech.edu Mon Feb 9 11:12:45 2015 From: sghosh2012 at gatech.edu (Ghosh, Swarnava) Date: Mon, 9 Feb 2015 12:12:45 -0500 (EST) Subject: [petsc-users] AllReduce function for mat In-Reply-To: <731B0281-2B27-468A-AF1F-C51F6C828079@mcs.anl.gov> Message-ID: <26945593.1720177.1423501965272.JavaMail.root@mail.gatech.edu> The matrix is dense and each process has the same number of nonzeros. ----- Original Message ----- From: "Barry Smith" To: "Swarnava Ghosh" Cc: "PETSc users list" Sent: Monday, February 9, 2015 12:11:07 PM Subject: Re: [petsc-users] AllReduce function for mat > On Feb 9, 2015, at 10:57 AM, Ghosh, Swarnava wrote: > > Hi, > > I have sequential matrix on each process. I wanted to do something like MPI_Allreduce to this matrix to add entries from all processes. The resulting matrix is sequential. Is the matrix dense or sparse? If sparse is each process providing a different set of nonzero values? Or are all processes providing ALL nonzeros in the matrix (to be added together). > I wanted to know if there is a PETSc function to do this? > > -- > SG > -- From bsmith at mcs.anl.gov Mon Feb 9 11:22:20 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Feb 2015 11:22:20 -0600 Subject: [petsc-users] AllReduce function for mat In-Reply-To: <26945593.1720177.1423501965272.JavaMail.root@mail.gatech.edu> References: <26945593.1720177.1423501965272.JavaMail.root@mail.gatech.edu> Message-ID: Use MatDenseGetArray() and call MPI_Allreduce() on the resulting array point as the output buffer. Past MPI_IN_PLACE as the input buffer. Barry > On Feb 9, 2015, at 11:12 AM, Ghosh, Swarnava wrote: > > The matrix is dense and each process has the same number of nonzeros. > > ----- Original Message ----- > From: "Barry Smith" > To: "Swarnava Ghosh" > Cc: "PETSc users list" > Sent: Monday, February 9, 2015 12:11:07 PM > Subject: Re: [petsc-users] AllReduce function for mat > > >> On Feb 9, 2015, at 10:57 AM, Ghosh, Swarnava wrote: >> >> Hi, >> >> I have sequential matrix on each process. I wanted to do something like MPI_Allreduce to this matrix to add entries from all processes. The resulting matrix is sequential. > > Is the matrix dense or sparse? > If sparse is each process providing a different set of nonzero values? Or are all processes providing ALL nonzeros in the matrix (to be added together). > >> I wanted to know if there is a PETSc function to do this? 
>> >> -- >> SG >> > > > -- From lawrence.mitchell at imperial.ac.uk Mon Feb 9 12:22:01 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Mon, 9 Feb 2015 18:22:01 +0000 Subject: [petsc-users] Issue with window SF type using derived datatypes for reduction Message-ID: <59C95D6C-FD8F-4986-B42E-1305F6198CF5@imperial.ac.uk> Hi all, I'm trying to use an SFReduce to update some data using a contiguous derived type: sketch: MPI_Type_contiguous(bs, MPI_DOUBLE, &dtype); MPI_Type_commit(&dtype); PetscSFReduceBegin(sf, dtype, rootdata, leafdata, MPI_SUM); PetscSFReduceEnd(sf, dtype, rootdata, leafdata, MPI_SUM); All is well if the block size (bs) is 1. However, for larger block sizes I get unexpected behaviour (the reduction appears not to SUM, but rather overwrite the local data). This is using -sf_type window, with OpenMPI 1.8.3. -sf_type basic works fine. The attached code illustrates the problem. Run with: $ mpiexec -n 2 ./petsc_sftest -data_bs 2 -sf_type basic PetscSF Object: 2 MPI processes type: basic sort=rank-order [0] Number of roots=3, leaves=8, remote ranks=2 [0] 0 <- (0,0) [0] 1 <- (0,1) [0] 2 <- (0,2) [0] 3 <- (1,1) [0] 7 <- (1,5) [0] 5 <- (1,3) [0] 4 <- (1,2) [0] 6 <- (1,4) [1] Number of roots=6, leaves=8, remote ranks=2 [1] 1 <- (1,1) [1] 5 <- (1,5) [1] 0 <- (1,0) [1] 3 <- (1,3) [1] 2 <- (1,2) [1] 4 <- (1,4) [1] 6 <- (0,1) [1] 7 <- (0,2) Vec Object: 2 MPI processes type: mpi Process [0] 10 10 10 10 10 10 0 0 0 0 0 0 0 0 0 0 Process [1] 12 12 12 12 12 12 12 12 12 12 12 12 0 0 0 0 Vec Object: 2 MPI processes type: mpi Process [0] 10 10 10 10 10 10 Process [1] 12 12 12 12 12 12 12 12 12 12 12 12 With the window type: $ mpiexec -n 2 ./petsc_sftest -data_bs 2 -sf_type window PetscSF Object: 2 MPI processes type: window synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=8, remote ranks=2 [0] 0 <- (0,0) [0] 1 <- (0,1) [0] 2 <- (0,2) [0] 3 <- (1,1) [0] 7 <- (1,5) [0] 5 <- (1,3) [0] 4 <- (1,2) [0] 6 <- (1,4) [1] Number of roots=6, leaves=8, remote ranks=2 [1] 1 <- (1,1) [1] 5 <- (1,5) [1] 0 <- (1,0) [1] 3 <- (1,3) [1] 2 <- (1,2) [1] 4 <- (1,4) [1] 6 <- (0,1) [1] 7 <- (0,2) Vec Object: 2 MPI processes type: mpi Process [0] 10 10 10 10 10 10 0 0 0 0 0 0 0 0 0 0 Process [1] 12 12 12 12 12 12 12 12 12 12 12 12 0 0 0 0 Vec Object: 2 MPI processes type: mpi Process [0] 10 10 0 0 0 0 Process [1] 12 12 12 12 12 12 12 12 12 12 12 12 Note how Vec B on process 0 has entries 10, 10, 0, 0, 0, 0 (rather than all 10). Having just tried a build with --download-mpich, I notice this problem does not occur. So should I shout at the OpenMPI team? 
Cheers, Lawrence static const char help[] = "Test overlapped communication on a single star forest (PetscSF)\n\n"; #include #include #include #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc, char **argv) { PetscInt ierr; PetscSF sf; Vec A; Vec B; double *bufA; double *bufB; MPI_Comm c; PetscMPIInt rank, size; PetscInt nroots, nleaves; PetscInt i; PetscInt *ilocal; PetscSFNode *iremote; MPI_Datatype dtype; PetscInt bs = 1; PetscInitialize(&argc,&argv,NULL,help); ierr = PetscOptionsGetInt(NULL, "-data_bs", &bs, NULL);CHKERRQ(ierr); c = PETSC_COMM_WORLD; ierr = MPI_Comm_rank(c,&rank);CHKERRQ(ierr); ierr = MPI_Comm_size(c,&size);CHKERRQ(ierr); if (size != 2) { SETERRQ(c, PETSC_ERR_USER, "Only coded for two MPI processes\n"); } ierr = PetscSFCreate(c,&sf);CHKERRQ(ierr); ierr = PetscSFSetFromOptions(sf);CHKERRQ(ierr); nleaves = 8; nroots = 3 * (rank + 1); ierr = PetscMalloc1(nleaves,&ilocal);CHKERRQ(ierr); ierr = PetscMalloc1(nleaves,&iremote);CHKERRQ(ierr); if ( rank == 0 ) { ilocal[0] = 0; ilocal[1] = 1; ilocal[2] = 2; ilocal[3] = 3; ilocal[4] = 7; ilocal[5] = 5; ilocal[6] = 4; ilocal[7] = 6; iremote[0].rank = 0; iremote[0].index = 0; iremote[1].rank = 0; iremote[1].index = 1; iremote[2].rank = 0; iremote[2].index = 2; iremote[3].rank = 1; iremote[3].index = 1; iremote[4].rank = 1; iremote[4].index = 5; iremote[5].rank = 1; iremote[5].index = 3; iremote[6].rank = 1; iremote[6].index = 2; iremote[7].rank = 1; iremote[7].index = 4; } else { ilocal[0] = 1; ilocal[1] = 5; ilocal[2] = 0; ilocal[3] = 3; ilocal[4] = 2; ilocal[5] = 4; ilocal[6] = 6; ilocal[7] = 7; iremote[0].rank = 1; iremote[0].index = 1; iremote[1].rank = 1; iremote[1].index = 5; iremote[2].rank = 1; iremote[2].index = 0; iremote[3].rank = 1; iremote[3].index = 3; iremote[4].rank = 1; iremote[4].index = 2; iremote[5].rank = 1; iremote[5].index = 4; iremote[6].rank = 0; iremote[6].index = 1; iremote[7].rank = 0; iremote[7].index = 2; } ierr = PetscSFSetGraph(sf,nroots,nleaves,ilocal,PETSC_OWN_POINTER, iremote,PETSC_OWN_POINTER);CHKERRQ(ierr); ierr = PetscSFSetUp(sf);CHKERRQ(ierr); ierr = PetscSFView(sf,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); ierr = VecCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); ierr = VecSetSizes(A,nleaves*bs,PETSC_DETERMINE);CHKERRQ(ierr); ierr = VecSetFromOptions(A);CHKERRQ(ierr); ierr = VecSetUp(A);CHKERRQ(ierr); ierr = VecCreate(PETSC_COMM_WORLD,&B);CHKERRQ(ierr); ierr = VecSetSizes(B,nroots*bs,PETSC_DETERMINE);CHKERRQ(ierr); ierr = VecSetFromOptions(B);CHKERRQ(ierr); ierr = VecSetUp(B);CHKERRQ(ierr); ierr = VecGetArray(A,&bufA);CHKERRQ(ierr); for (i=0; i < nroots*bs; i++) { bufA[i] = 10.0 + 2*rank; } ierr = VecRestoreArray(A,&bufA);CHKERRQ(ierr); ierr = VecGetArray(A,&bufA);CHKERRQ(ierr); ierr = VecGetArray(B,&bufB);CHKERRQ(ierr); ierr = MPI_Type_contiguous(bs, MPI_DOUBLE, &dtype); CHKERRQ(ierr); ierr = MPI_Type_commit(&dtype); CHKERRQ(ierr); ierr = PetscSFReduceBegin(sf,dtype,(const void*)bufA,(void *)bufB, MPI_SUM);CHKERRQ(ierr); ierr = PetscSFReduceEnd(sf,dtype,(const void*)bufA,(void *)bufB, MPI_SUM);CHKERRQ(ierr); ierr = VecRestoreArray(A,&bufA);CHKERRQ(ierr); ierr = VecRestoreArray(B,&bufB);CHKERRQ(ierr); ierr = VecView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); ierr = VecView(B,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); ierr = VecDestroy(&A);CHKERRQ(ierr); ierr = VecDestroy(&B);CHKERRQ(ierr); ierr = PetscSFDestroy(&sf);CHKERRQ(ierr); ierr = MPI_Type_free(&dtype); CHKERRQ(ierr); PetscFinalize(); return 0; } -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From zonexo at gmail.com Mon Feb 9 17:30:16 2015 From: zonexo at gmail.com (Wee Beng Tay) Date: Tue, 10 Feb 2015 07:30:16 +0800 Subject: [petsc-users] Installing petsc on Linux with Intel mpi Message-ID: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Feb 9 17:42:58 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 9 Feb 2015 17:42:58 -0600 Subject: [petsc-users] Installing petsc on Linux with Intel mpi In-Reply-To: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> References: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> Message-ID: Which version of IMPI do you have? Make sure you have the following (or newer?) - as it has the required fixes. https://software.intel.com/en-us/articles/intel-mpi-library-50-update-2-readme Satish On Mon, 9 Feb 2015, Wee Beng Tay wrote: > > Hi, > > I'm trying to install petsc on Linux with Intel mpi 5 and compiler. What should be the configure > Command line to use? Anyone has experience? > > I tried the usual options but they all can't work. > > Thanks > > Sent using CloudMagic > > > From zonexo at gmail.com Mon Feb 9 18:54:25 2015 From: zonexo at gmail.com (Wee Beng Tay) Date: Tue, 10 Feb 2015 08:54:25 +0800 Subject: [petsc-users] Installing petsc on Linux with Intel mpi In-Reply-To: References: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> Message-ID: <0493a10c5587c3c06f6503d8ab7367@ip-10-0-3-214> An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Feb 9 19:18:25 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Feb 2015 19:18:25 -0600 Subject: [petsc-users] Installing petsc on Linux with Intel mpi In-Reply-To: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> References: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> Message-ID: <9D07A9B8-C6A6-42D9-9D81-55FB1156FDD5@mcs.anl.gov> You need always to send configure.log when a configure fails. Barry > On Feb 9, 2015, at 5:30 PM, Wee Beng Tay wrote: > > Hi, > > I'm trying to install petsc on Linux with Intel mpi 5 and compiler. What should be the configure > Command line to use? Anyone has experience? > > I tried the usual options but they all can't work. > > Thanks > > Sent using CloudMagic > From jed at jedbrown.org Mon Feb 9 19:31:27 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 09 Feb 2015 18:31:27 -0700 Subject: [petsc-users] Issue with window SF type using derived datatypes for reduction In-Reply-To: <59C95D6C-FD8F-4986-B42E-1305F6198CF5@imperial.ac.uk> References: <59C95D6C-FD8F-4986-B42E-1305F6198CF5@imperial.ac.uk> Message-ID: <871tlyzh5c.fsf@jedbrown.org> Lawrence Mitchell writes: > Having just tried a build with --download-mpich, I notice this problem > does not occur. So should I shout at the OpenMPI team? Open MPI has many long-standing bugs with one-sided and datatypes. I have pleaded with them to error instead of corrupting memory or returning wrong results for unsupported cases. My recommendation is to not use -sf_type window with Open MPI. I hear that they have newfound interest in fixing the decade-old one-sided bugs, so I would say it is worth reporting this issue. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From jed at jedbrown.org Mon Feb 9 19:46:39 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 09 Feb 2015 18:46:39 -0700 Subject: [petsc-users] Installing petsc on Linux with Intel mpi In-Reply-To: <0493a10c5587c3c06f6503d8ab7367@ip-10-0-3-214> References: <4626f9a0fbb05f6506c8e0af6c3158@ip-10-0-3-70> <0493a10c5587c3c06f6503d8ab7367@ip-10-0-3-214> Message-ID: <87y4o6y1vk.fsf@jedbrown.org> Wee Beng Tay writes: > Hi, > > I'm using the latest version 5 update 1,just installed 1 wk ago. Why are you installing such an old version now? Satish wasn't joking around when he said you need at least Update 2. Earlier versions return 0 for failure making it impossible to test for features. > Compiling of code uses mpiicc, mpiifort. Compiling a hello mpi code > works. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From jed at jedbrown.org Mon Feb 9 20:02:57 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 09 Feb 2015 19:02:57 -0700 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module In-Reply-To: References: Message-ID: <87sieey14e.fsf@jedbrown.org> Pierre Barbier de Reuille writes: > Hello, > > Looking for methods to ensure negative values are rejected, I found this in > the archives: > > http://lists.mcs.anl.gov/pipermail/petsc-users/2014-June/021978.html > > The answer gives two options: > 1 - Set a function for the step acceptance criteria > 2 - Set a domain violation for the function > > However, I cannot find any information on how to do either things. > > For (1), I tried to use TSSetPostStep, but I couldn't figure out how to > retrieve the current solution (it seems TSGetSolution returns the last > valid solution). I would use TSAdaptSetCheckStage, but it also doesn't give you access to the stage solution, except via TSGetSNES and SNESGetSolution (which should work, but I think I should update the interface to pass in the stage solution). > For (2), I am not even sure where to start. TSGetSNES and SNESSetFunctionDomainError. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From lawrence.mitchell at imperial.ac.uk Tue Feb 10 03:05:23 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Tue, 10 Feb 2015 09:05:23 +0000 Subject: [petsc-users] Issue with window SF type using derived datatypes for reduction In-Reply-To: <871tlyzh5c.fsf@jedbrown.org> References: <59C95D6C-FD8F-4986-B42E-1305F6198CF5@imperial.ac.uk> <871tlyzh5c.fsf@jedbrown.org> Message-ID: <6EF84A8C-A313-4289-A5C3-99DD1429773D@imperial.ac.uk> On 10 Feb 2015, at 01:31, Jed Brown wrote: > Lawrence Mitchell writes: >> Having just tried a build with --download-mpich, I notice this problem >> does not occur. So should I shout at the OpenMPI team? > > Open MPI has many long-standing bugs with one-sided and datatypes. I > have pleaded with them to error instead of corrupting memory or > returning wrong results for unsupported cases. My recommendation is to > not use -sf_type window with Open MPI. > > I hear that they have newfound interest in fixing the decade-old > one-sided bugs, so I would say it is worth reporting this issue. Thanks, I'll have a go. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From fabien.raphel at etu.univ-nantes.fr Tue Feb 10 08:27:07 2015 From: fabien.raphel at etu.univ-nantes.fr (Fabien RAPHEL) Date: Tue, 10 Feb 2015 15:27:07 +0100 (CET) Subject: [petsc-users] Full blas-lapack on Windows In-Reply-To: References: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> Message-ID: <2d963c2a05cb726cec2f884b8dfecbd1.squirrel@webmail-etu.univ-nantes.fr> Thanks, I had an error when I used the --download-f2cblaslapack command but now it works. The configuration and compilation work well with the superlu library, but not with superlu_dist. I don't think it's a version compatibility problem. I have some errors with the pdgstrf.c file during the configuration. Have I to change the version of the library? (I tried with the SuperLU_DIST_2.5 version, but I still have the same error). Thanks, Fabien > I think we had this conversation before. > > --download-f2cblaslapack will give you a full blas/lapack. > > And you can't use MUMPS without a fortran compiler [as far as I know] > > You should be able to use superlu_dist > > Satish > > On Mon, 9 Feb 2015, Fabien RAPHEL wrote: > >> Hello, >> >> I can configure, compile and use PETSc on Windows with Visual Studio >> 2008 >> (in serial and parallel). >> But I would like to use a LU factorization in parallel (for example, >> using >> Superlu_dist or MUMPS library). >> I don't have FORTRAN compiler on my machine, so I can't compile the full >> version of BLAS/LAPACK (with the slamch() routine for example). >> >> I found a precompiled version of the full libraries (I can run an sample >> in VS2008). >> But I have a PETSc configure error: "--with-blas-lapack-lib..... cannot >> be >> used" and the configure.log returns a lot of undefined references >> whereas >> I set the libraries and the include file during configuration. >> Do I forgot something in the command line? Or is the error comes from >> the >> library? >> >> Or, is there a C/C++ library who can do that? >> >> Thanks in advance, >> >> Fabien >> > > -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 4324233 bytes Desc: not available URL: From fabien.raphel at etu.univ-nantes.fr Tue Feb 10 08:44:47 2015 From: fabien.raphel at etu.univ-nantes.fr (Fabien RAPHEL) Date: Tue, 10 Feb 2015 15:44:47 +0100 (CET) Subject: [petsc-users] Full blas-lapack on Windows In-Reply-To: References: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> Message-ID: <7345ed3d8c906b5948e32b83ace2fba3.squirrel@webmail-etu.univ-nantes.fr> Thanks, I had an error when I used the --download-f2cblaslapack command but now it works. The configuration and compilation work well with the superlu library, but not with superlu_dist. I don't think it's a version compatibility problem. I have some errors with the pdgstrf.c file during the configuration. Have I to change the version of the library? (I tried with the SuperLU_DIST_2.5 version, but I still have the same error). Thanks, Fabien > I think we had this conversation before. > > --download-f2cblaslapack will give you a full blas/lapack. 
> > And you can't use MUMPS without a fortran compiler [as far as I know] > > You should be able to use superlu_dist > > Satish > > On Mon, 9 Feb 2015, Fabien RAPHEL wrote: > >> Hello, >> >> I can configure, compile and use PETSc on Windows with Visual Studio >> 2008 >> (in serial and parallel). >> But I would like to use a LU factorization in parallel (for example, >> using >> Superlu_dist or MUMPS library). >> I don't have FORTRAN compiler on my machine, so I can't compile the full >> version of BLAS/LAPACK (with the slamch() routine for example). >> >> I found a precompiled version of the full libraries (I can run an sample >> in VS2008). >> But I have a PETSc configure error: "--with-blas-lapack-lib..... cannot >> be >> used" and the configure.log returns a lot of undefined references >> whereas >> I set the libraries and the include file during configuration. >> Do I forgot something in the command line? Or is the error comes from >> the >> library? >> >> Or, is there a C/C++ library who can do that? >> >> Thanks in advance, >> >> Fabien >> > > -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 4324233 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue Feb 10 10:34:30 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 10 Feb 2015 10:34:30 -0600 Subject: [petsc-users] Full blas-lapack on Windows In-Reply-To: <7345ed3d8c906b5948e32b83ace2fba3.squirrel@webmail-etu.univ-nantes.fr> References: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> <7345ed3d8c906b5948e32b83ace2fba3.squirrel@webmail-etu.univ-nantes.fr> Message-ID: <64BB4425-DB33-4FA3-9744-6AE34A94D301@mcs.anl.gov> The reason it (superlu_dist) doesn't build is that it uses C99 compiler features while Microsoft C compiler only supports C89. The line of code int nlsupers = nsupers/Pc; is not valid c89 since it declares a new variable after other code. You can try to fix all the C99 uses in that file (by moving the declarations of all the variables to the top of the routine) and run the configure again. I don't know how much Sheri used C99 so it may mean changing many things. Barry > On Feb 10, 2015, at 8:44 AM, Fabien RAPHEL wrote: > > Thanks, > I had an error when I used the --download-f2cblaslapack command but now it > works. > The configuration and compilation work well with the superlu library, but > not with superlu_dist. > I don't think it's a version compatibility problem. > > I have some errors with the pdgstrf.c file during the configuration. > Have I to change the version of the library? (I tried with the > SuperLU_DIST_2.5 version, but I still have the same error). > > > Thanks, > > Fabien > > > > >> I think we had this conversation before. >> >> --download-f2cblaslapack will give you a full blas/lapack. >> >> And you can't use MUMPS without a fortran compiler [as far as I know] >> >> You should be able to use superlu_dist >> >> Satish >> >> On Mon, 9 Feb 2015, Fabien RAPHEL wrote: >> >>> Hello, >>> >>> I can configure, compile and use PETSc on Windows with Visual Studio >>> 2008 >>> (in serial and parallel). >>> But I would like to use a LU factorization in parallel (for example, >>> using >>> Superlu_dist or MUMPS library). >>> I don't have FORTRAN compiler on my machine, so I can't compile the full >>> version of BLAS/LAPACK (with the slamch() routine for example). >>> >>> I found a precompiled version of the full libraries (I can run an sample >>> in VS2008). 
>>> But I have a PETSc configure error: "--with-blas-lapack-lib..... cannot >>> be >>> used" and the configure.log returns a lot of undefined references >>> whereas >>> I set the libraries and the include file during configuration. >>> Do I forgot something in the command line? Or is the error comes from >>> the >>> library? >>> >>> Or, is there a C/C++ library who can do that? >>> >>> Thanks in advance, >>> >>> Fabien >>> >> >> > From balay at mcs.anl.gov Tue Feb 10 10:46:01 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Feb 2015 10:46:01 -0600 Subject: [petsc-users] Full blas-lapack on Windows In-Reply-To: <64BB4425-DB33-4FA3-9744-6AE34A94D301@mcs.anl.gov> References: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> <7345ed3d8c906b5948e32b83ace2fba3.squirrel@webmail-etu.univ-nantes.fr> <64BB4425-DB33-4FA3-9744-6AE34A94D301@mcs.anl.gov> Message-ID: suggest sticking with http://crd-legacy.lbl.gov/~xiaoye/SuperLU/superlu_dist_3.3.tar.gz [for petsc-3.5] Satish On Tue, 10 Feb 2015, Barry Smith wrote: > > The reason it (superlu_dist) doesn't build is that it uses C99 compiler features while Microsoft C compiler only supports C89. The line of code > > int nlsupers = nsupers/Pc; > > is not valid c89 since it declares a new variable after other code. > > You can try to fix all the C99 uses in that file (by moving the declarations of all the variables to the top of the routine) and run the configure again. I don't know how much Sheri used C99 so it may mean changing many things. > > Barry > > > > > On Feb 10, 2015, at 8:44 AM, Fabien RAPHEL wrote: > > > > Thanks, > > I had an error when I used the --download-f2cblaslapack command but now it > > works. > > The configuration and compilation work well with the superlu library, but > > not with superlu_dist. > > I don't think it's a version compatibility problem. > > > > I have some errors with the pdgstrf.c file during the configuration. > > Have I to change the version of the library? (I tried with the > > SuperLU_DIST_2.5 version, but I still have the same error). > > > > > > Thanks, > > > > Fabien > > > > > > > > > >> I think we had this conversation before. > >> > >> --download-f2cblaslapack will give you a full blas/lapack. > >> > >> And you can't use MUMPS without a fortran compiler [as far as I know] > >> > >> You should be able to use superlu_dist > >> > >> Satish > >> > >> On Mon, 9 Feb 2015, Fabien RAPHEL wrote: > >> > >>> Hello, > >>> > >>> I can configure, compile and use PETSc on Windows with Visual Studio > >>> 2008 > >>> (in serial and parallel). > >>> But I would like to use a LU factorization in parallel (for example, > >>> using > >>> Superlu_dist or MUMPS library). > >>> I don't have FORTRAN compiler on my machine, so I can't compile the full > >>> version of BLAS/LAPACK (with the slamch() routine for example). > >>> > >>> I found a precompiled version of the full libraries (I can run an sample > >>> in VS2008). > >>> But I have a PETSc configure error: "--with-blas-lapack-lib..... cannot > >>> be > >>> used" and the configure.log returns a lot of undefined references > >>> whereas > >>> I set the libraries and the include file during configuration. > >>> Do I forgot something in the command line? Or is the error comes from > >>> the > >>> library? > >>> > >>> Or, is there a C/C++ library who can do that? 
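For clarity, the C89-compatible form of the construct Barry points out simply hoists the declaration to the top of the block; the surrounding function below is illustrative only and is not taken from SuperLU_DIST.

    /* C99 style (rejected by the Microsoft C compiler):
     *     do_other_work();
     *     int nlsupers = nsupers / Pc;    -- declaration after a statement
     */

    /* C89-compatible rewrite: declare first, assign later */
    void example(int nsupers, int Pc)
    {
        int nlsupers;                 /* declaration moved to the top of the block */

        /* ... other statements ... */
        nlsupers = nsupers / Pc;
        (void)nlsupers;               /* placeholder use */
    }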
> >>> > >>> Thanks in advance, > >>> > >>> Fabien > >>> > >> > >> > > > > From pierre.barbierdereuille at gmail.com Tue Feb 10 13:34:34 2015 From: pierre.barbierdereuille at gmail.com (Pierre Barbier de Reuille) Date: Tue, 10 Feb 2015 19:34:34 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module References: <87sieey14e.fsf@jedbrown.org> Message-ID: Ok, it seems if I set the domain error from the rhs function, it will indeed fail and backtrack. I hope that is what was intended? I tried before to set it in the PostStep function, but I couldn't get the current solution from there, SNESGetSolution returns an empty vector. Cheers, Pierre On Tue Feb 10 2015 at 03:03:06 Jed Brown wrote: > Pierre Barbier de Reuille writes: > > > Hello, > > > > Looking for methods to ensure negative values are rejected, I found this > in > > the archives: > > > > http://lists.mcs.anl.gov/pipermail/petsc-users/2014-June/021978.html > > > > The answer gives two options: > > 1 - Set a function for the step acceptance criteria > > 2 - Set a domain violation for the function > > > > However, I cannot find any information on how to do either things. > > > > For (1), I tried to use TSSetPostStep, but I couldn't figure out how to > > retrieve the current solution (it seems TSGetSolution returns the last > > valid solution). > > I would use TSAdaptSetCheckStage, but it also doesn't give you access to > the stage solution, except via TSGetSNES and SNESGetSolution (which > should work, but I think I should update the interface to pass in the > stage solution). > > > For (2), I am not even sure where to start. > > TSGetSNES and SNESSetFunctionDomainError. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at spott.us Tue Feb 10 15:46:39 2015 From: andrew at spott.us (Andrew Spott) Date: Tue, 10 Feb 2015 13:46:39 -0800 (PST) Subject: [petsc-users] SLEPc: left eigenvectors? Message-ID: <1423604798945.4b86bb21@Nodemailer> A quick google search shows some work at calculating the left and right eigenvalues simultaneously back in 2005, however not much sooner has popped up. ?Is this possible yet? ?Where can I find more information? Thanks -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Feb 10 16:14:45 2015 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 10 Feb 2015 23:14:45 +0100 Subject: [petsc-users] SLEPc: left eigenvectors? In-Reply-To: <1423604798945.4b86bb21@Nodemailer> References: <1423604798945.4b86bb21@Nodemailer> Message-ID: <47EF040A-0425-441B-A016-16B61521F0DE@dsic.upv.es> El 10/02/2015, a las 22:46, Andrew Spott escribi?: > A quick google search shows some work at calculating the left and right eigenvalues simultaneously back in 2005, however not much sooner has popped up. Is this possible yet? Where can I find more information? > > Thanks > -Andrew > It is not possible. That functionality was removed a lot of time ago, since no solver provided support for it. Suggest building A-transpose and calling EPSSolve() a second time for the left eigenvectors. Jose From andrew at spott.us Tue Feb 10 16:15:45 2015 From: andrew at spott.us (Andrew Spott) Date: Tue, 10 Feb 2015 14:15:45 -0800 (PST) Subject: [petsc-users] SLEPc: left eigenvectors? In-Reply-To: <47EF040A-0425-441B-A016-16B61521F0DE@dsic.upv.es> References: <47EF040A-0425-441B-A016-16B61521F0DE@dsic.upv.es> Message-ID: <1423606544436.029107de@Nodemailer> Thanks. 
?I figured as much and just wanted to confirm it. -Andrew On Tue, Feb 10, 2015 at 3:14 PM, Jose E. Roman wrote: > El 10/02/2015, a las 22:46, Andrew Spott escribi?: >> A quick google search shows some work at calculating the left and right eigenvalues simultaneously back in 2005, however not much sooner has popped up. Is this possible yet? Where can I find more information? >> >> Thanks >> -Andrew >> > It is not possible. That functionality was removed a lot of time ago, since no solver provided support for it. Suggest building A-transpose and calling EPSSolve() a second time for the left eigenvectors. > Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Feb 10 19:45:16 2015 From: jed at jedbrown.org (Jed Brown) Date: Tue, 10 Feb 2015 18:45:16 -0700 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module In-Reply-To: References: <87sieey14e.fsf@jedbrown.org> Message-ID: <87oap1w79v.fsf@jedbrown.org> Pierre Barbier de Reuille writes: > Ok, it seems if I set the domain error from the rhs function, it will > indeed fail and backtrack. I hope that is what was intended? Yes, the PostStep (or PostStage) callbacks are not intended for this. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From jed at jedbrown.org Wed Feb 11 00:35:49 2015 From: jed at jedbrown.org (Jed Brown) Date: Tue, 10 Feb 2015 23:35:49 -0700 Subject: [petsc-users] Full blas-lapack on Windows In-Reply-To: <64BB4425-DB33-4FA3-9744-6AE34A94D301@mcs.anl.gov> References: <358a17ac8339c9239e144f06a60560fd.squirrel@webmail-etu.univ-nantes.fr> <7345ed3d8c906b5948e32b83ace2fba3.squirrel@webmail-etu.univ-nantes.fr> <64BB4425-DB33-4FA3-9744-6AE34A94D301@mcs.anl.gov> Message-ID: <87d25hvttm.fsf@jedbrown.org> Barry Smith writes: > The reason it (superlu_dist) doesn't build is that it uses C99 compiler features while Microsoft C compiler only supports C89. The line of code > > int nlsupers = nsupers/Pc; > > is not valid c89 since it declares a new variable after other code. Or upgrade to MSVC 2013 (MS C Version 18 or later), which supports a few C99 features. https://msdn.microsoft.com/en-us/library/hh409293.aspx -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From pierre.barbierdereuille at gmail.com Wed Feb 11 10:10:22 2015 From: pierre.barbierdereuille at gmail.com (Pierre Barbier de Reuille) Date: Wed, 11 Feb 2015 16:10:22 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module Message-ID: Ok, I made progress. But: 1 - whatever I do, I have very slightly negative values, and therefore all my steps get rejected (values like 1e-16) 2 - As I expected, SNES is only used with implicit methods. So if I use explicit Runge-Kutta, then there is no solution vector stored by the SNES object. Reading the code for the Runge-Kutta solver, it seems that TSPostStage is where I can retrieve the current state, and TSAdaptCheckStage where I can reject it. But is this something I can rely on? Thanks, Pierre On Wed Feb 11 2015 at 02:45:26 Jed Brown wrote: > Pierre Barbier de Reuille writes: > > > Ok, it seems if I set the domain error from the rhs function, it will > > indeed fail and backtrack. I hope that is what was intended? 
> > Yes, the PostStep (or PostStage) callbacks are not intended for this. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanglinye at gmail.com Wed Feb 11 10:52:12 2015 From: hanglinye at gmail.com (Hanglin Ye) Date: Wed, 11 Feb 2015 11:52:12 -0500 Subject: [petsc-users] Domain Decomposition Method for Parallel FEM Code Message-ID: Dear all, I am new to PETSc and I want to use it to parallel my current serial FEM code. I want to use Domain Decomposition Method so that the whole FEM domain is partitioned into sub-domains and computations are performed in sub-domains then assemble together. Is there a way to use PETSc to realize that ? I've been searching through tutorials but none seems to be clear about this aspect. I mainly wish to know what solver of PETSc do I need. Thank you very much. -- Hanglin Ye Ph.D.Student MANE, RPI -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 11 11:01:54 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Feb 2015 11:01:54 -0600 Subject: [petsc-users] Domain Decomposition Method for Parallel FEM Code In-Reply-To: References: Message-ID: On Wed, Feb 11, 2015 at 10:52 AM, Hanglin Ye wrote: > Dear all, > > I am new to PETSc and I want to use it to parallel my current serial FEM > code. I want to use Domain Decomposition Method so that the whole FEM > domain is partitioned into sub-domains and computations are performed in > sub-domains then assemble together. Is there a way to use PETSc to realize > that ? > > I've been searching through tutorials but none seems to be clear about > this aspect. I mainly wish to know what solver of PETSc do I need. > Do you have a structured or unstructured mesh? In either case, you will use a DM to encapsulate your mesh, which will give you DMLocalToGlobal() and DMGlobalToLocal() to map between Vecs which are appropriate for the solver (global) and those with ghost regions which are appropriate for assembly (local). You can see an example in SNES ex5, ex12, and ex19. Thanks, Matt > Thank you very much. > -- > Hanglin Ye > Ph.D.Student MANE, RPI > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From parsani.matteo at gmail.com Wed Feb 11 11:32:25 2015 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Wed, 11 Feb 2015 12:32:25 -0500 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface Message-ID: Dear Petsc Users and Developers, Recently, I have compiled my fortran code using the SGI compiler (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. Now, with when I run the code compiled with SGI I get the following error: At line 1921 of file mpi_module.F90 Fortran runtime error: Array reference out of bounds for array 'xx_v', upper bound of dimension 1 exceeded (349921 > 1) Precisely the line that gives troubles is the following x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) The variables xx_v is a fortran pointer which I get using call VecGetArrayF90(x_local, xx_v, i_er) where i_err is defined as PetscErrorCode i_err Do you have any idea what I am doing wrong? Thank you! -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From parsani.matteo at gmail.com Wed Feb 11 11:35:46 2015 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Wed, 11 Feb 2015 12:35:46 -0500 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: Message-ID: I forgot to say that I am using PETSc 3.5 On Wed, Feb 11, 2015 at 12:32 PM, Matteo Parsani wrote: > Dear Petsc Users and Developers, > Recently, I have compiled my fortran code using the SGI compiler > (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. > > Now, with when I run the code compiled with SGI I get the following error: > > At line 1921 of file mpi_module.F90 > Fortran runtime error: Array reference out of bounds for array 'xx_v', > upper bound of dimension 1 exceeded (349921 > 1) > > Precisely the line that gives troubles is the following > > x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) > > > The variables xx_v is a fortran pointer which I get using > > call VecGetArrayF90(x_local, xx_v, i_er) > > where i_err is defined as > > PetscErrorCode i_err > > Do you have any idea what I am doing wrong? > > Thank you! > > -- > Matteo > -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 11 11:38:58 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Feb 2015 11:38:58 -0600 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: Message-ID: On Wed, Feb 11, 2015 at 11:35 AM, Matteo Parsani wrote: > I forgot to say that I am using PETSc 3.5 > How are you declaring xx_v? Matt > On Wed, Feb 11, 2015 at 12:32 PM, Matteo Parsani > wrote: > >> Dear Petsc Users and Developers, >> Recently, I have compiled my fortran code using the SGI compiler >> (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. >> >> Now, with when I run the code compiled with SGI I get the following error: >> >> At line 1921 of file mpi_module.F90 >> Fortran runtime error: Array reference out of bounds for array 'xx_v', >> upper bound of dimension 1 exceeded (349921 > 1) >> >> Precisely the line that gives troubles is the following >> >> x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) >> >> >> The variables xx_v is a fortran pointer which I get using >> >> call VecGetArrayF90(x_local, xx_v, i_er) >> >> where i_err is defined as >> >> PetscErrorCode i_err >> >> Do you have any idea what I am doing wrong? >> >> Thank you! >> >> -- >> Matteo >> > > > > -- > Matteo > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 11 11:42:27 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Feb 2015 11:42:27 -0600 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: Message-ID: On Wed, Feb 11, 2015 at 11:41 AM, Matteo Parsani wrote: > Should I use > > PetscScalar, pointer, dimension(:) :: xx_v > Yes, but I am not sure that will solve your problem. Do the PETSc F90 examples run with this compiler? Matt > ? 
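As a concrete illustration of the local/global pattern Matt describes earlier in this thread (DMGlobalToLocal to update ghost values before assembly, DMLocalToGlobal to accumulate contributions back), here is a hedged sketch using a structured DMDA. The grid sizes, the single degree of freedom, and the omitted element loop are placeholders chosen for illustration; an unstructured DMPlex mesh as in SNES ex62 follows the same local/global pattern.

    #include <petscdmda.h>

    PetscErrorCode GhostedAssemblySketch(void)
    {
      PetscErrorCode ierr;
      DM             dm;
      Vec            gvec, lvec;

      PetscFunctionBegin;
      ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                          DMDA_STENCIL_BOX,17,17,17,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                          1,1,NULL,NULL,NULL,&dm);CHKERRQ(ierr);
      ierr = DMSetFromOptions(dm);CHKERRQ(ierr);
      ierr = DMCreateGlobalVector(dm,&gvec);CHKERRQ(ierr);  /* solver view: no ghost points   */
      ierr = DMCreateLocalVector(dm,&lvec);CHKERRQ(ierr);   /* assembly view: includes ghosts */

      /* bring ghost values onto this process before the element loop */
      ierr = DMGlobalToLocalBegin(dm,gvec,INSERT_VALUES,lvec);CHKERRQ(ierr);
      ierr = DMGlobalToLocalEnd(dm,gvec,INSERT_VALUES,lvec);CHKERRQ(ierr);

      /* ... assemble element contributions into lvec here ... */

      /* accumulate contributions (including ghost regions) back into the global vector */
      ierr = DMLocalToGlobalBegin(dm,lvec,ADD_VALUES,gvec);CHKERRQ(ierr);
      ierr = DMLocalToGlobalEnd(dm,lvec,ADD_VALUES,gvec);CHKERRQ(ierr);

      ierr = VecDestroy(&lvec);CHKERRQ(ierr);
      ierr = VecDestroy(&gvec);CHKERRQ(ierr);
      ierr = DMDestroy(&dm);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }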
> > On Wed, Feb 11, 2015 at 12:39 PM, Matteo Parsani > wrote: > >> In this way: >> >> real(wp), pointer, dimension(:) :: xx_v >> >> where wp is my working precision >> >> On Wed, Feb 11, 2015 at 12:38 PM, Matthew Knepley >> wrote: >> >>> On Wed, Feb 11, 2015 at 11:35 AM, Matteo Parsani < >>> parsani.matteo at gmail.com> wrote: >>> >>>> I forgot to say that I am using PETSc 3.5 >>>> >>> >>> How are you declaring xx_v? >>> >>> Matt >>> >>> >>>> On Wed, Feb 11, 2015 at 12:32 PM, Matteo Parsani < >>>> parsani.matteo at gmail.com> wrote: >>>> >>>>> Dear Petsc Users and Developers, >>>>> Recently, I have compiled my fortran code using the SGI compiler >>>>> (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. >>>>> >>>>> Now, with when I run the code compiled with SGI I get the following >>>>> error: >>>>> >>>>> At line 1921 of file mpi_module.F90 >>>>> Fortran runtime error: Array reference out of bounds for array 'xx_v', >>>>> upper bound of dimension 1 exceeded (349921 > 1) >>>>> >>>>> Precisely the line that gives troubles is the following >>>>> >>>>> x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) >>>>> >>>>> >>>>> The variables xx_v is a fortran pointer which I get using >>>>> >>>>> call VecGetArrayF90(x_local, xx_v, i_er) >>>>> >>>>> where i_err is defined as >>>>> >>>>> PetscErrorCode i_err >>>>> >>>>> Do you have any idea what I am doing wrong? >>>>> >>>>> Thank you! >>>>> >>>>> -- >>>>> Matteo >>>>> >>>> >>>> >>>> >>>> -- >>>> Matteo >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> Matteo >> > > > > -- > Matteo > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 11 12:05:44 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Feb 2015 12:05:44 -0600 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: Message-ID: On Wed, Feb 11, 2015 at 12:03 PM, Matteo Parsani wrote: > I can compile the example but when I run them I always get > > MPT ERROR: mpiexec_mpt must be used to launch all MPI applications > You should use that to launch the run then. Thanks, Matt > > On Wed, Feb 11, 2015 at 12:42 PM, Matthew Knepley > wrote: > >> On Wed, Feb 11, 2015 at 11:41 AM, Matteo Parsani < >> parsani.matteo at gmail.com> wrote: >> >>> Should I use >>> >>> PetscScalar, pointer, dimension(:) :: xx_v >>> >> >> Yes, but I am not sure that will solve your problem. Do the PETSc F90 >> examples run with this compiler? >> >> Matt >> >> >>> ? >>> >>> On Wed, Feb 11, 2015 at 12:39 PM, Matteo Parsani < >>> parsani.matteo at gmail.com> wrote: >>> >>>> In this way: >>>> >>>> real(wp), pointer, dimension(:) :: xx_v >>>> >>>> where wp is my working precision >>>> >>>> On Wed, Feb 11, 2015 at 12:38 PM, Matthew Knepley >>>> wrote: >>>> >>>>> On Wed, Feb 11, 2015 at 11:35 AM, Matteo Parsani < >>>>> parsani.matteo at gmail.com> wrote: >>>>> >>>>>> I forgot to say that I am using PETSc 3.5 >>>>>> >>>>> >>>>> How are you declaring xx_v? 
>>>>> >>>>> Matt >>>>> >>>>> >>>>>> On Wed, Feb 11, 2015 at 12:32 PM, Matteo Parsani < >>>>>> parsani.matteo at gmail.com> wrote: >>>>>> >>>>>>> Dear Petsc Users and Developers, >>>>>>> Recently, I have compiled my fortran code using the SGI compiler >>>>>>> (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. >>>>>>> >>>>>>> Now, with when I run the code compiled with SGI I get the following >>>>>>> error: >>>>>>> >>>>>>> At line 1921 of file mpi_module.F90 >>>>>>> Fortran runtime error: Array reference out of bounds for array >>>>>>> 'xx_v', upper bound of dimension 1 exceeded (349921 > 1) >>>>>>> >>>>>>> Precisely the line that gives troubles is the following >>>>>>> >>>>>>> x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) >>>>>>> >>>>>>> >>>>>>> The variables xx_v is a fortran pointer which I get using >>>>>>> >>>>>>> call VecGetArrayF90(x_local, xx_v, i_er) >>>>>>> >>>>>>> where i_err is defined as >>>>>>> >>>>>>> PetscErrorCode i_err >>>>>>> >>>>>>> Do you have any idea what I am doing wrong? >>>>>>> >>>>>>> Thank you! >>>>>>> >>>>>>> -- >>>>>>> Matteo >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Matteo >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> Matteo >>>> >>> >>> >>> >>> -- >>> Matteo >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > Matteo > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 11 13:12:43 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 11 Feb 2015 13:12:43 -0600 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: Message-ID: <31C966BA-1B0B-4368-AF2F-5BAB407F2529@mcs.anl.gov> > On Feb 11, 2015, at 11:32 AM, Matteo Parsani wrote: > > Dear Petsc Users and Developers, > Recently, I have compiled my fortran code using the SGI compiler (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. Almost for sure your code is now being compiled to check for out of bounds in array access. You need to turn that off if you use VecGetArray() you need to check the documentation for your compiler to find the flag to turn it off. Or you can switch to using VecGetArrayF90() which will also simplify your code slightly and is the post-f77 way of accessing arrays from PETSc vectors. Barry > > Now, with when I run the code compiled with SGI I get the following error: > > At line 1921 of file mpi_module.F90 > Fortran runtime error: Array reference out of bounds for array 'xx_v', upper bound of dimension 1 exceeded (349921 > 1) > > Precisely the line that gives troubles is the following > > x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) > > > The variables xx_v is a fortran pointer which I get using > > call VecGetArrayF90(x_local, xx_v, i_er) > > where i_err is defined as > > PetscErrorCode i_err > > Do you have any idea what I am doing wrong? > > Thank you! 
> > -- > Matteo From parsani.matteo at gmail.com Wed Feb 11 13:16:55 2015 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Wed, 11 Feb 2015 14:16:55 -0500 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: <31C966BA-1B0B-4368-AF2F-5BAB407F2529@mcs.anl.gov> References: <31C966BA-1B0B-4368-AF2F-5BAB407F2529@mcs.anl.gov> Message-ID: Hello, I am already using VecGetArrayF90(). I am waiting now for the IT help because the issue is only with the SGI compiler on our cluster. Once everything is set up I will try to run the fortran examples in PETSc and then I will let you know if the example work. On Wed, Feb 11, 2015 at 2:12 PM, Barry Smith wrote: > > > On Feb 11, 2015, at 11:32 AM, Matteo Parsani > wrote: > > > > Dear Petsc Users and Developers, > > Recently, I have compiled my fortran code using the SGI compiler > (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. > > Almost for sure your code is now being compiled to check for out of > bounds in array access. You need to turn that off if you use VecGetArray() > you need to check the documentation for your compiler to find the flag to > turn it off. > > Or you can switch to using VecGetArrayF90() which will also simplify > your code slightly and is the post-f77 way of accessing arrays from PETSc > vectors. > > Barry > > > > > Now, with when I run the code compiled with SGI I get the following > error: > > > > At line 1921 of file mpi_module.F90 > > Fortran runtime error: Array reference out of bounds for array 'xx_v', > upper bound of dimension 1 exceeded (349921 > 1) > > > > Precisely the line that gives troubles is the following > > > > x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) > > > > > > The variables xx_v is a fortran pointer which I get using > > > > call VecGetArrayF90(x_local, xx_v, i_er) > > > > where i_err is defined as > > > > PetscErrorCode i_err > > > > Do you have any idea what I am doing wrong? > > > > Thank you! > > > > -- > > Matteo > > -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 11 13:23:36 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 11 Feb 2015 13:23:36 -0600 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: <31C966BA-1B0B-4368-AF2F-5BAB407F2529@mcs.anl.gov> Message-ID: Oh, yes, sorry I should have read more clearly. Then I am not sure what the issue could be. Barry > On Feb 11, 2015, at 1:16 PM, Matteo Parsani wrote: > > Hello, > I am already using VecGetArrayF90(). > > I am waiting now for the IT help because the issue is only with the SGI compiler on our cluster. Once everything is set up I will try to run the fortran examples in PETSc and then I will let you know if the example work. > > > > On Wed, Feb 11, 2015 at 2:12 PM, Barry Smith wrote: > > > On Feb 11, 2015, at 11:32 AM, Matteo Parsani wrote: > > > > Dear Petsc Users and Developers, > > Recently, I have compiled my fortran code using the SGI compiler (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. > > Almost for sure your code is now being compiled to check for out of bounds in array access. You need to turn that off if you use VecGetArray() you need to check the documentation for your compiler to find the flag to turn it off. > > Or you can switch to using VecGetArrayF90() which will also simplify your code slightly and is the post-f77 way of accessing arrays from PETSc vectors. 
> > Barry > > > > > Now, with when I run the code compiled with SGI I get the following error: > > > > At line 1921 of file mpi_module.F90 > > Fortran runtime error: Array reference out of bounds for array 'xx_v', upper bound of dimension 1 exceeded (349921 > 1) > > > > Precisely the line that gives troubles is the following > > > > x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) > > > > > > The variables xx_v is a fortran pointer which I get using > > > > call VecGetArrayF90(x_local, xx_v, i_er) > > > > where i_err is defined as > > > > PetscErrorCode i_err > > > > Do you have any idea what I am doing wrong? > > > > Thank you! > > > > -- > > Matteo > > > > > -- > Matteo From balay at mcs.anl.gov Wed Feb 11 13:31:29 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 11 Feb 2015 13:31:29 -0600 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: <31C966BA-1B0B-4368-AF2F-5BAB407F2529@mcs.anl.gov> Message-ID: I suspect its one of the following: - missing including petscvec.h90 [i.e missing prototype for VecGetArrayF90(). - perhaps building PETSc with fortran-compiler-a -but attempt to use with fortran-compiler-b. - unknown fortran compile that does crazy things with f90 pointers.. It would be best to reproduce the problem with a PETSc example - perhaps one from the list below http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecGetArrayF90.html BTW: looks like SGI mpt refers to MPI from SGI? - but then - you can use either intel or gnu compilers with it. http://www.nas.nasa.gov/hecc/support/kb/SGI-MPT_89.html [and VecGetArrayF90 should work with both these compiler sets..] Satish On Wed, 11 Feb 2015, Barry Smith wrote: > > Oh, yes, sorry I should have read more clearly. Then I am not sure what the issue could be. > > Barry > > > On Feb 11, 2015, at 1:16 PM, Matteo Parsani wrote: > > > > Hello, > > I am already using VecGetArrayF90(). > > > > I am waiting now for the IT help because the issue is only with the SGI compiler on our cluster. Once everything is set up I will try to run the fortran examples in PETSc and then I will let you know if the example work. > > > > > > > > On Wed, Feb 11, 2015 at 2:12 PM, Barry Smith wrote: > > > > > On Feb 11, 2015, at 11:32 AM, Matteo Parsani wrote: > > > > > > Dear Petsc Users and Developers, > > > Recently, I have compiled my fortran code using the SGI compiler (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. > > > > Almost for sure your code is now being compiled to check for out of bounds in array access. You need to turn that off if you use VecGetArray() you need to check the documentation for your compiler to find the flag to turn it off. > > > > Or you can switch to using VecGetArrayF90() which will also simplify your code slightly and is the post-f77 way of accessing arrays from PETSc vectors. > > > > Barry > > > > > > > > Now, with when I run the code compiled with SGI I get the following error: > > > > > > At line 1921 of file mpi_module.F90 > > > Fortran runtime error: Array reference out of bounds for array 'xx_v', upper bound of dimension 1 exceeded (349921 > 1) > > > > > > Precisely the line that gives troubles is the following > > > > > > x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) > > > > > > > > > The variables xx_v is a fortran pointer which I get using > > > > > > call VecGetArrayF90(x_local, xx_v, i_er) > > > > > > where i_err is defined as > > > > > > PetscErrorCode i_err > > > > > > Do you have any idea what I am doing wrong? 
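Tying the thread together, below is a hedged sketch of the domain-check pattern Pierre reports working: flag the violation from inside the RHS evaluation so the step fails and is retried with a smaller step. It relies on the integrator driving a SNES, so it applies to the implicit path discussed above rather than to explicit Runge-Kutta stages; the state layout, the helper name, and the small tolerance used to ignore -1e-16-level round-off are application-side choices, not PETSc requirements.

    #include <petscts.h>

    /* RHS evaluation that rejects states with (meaningfully) negative entries. */
    static PetscErrorCode RHSFunctionWithDomainCheck(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
    {
      PetscErrorCode     ierr;
      const PetscScalar *u;
      PetscInt           i, n;
      PetscBool          bad = PETSC_FALSE;
      const PetscReal    tol = 1e-12;   /* tolerate harmless round-off such as -1e-16 */

      PetscFunctionBegin;
      ierr = VecGetLocalSize(U,&n);CHKERRQ(ierr);
      ierr = VecGetArrayRead(U,&u);CHKERRQ(ierr);
      for (i = 0; i < n; i++) if (PetscRealPart(u[i]) < -tol) bad = PETSC_TRUE;
      ierr = VecRestoreArrayRead(U,&u);CHKERRQ(ierr);
      /* note: in parallel one may want to reduce 'bad' across ranks so every
         process takes the same branch */
      if (bad) {
        SNES snes;
        ierr = TSGetSNES(ts,&snes);CHKERRQ(ierr);
        ierr = SNESSetFunctionDomainError(snes);CHKERRQ(ierr);
        ierr = VecZeroEntries(F);CHKERRQ(ierr);   /* leave a well-defined (dummy) residual */
        PetscFunctionReturn(0);
      }
      /* ... otherwise evaluate the actual right-hand side into F ... */
      PetscFunctionReturn(0);
    }

Registration would be the usual ierr = TSSetRHSFunction(ts,NULL,RHSFunctionWithDomainCheck,&user);CHKERRQ(ierr); for the application's TS, where "user" is whatever context the application passes.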
> > > > > > Thank you! > > > > > > -- > > > Matteo > > > > > > > > > > -- > > Matteo > > From bsmith at mcs.anl.gov Wed Feb 11 13:42:43 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 11 Feb 2015 13:42:43 -0600 Subject: [petsc-users] Direct solvers In-Reply-To: References: <192B39D7-98BD-45B0-A5F3-D0600A640A66@gmail.com> Message-ID: <763A3000-F99A-41CC-8AB1-03E94D671A30@mcs.anl.gov> Is this still an issue for you? If so could you build both 3.5.1 and 3.5.2 from the tarballs using the --download-xxx options for installing external packages then compile and run the same application with both with the same options and send the output from running both with -log_summary. This will help us see if the the memory/performance between the two has somehow unexpectedly changed. Barry > On Feb 6, 2015, at 8:53 AM, Manav Bhatia wrote: > > >> On Feb 5, 2015, at 11:18 PM, Barry Smith wrote: >> >>> I am trying to use an lu decomposition method for a relatively large matrix (~775,000 dofs) coming from a thermoelasticity problem. >>> >>> For the past few weeks, LU solver in 3.5.1 has been solving it just fine. I just upgraded to 3.5.2 from macports (running on Mac OS 10.10.2), and am getting the following ?out of memory" error >> >> This is surprising. The changes are 3.5.1 to 3.5.2 are supposed to be only minor bug fixes. Is the code otherwise __exactly__ the same with the same options? Are all the external libraries exactly the same in both cases? > > The 3.5.1 version I had been using was one that I had downloaded from the petsc website and compiled myself, without some external packages, like MUMPS, suitesparse and superlu. > > The 3.5.2 version was build by macports with the added options of MUMPS, suitesparse and superlu. > > My source code remained the same between these separate runs. > > I will try rebuilding this from scratch to see if it makes any change. > > -Manav From hanglinye at gmail.com Wed Feb 11 14:07:01 2015 From: hanglinye at gmail.com (Hanglin Ye) Date: Wed, 11 Feb 2015 15:07:01 -0500 Subject: [petsc-users] Domain Decomposition Method for Parallel FEM Code In-Reply-To: References: Message-ID: Hi, Thank you again for the reply. I am trying to run ex62. But there is an error : "Mesh generation needs external package support. Please reconfigure with --download-triangle." I then configure again with --download triangle, and it is download and installed. But this error keeps showing up. Could you please let me know if I am missing anything? Thank you. On Wed, Feb 11, 2015 at 1:05 PM, Matthew Knepley wrote: > On Wed, Feb 11, 2015 at 12:01 PM, Hanglin Ye wrote: > >> Thank you very much for the reply. >> >> I am not very sure if structured/unstructured mesh means differently in >> PETSc, but my mesh are simply hexahedral mesh, which I assume is >> structured. Is there any difference when dealing with structured and >> unstructured mesh? >> > > >> And another question is: Can I say that I only need to provide the whole >> domain, and PETSc can take care of the decomposition, so that I do not need >> to use software such as Metis to partition the mesh in advance? >> > > Now your mesh sounds unstructured. SNES ex62 is an example of a finite > element code using DMPlex > which can have tetrahedral or hexahedral cells, and solves Stokes equation. > > PETSc can handle partitioning and distribution for you. > > Thanks, > > Matt > > >> Thank you. 
>> >> >> On Feb 11, 2015, at 12:01, Matthew Knepley wrote: >> >> On Wed, Feb 11, 2015 at 10:52 AM, Hanglin Ye wrote: >> >>> Dear all, >>> >>> I am new to PETSc and I want to use it to parallel my current serial FEM >>> code. I want to use Domain Decomposition Method so that the whole FEM >>> domain is partitioned into sub-domains and computations are performed in >>> sub-domains then assemble together. Is there a way to use PETSc to realize >>> that ? >>> >>> I've been searching through tutorials but none seems to be clear about >>> this aspect. I mainly wish to know what solver of PETSc do I need. >>> >> >> Do you have a structured or unstructured mesh? In either case, you will >> use a DM to encapsulate your mesh, which will give you >> DMLocalToGlobal() and DMGlobalToLocal() to map between Vecs which are >> appropriate for the solver (global) and those with >> ghost regions which are appropriate for assembly (local). You can see an >> example in SNES ex5, ex12, and ex19. >> >> Thanks, >> >> Matt >> >> >>> Thank you very much. >>> -- >>> Hanglin Ye >>> Ph.D.Student MANE, RPI >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- Hanglin Ye Ph.D.Student MANE, RPI -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 11 14:09:26 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Feb 2015 14:09:26 -0600 Subject: [petsc-users] Domain Decomposition Method for Parallel FEM Code In-Reply-To: References: Message-ID: On Wed, Feb 11, 2015 at 2:07 PM, Hanglin Ye wrote: > Hi, > Thank you again for the reply. I am trying to run ex62. But there is an > error : "Mesh generation needs external package support. Please > reconfigure with --download-triangle." I then configure again with > --download triangle, and it is download and installed. But this error keeps > showing up. Could you please let me know if I am missing anything? > Always send the entire error text. It will show your configure line, and lots of other information. Thanks, Matt > Thank you. > > On Wed, Feb 11, 2015 at 1:05 PM, Matthew Knepley > wrote: > >> On Wed, Feb 11, 2015 at 12:01 PM, Hanglin Ye wrote: >> >>> Thank you very much for the reply. >>> >>> I am not very sure if structured/unstructured mesh means differently in >>> PETSc, but my mesh are simply hexahedral mesh, which I assume is >>> structured. Is there any difference when dealing with structured and >>> unstructured mesh? >>> >> >> >>> And another question is: Can I say that I only need to provide the whole >>> domain, and PETSc can take care of the decomposition, so that I do not need >>> to use software such as Metis to partition the mesh in advance? >>> >> >> Now your mesh sounds unstructured. SNES ex62 is an example of a finite >> element code using DMPlex >> which can have tetrahedral or hexahedral cells, and solves Stokes >> equation. >> >> PETSc can handle partitioning and distribution for you. >> >> Thanks, >> >> Matt >> >> >>> Thank you. 
>>> >>> >>> On Feb 11, 2015, at 12:01, Matthew Knepley wrote: >>> >>> On Wed, Feb 11, 2015 at 10:52 AM, Hanglin Ye >>> wrote: >>> >>>> Dear all, >>>> >>>> I am new to PETSc and I want to use it to parallel my current serial >>>> FEM code. I want to use Domain Decomposition Method so that the whole FEM >>>> domain is partitioned into sub-domains and computations are performed in >>>> sub-domains then assemble together. Is there a way to use PETSc to realize >>>> that ? >>>> >>>> I've been searching through tutorials but none seems to be clear about >>>> this aspect. I mainly wish to know what solver of PETSc do I need. >>>> >>> >>> Do you have a structured or unstructured mesh? In either case, you will >>> use a DM to encapsulate your mesh, which will give you >>> DMLocalToGlobal() and DMGlobalToLocal() to map between Vecs which are >>> appropriate for the solver (global) and those with >>> ghost regions which are appropriate for assembly (local). You can see an >>> example in SNES ex5, ex12, and ex19. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thank you very much. >>>> -- >>>> Hanglin Ye >>>> Ph.D.Student MANE, RPI >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > Hanglin Ye > Ph.D.Student MANE, RPI > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From parsani.matteo at gmail.com Wed Feb 11 15:53:36 2015 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Wed, 11 Feb 2015 16:53:36 -0500 Subject: [petsc-users] SGI compiler (mpt-2.11) and fortran interface In-Reply-To: References: <31C966BA-1B0B-4368-AF2F-5BAB407F2529@mcs.anl.gov> Message-ID: Hi, I have included petscvec.h90 (the code was indeed working with openmpi) I can run all the examples you suggested. Thanks for your help! On Wed, Feb 11, 2015 at 2:31 PM, Satish Balay wrote: > I suspect its one of the following: > > - missing including petscvec.h90 [i.e missing prototype for > VecGetArrayF90(). > - perhaps building PETSc with fortran-compiler-a -but attempt to use with > fortran-compiler-b. > - unknown fortran compile that does crazy things with f90 pointers.. > > It would be best to reproduce the problem with a PETSc example - > perhaps one from the list below > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecGetArrayF90.html > > BTW: looks like SGI mpt refers to MPI from SGI? - but then - you can > use either intel or gnu compilers with it. > > http://www.nas.nasa.gov/hecc/support/kb/SGI-MPT_89.html > > [and VecGetArrayF90 should work with both these compiler sets..] > > Satish > > On Wed, 11 Feb 2015, Barry Smith wrote: > > > > > Oh, yes, sorry I should have read more clearly. Then I am not sure > what the issue could be. > > > > Barry > > > > > On Feb 11, 2015, at 1:16 PM, Matteo Parsani > wrote: > > > > > > Hello, > > > I am already using VecGetArrayF90(). > > > > > > I am waiting now for the IT help because the issue is only with the > SGI compiler on our cluster. 
Once everything is set up I will try to run > the fortran examples in PETSc and then I will let you know if the example > work. > > > > > > > > > > > > On Wed, Feb 11, 2015 at 2:12 PM, Barry Smith > wrote: > > > > > > > On Feb 11, 2015, at 11:32 AM, Matteo Parsani < > parsani.matteo at gmail.com> wrote: > > > > > > > > Dear Petsc Users and Developers, > > > > Recently, I have compiled my fortran code using the SGI compiler > (mpt-2.11). Previously I was using openmpi 1.7.3 and everything worked fine. > > > > > > Almost for sure your code is now being compiled to check for out of > bounds in array access. You need to turn that off if you use VecGetArray() > you need to check the documentation for your compiler to find the flag to > turn it off. > > > > > > Or you can switch to using VecGetArrayF90() which will also simplify > your code slightly and is the post-f77 way of accessing arrays from PETSc > vectors. > > > > > > Barry > > > > > > > > > > > Now, with when I run the code compiled with SGI I get the following > error: > > > > > > > > At line 1921 of file mpi_module.F90 > > > > Fortran runtime error: Array reference out of bounds for array > 'xx_v', upper bound of dimension 1 exceeded (349921 > 1) > > > > > > > > Precisely the line that gives troubles is the following > > > > > > > > x_ghost(i_dir,i_loc) = xx_v(n_tot+3*(i_loc-1) + i_dir) > > > > > > > > > > > > The variables xx_v is a fortran pointer which I get using > > > > > > > > call VecGetArrayF90(x_local, xx_v, i_er) > > > > > > > > where i_err is defined as > > > > > > > > PetscErrorCode i_err > > > > > > > > Do you have any idea what I am doing wrong? > > > > > > > > Thank you! > > > > > > > > -- > > > > Matteo > > > > > > > > > > > > > > > -- > > > Matteo > > > > > > -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Feb 11 20:45:03 2015 From: jed at jedbrown.org (Jed Brown) Date: Wed, 11 Feb 2015 19:45:03 -0700 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module In-Reply-To: References: Message-ID: <87mw4jvoeo.fsf@jedbrown.org> Pierre Barbier de Reuille writes: > Ok, I made progress. But: > > 1 - whatever I do, I have very slightly negative values, and therefore all > my steps get rejected (values like 1e-16) > 2 - As I expected, SNES is only used with implicit methods. So if I use > explicit Runge-Kutta, then there is no solution vector stored by the SNES > object. > > Reading the code for the Runge-Kutta solver, it seems that TSPostStage is > where I can retrieve the current state, and TSAdaptCheckStage where I can > reject it. But is this something I can rely on? TSPostStage is only called *after* the stage has been accepted (the step might be rejected later, e.g., based on a local error controller). We should pass the stage solution to TSAdaptCheckStage so you can check it there. I can add this, but I'm at a conference in Singapore this week and have a couple more pressing things, so you'd have to wait until next week unless someone else can do it (or you'd like to submit a patch). We should also add TSSetSetFunctionDomainError() so you can check it there (my preference, actually). -------------- next part -------------- A non-text attachment was scrubbed... 
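A minimal sketch of the access pattern recommended in the VecGetArrayF90 thread above, assuming a vector x_local and the petsc-3.5 finclude layout (the include paths changed in later releases); only the pattern is shown, it has not been run here:

      subroutine print_local_part(x_local,ierr)
      implicit none
#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscvec.h90>
      Vec :: x_local
      PetscErrorCode :: ierr
      PetscScalar, pointer :: xx_v(:)
      PetscInt :: i, nloc

      call VecGetLocalSize(x_local,nloc,ierr);CHKERRQ(ierr)
      call VecGetArrayF90(x_local,xx_v,ierr);CHKERRQ(ierr)
      do i = 1,nloc
         ! xx_v is 1-based; valid indices run from 1 to nloc only
         print *, i, xx_v(i)
      end do
      call VecRestoreArrayF90(x_local,xx_v,ierr);CHKERRQ(ierr)
      end subroutine print_local_part

Including petscvec.h90 is what provides the explicit interface for VecGetArrayF90/VecRestoreArrayF90; without it some compilers accept the call but handle the F90 pointer argument incorrectly, which is consistent with the bounds error reported above.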
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From C.Klaij at marin.nl Thu Feb 12 03:02:10 2015 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 12 Feb 2015 09:02:10 +0000 Subject: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to MatNestGetISs in fortran Message-ID: <1423731730426.77220@marin.nl> Using petsc-3.5.3, I noticed that PETSC_NULL_OBJECT gets corrupt after calling MatNestGetISs in fortran. Here's a small example: $ cat fieldsplittry2.F90 program fieldsplittry2 use petscksp implicit none #include PetscErrorCode :: ierr PetscInt :: size,i,j,start,end,n=4,numsplit=1 PetscScalar :: zero=0.0,one=1.0 Vec :: diag3,x,b Mat :: A,subA(4),myS PC :: pc,subpc(2) KSP :: ksp,subksp(2) IS :: isg(2) call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr) call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr); ! vectors call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); CHKERRQ(ierr) call VecSet(diag3,one,ierr); CHKERRQ(ierr) call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr) call VecSet(x,zero,ierr); CHKERRQ(ierr) call VecDuplicate(x,b,ierr); CHKERRQ(ierr) call VecSet(b,one,ierr); CHKERRQ(ierr) ! matrix a00 call MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr) call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr) call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) ! matrix a01 call MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr) call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr); do i=start,end-1 j=mod(i,size*n) call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr) end do call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) ! matrix a10 call MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr) ! matrix a11 (empty) call MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr) call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) ! nested mat [a00,a01;a10,a11] call MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr) call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) print *, PETSC_NULL_OBJECT call MatNestGetISs(A,isg,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr); print *, PETSC_NULL_OBJECT call PetscFinalize(ierr) end program fieldsplittry2 $ ./fieldsplittry2 0 39367824 $ dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From bsmith at mcs.anl.gov Thu Feb 12 07:13:08 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 12 Feb 2015 07:13:08 -0600 Subject: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to MatNestGetISs in fortran In-Reply-To: <1423731730426.77220@marin.nl> References: <1423731730426.77220@marin.nl> Message-ID: <02304755-F478-4588-AA5C-22A00DE4AF7F@mcs.anl.gov> Thanks for reporting this. 
Currently the Fortran stub for this function is generated automatically which means it does not have the logic for handling a PETSC_NULL_OBJECT argument. Satish, could you please see if you can add a custom fortran stub for this function in maint? Thanks Barry > On Feb 12, 2015, at 3:02 AM, Klaij, Christiaan wrote: > > Using petsc-3.5.3, I noticed that PETSC_NULL_OBJECT gets corrupt after calling MatNestGetISs in fortran. Here's a small example: > > $ cat fieldsplittry2.F90 > program fieldsplittry2 > > use petscksp > implicit none > #include > > PetscErrorCode :: ierr > PetscInt :: size,i,j,start,end,n=4,numsplit=1 > PetscScalar :: zero=0.0,one=1.0 > Vec :: diag3,x,b > Mat :: A,subA(4),myS > PC :: pc,subpc(2) > KSP :: ksp,subksp(2) > IS :: isg(2) > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr) > call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr); > > ! vectors > call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); CHKERRQ(ierr) > call VecSet(diag3,one,ierr); CHKERRQ(ierr) > > call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr) > call VecSet(x,zero,ierr); CHKERRQ(ierr) > > call VecDuplicate(x,b,ierr); CHKERRQ(ierr) > call VecSet(b,one,ierr); CHKERRQ(ierr) > > ! matrix a00 > call MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr) > call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr) > call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > > ! matrix a01 > call MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr) > call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr); > do i=start,end-1 > j=mod(i,size*n) > call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr) > end do > call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > > ! matrix a10 > call MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr) > > ! matrix a11 (empty) > call MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr) > call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > > ! nested mat [a00,a01;a10,a11] > call MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr) > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr) > print *, PETSC_NULL_OBJECT > call MatNestGetISs(A,isg,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr); > print *, PETSC_NULL_OBJECT > > call PetscFinalize(ierr) > > end program fieldsplittry2 > $ ./fieldsplittry2 > 0 > 39367824 > $ > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > > MARIN > 2, Haagsteeg, P.O. 
Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > From pierre.barbierdereuille at gmail.com Thu Feb 12 08:37:47 2015 From: pierre.barbierdereuille at gmail.com (Pierre Barbier de Reuille) Date: Thu, 12 Feb 2015 14:37:47 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module References: <87mw4jvoeo.fsf@jedbrown.org> Message-ID: Hello, so here is a patch against the MASTER branch to add time and current solution vector to the TSAdaptCheckStage. What I did is add the same arguments as for the TSPostStage call. I hope I haven't made any mistake. In addition, if the stage is rejected, PETSc only tried again, changing nothing, and therefore failing in the exact same way. So I also added a reduction of the time step if the stage is rejected by the user. Note: I tested the code with the RungeKutta solver only for now. Cheers, Pierre On Thu Feb 12 2015 at 03:45:13 Jed Brown wrote: > Pierre Barbier de Reuille writes: > > > Ok, I made progress. But: > > > > 1 - whatever I do, I have very slightly negative values, and therefore > all > > my steps get rejected (values like 1e-16) > > 2 - As I expected, SNES is only used with implicit methods. So if I use > > explicit Runge-Kutta, then there is no solution vector stored by the SNES > > object. > > > > Reading the code for the Runge-Kutta solver, it seems that TSPostStage is > > where I can retrieve the current state, and TSAdaptCheckStage where I can > > reject it. But is this something I can rely on? > > TSPostStage is only called *after* the stage has been accepted (the step > might be rejected later, e.g., based on a local error controller). > > We should pass the stage solution to TSAdaptCheckStage so you can check > it there. I can add this, but I'm at a conference in Singapore this > week and have a couple more pressing things, so you'd have to wait until > next week unless someone else can do it (or you'd like to submit a > patch). > > We should also add TSSetSetFunctionDomainError() so you can check it > there (my preference, actually). > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
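On the MatNestGetISs / PETSC_NULL_OBJECT report a few messages above: until a hand-written Fortran stub is available, one possible workaround (a sketch only, not verified here) is to avoid passing the null object at all and request both index-set arrays. A, isg and ierr are as in the posted test program; isgcol is a new array added only for the workaround:

      IS :: isg(2), isgcol(2)

      call MatNestGetISs(A,isg,isgcol,ierr);CHKERRQ(ierr)
      ! isg(1:2) holds the row index sets the test actually wants; isgcol
      ! is retrieved only so that no PETSC_NULL_OBJECT goes through the
      ! auto-generated stub. The returned IS objects are references owned
      ! by the nest matrix and must not be destroyed by the caller.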
Name: update_TSAdaptChaeckStage.patch Type: text/x-patch Size: 9406 bytes Desc: not available URL: From xzhao99 at gmail.com Thu Feb 12 11:40:42 2015 From: xzhao99 at gmail.com (Xujun Zhao) Date: Thu, 12 Feb 2015 11:40:42 -0600 Subject: [petsc-users] petsc example src/snes/examples/tutorials/ex55.c fails Message-ID: I am running the Petsc example snes/ex55 with the suggested command line: ./ex55 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition self -fieldsplit_1_ksp_type fgmres -fieldsplit_1_pc_type lsc -snes_vi_monitor -ksp_monitor_true_residual -fieldsplit_ksp_monitor -fieldsplit_0_pc_type hypre -da_grid_x 65 -da_grid_y 65 -snes_atol 1.e-11 -ksp_rtol 1.e-8 it gave the following error message(In fact the second suggested command line with mg pc_type in the source codes also failed to run): 0 SNES VI Function norm 6.221605446769e-05 Active lower constraints 1928/2241 upper constraints 302/397 Percent of total 0.131953 Percent of bounded 0.175937 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] PCFieldSplitSetDefaults line 328 /Users/xzhao/software/petsc/petsc_dbg_clang/src/ksp/pc/impls/fieldsplit/fieldsplit.c [0]PETSC ERROR: [0] PCSetUp_FieldSplit line 491 /Users/xzhao/software/petsc/petsc_dbg_clang/src/ksp/pc/impls/fieldsplit/fieldsplit.c [0]PETSC ERROR: [0] KSPSetUp line 220 /Users/xzhao/software/petsc/petsc_dbg_clang/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: [0] SNESSolve_VINEWTONRSLS line 346 /Users/xzhao/software/petsc/petsc_dbg_clang/src/snes/impls/vi/rs/virs.c [0]PETSC ERROR: [0] SNESSolve line 3696 /Users/xzhao/software/petsc/petsc_dbg_clang/src/snes/interface/snes.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./ex55 on a arch-darwin-c-debug named mcswl130.mcs.anl.gov by xzhao Thu Feb 12 11:00:18 2015 [0]PETSC ERROR: Configure options --download-fblaslapack --download-mpich --download-mumps --download-scalapack --download-hypre -download-superlu_dist --download-parmetis --download-metis --download-triangle -download-chaco --download-ml --with-opencl=0 --with-debugging=1 [0]PETSC ERROR: #1 User provided function() line 0 in unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre.barbierdereuille at gmail.com Fri Feb 13 07:59:31 2015 From: pierre.barbierdereuille at gmail.com (Pierre Barbier de Reuille) Date: Fri, 13 Feb 2015 13:59:31 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module Message-ID: Hello, sorry to bombard you with emails. But here is another patch, still on the master branch, which adds TSSetFunctionDomainError and TSFunctionDomainError functions. I tried them with Runge-Kutta and they work. I think I added the correct calls to all the methods, or at least all the ones calling TSPostStage. Note that I needed to modify the Runge-Kutta call to remove the goto. For some reason, on my system (Ubuntu 14.04, gcc 4.8.2), the time step would not get updated if compiled with optimisations. Removing the goto and replacing it with break/continue prevented that issue. Please tell me what you think of the modification. Cheers, Pierre On Thu Feb 12 2015 at 15:37:48 Pierre Barbier de Reuille < pierre.barbierdereuille at gmail.com> wrote: > Hello, > > so here is a patch against the MASTER branch to add time and current > solution vector to the TSAdaptCheckStage. What I did is add the same > arguments as for the TSPostStage call. > I hope I haven't made any mistake. > > In addition, if the stage is rejected, PETSc only tried again, changing > nothing, and therefore failing in the exact same way. So I also added a > reduction of the time step if the stage is rejected by the user. > > Note: I tested the code with the RungeKutta solver only for now. > > Cheers, > > Pierre > > > On Thu Feb 12 2015 at 03:45:13 Jed Brown wrote: > >> Pierre Barbier de Reuille writes: >> >> > Ok, I made progress. But: >> > >> > 1 - whatever I do, I have very slightly negative values, and therefore >> all >> > my steps get rejected (values like 1e-16) >> > 2 - As I expected, SNES is only used with implicit methods. So if I use >> > explicit Runge-Kutta, then there is no solution vector stored by the >> SNES >> > object. >> > >> > Reading the code for the Runge-Kutta solver, it seems that TSPostStage >> is >> > where I can retrieve the current state, and TSAdaptCheckStage where I >> can >> > reject it. But is this something I can rely on? >> >> TSPostStage is only called *after* the stage has been accepted (the step >> might be rejected later, e.g., based on a local error controller). >> >> We should pass the stage solution to TSAdaptCheckStage so you can check >> it there. I can add this, but I'm at a conference in Singapore this >> week and have a couple more pressing things, so you'd have to wait until >> next week unless someone else can do it (or you'd like to submit a >> patch). >> >> We should also add TSSetSetFunctionDomainError() so you can check it >> there (my preference, actually). >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: TSSetFunctionDomainError.patch Type: text/x-patch Size: 21663 bytes Desc: not available URL: From andrew at spott.us Fri Feb 13 15:17:51 2015 From: andrew at spott.us (Andrew Spott) Date: Fri, 13 Feb 2015 13:17:51 -0800 (PST) Subject: [petsc-users] Hang while attempting to run EPSSolve() Message-ID: <1423862270736.2ac20028@Nodemailer> Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. The back trace: #0? 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 #1? 
0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 #2? 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 #3? 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 #4? 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 #5? 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 #6? 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 #7? 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 #8? 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 #9? 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 This happens for MPI or single process runs. ?Does anyone have any hints on how I can debug this? ?I honestly have no idea. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Feb 13 15:21:49 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 13 Feb 2015 15:21:49 -0600 Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: <1423862270736.2ac20028@Nodemailer> References: <1423862270736.2ac20028@Nodemailer> Message-ID: <22302EDD-04D1-4E0F-A9B1-87232F48DA47@mcs.anl.gov> Andrew, This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? Barry > On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: > > Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. 
> > The back trace: > > #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 > #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 > #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 > #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 > #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 > #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 > #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 > #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 > #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 > #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 > #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 > #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 > > This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. > > -Andrew > From andrew at spott.us Fri Feb 13 15:23:37 2015 From: andrew at spott.us (Andrew Spott) Date: Fri, 13 Feb 2015 13:23:37 -0800 (PST) Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: <22302EDD-04D1-4E0F-A9B1-87232F48DA47@mcs.anl.gov> References: <22302EDD-04D1-4E0F-A9B1-87232F48DA47@mcs.anl.gov> Message-ID: <1423862616098.9edb4a92@Nodemailer> Thanks! ?You just saved me hours of debugging. I?ll look into linking against an earlier implementation of OpenMPI. -Andrew On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: > Andrew, > This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? > Barry >> On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: >> >> Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. 
>> >> The back trace: >> >> #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 >> #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 >> #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 >> #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 >> #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 >> #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 >> #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 >> #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 >> #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 >> #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 >> #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 >> #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 >> >> This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. >> >> -Andrew >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at spott.us Fri Feb 13 16:43:25 2015 From: andrew at spott.us (Andrew Spott) Date: Fri, 13 Feb 2015 14:43:25 -0800 (PST) Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: <1423862616098.9edb4a92@Nodemailer> References: <1423862616098.9edb4a92@Nodemailer> Message-ID: <1423867404192.4573ed81@Nodemailer> What version of OpenMPI was this introduced in? ?I appear to be finding it in ?OpenMPI 1.8.3 and 1.8.1 as well. ?Should I go back to 1.8.0 or 1.7.4? Thanks, -Andrew On Fri, Feb 13, 2015 at 2:23 PM, Andrew Spott wrote: > Thanks! ?You just saved me hours of debugging. > I?ll look into linking against an earlier implementation of OpenMPI. > -Andrew > On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: >> Andrew, >> This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? 
>> Barry >>> On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: >>> >>> Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. >>> >>> The back trace: >>> >>> #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 >>> #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 >>> #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 >>> #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >>> #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >>> #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 >>> #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >>> #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >>> #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >>> #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >>> #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >>> #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >>> #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 >>> #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 >>> #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 >>> #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 >>> #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 >>> #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 >>> #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 >>> #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 >>> >>> This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. >>> >>> -Andrew >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Feb 13 16:45:18 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 13 Feb 2015 16:45:18 -0600 Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: <1423867404192.4573ed81@Nodemailer> References: <1423862616098.9edb4a92@Nodemailer> <1423867404192.4573ed81@Nodemailer> Message-ID: I think it was introduced in 1.8.1 so I think 1.8.0 should be ok, but if it hangs then go back to 1.7.4 > On Feb 13, 2015, at 4:43 PM, Andrew Spott wrote: > > What version of OpenMPI was this introduced in? I appear to be finding it in OpenMPI 1.8.3 and 1.8.1 as well. Should I go back to 1.8.0 or 1.7.4? > > Thanks, > > -Andrew > > > > On Fri, Feb 13, 2015 at 2:23 PM, Andrew Spott wrote: > > Thanks! You just saved me hours of debugging. 
> > I?ll look into linking against an earlier implementation of OpenMPI. > > -Andrew > > > > On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: > > > Andrew, > > This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? > > Barry > > > > > On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: > > > > Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. > > > > The back trace: > > > > #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 > > #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 > > #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 > > #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 > > #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 > > #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 > > #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 > > #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 > > #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 > > #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 > > #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 > > #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 > > > > This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. > > > > -Andrew > > > > > From andrew at spott.us Fri Feb 13 16:45:47 2015 From: andrew at spott.us (Andrew Spott) Date: Fri, 13 Feb 2015 14:45:47 -0800 (PST) Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: References: Message-ID: <1423867546534.6684ca73@Nodemailer> Thanks! 
On Fri, Feb 13, 2015 at 3:45 PM, Barry Smith wrote: > I think it was introduced in 1.8.1 so I think 1.8.0 should be ok, but if it hangs then go back to 1.7.4 >> On Feb 13, 2015, at 4:43 PM, Andrew Spott wrote: >> >> What version of OpenMPI was this introduced in? I appear to be finding it in OpenMPI 1.8.3 and 1.8.1 as well. Should I go back to 1.8.0 or 1.7.4? >> >> Thanks, >> >> -Andrew >> >> >> >> On Fri, Feb 13, 2015 at 2:23 PM, Andrew Spott wrote: >> >> Thanks! You just saved me hours of debugging. >> >> I?ll look into linking against an earlier implementation of OpenMPI. >> >> -Andrew >> >> >> >> On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: >> >> >> Andrew, >> >> This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? >> >> Barry >> >> >> >> > On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: >> > >> > Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. >> > >> > The back trace: >> > >> > #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 >> > #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 >> > #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 >> > #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 >> > #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> > #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> > #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> > #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 >> > #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 >> > #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 >> > #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 >> > #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 >> > #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 >> > #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 >> > #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 >> > >> > This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. 
>> > >> > -Andrew >> > >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Feb 13 16:47:51 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 13 Feb 2015 16:47:51 -0600 Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: References: <1423862616098.9edb4a92@Nodemailer> <1423867404192.4573ed81@Nodemailer> Message-ID: I'll suggest 1.6 I belive 1.6, 1.8 etc are considered stable releases - and 1.5, 1.7 etc are considered development releases. Satish On Fri, 13 Feb 2015, Barry Smith wrote: > > I think it was introduced in 1.8.1 so I think 1.8.0 should be ok, but if it hangs then go back to 1.7.4 > > > > On Feb 13, 2015, at 4:43 PM, Andrew Spott wrote: > > > > What version of OpenMPI was this introduced in? I appear to be finding it in OpenMPI 1.8.3 and 1.8.1 as well. Should I go back to 1.8.0 or 1.7.4? > > > > Thanks, > > > > -Andrew > > > > > > > > On Fri, Feb 13, 2015 at 2:23 PM, Andrew Spott wrote: > > > > Thanks! You just saved me hours of debugging. > > > > I?ll look into linking against an earlier implementation of OpenMPI. > > > > -Andrew > > > > > > > > On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: > > > > > > Andrew, > > > > This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? > > > > Barry > > > > > > > > > On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: > > > > > > Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. > > > > > > The back trace: > > > > > > #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 > > > #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 > > > #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 > > > #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 > > > #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > > #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > > #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > > #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 > > > #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 > > > #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 > > > #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 > > > #16 
0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 > > > #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 > > > #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 > > > #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 > > > > > > This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. > > > > > > -Andrew > > > > > > > > > > > From andrew at spott.us Fri Feb 13 17:07:21 2015 From: andrew at spott.us (Andrew Spott) Date: Fri, 13 Feb 2015 15:07:21 -0800 (PST) Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: References: Message-ID: <1423868840515.5c295d0d@Nodemailer> Is there a known workaround for this? It also occurs in 1.8.0 (so far I?ve checked 1.8.{0,1,2,3}). ?Unfortunately, going back farther requires actually building openMPI, which requires something special (IB drivers, I believe). -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Feb 13 17:15:48 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 13 Feb 2015 17:15:48 -0600 Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: <1423868840515.5c295d0d@Nodemailer> References: <1423868840515.5c295d0d@Nodemailer> Message-ID: <803167D4-01B3-421D-8E91-E82A935EC2F0@mcs.anl.gov> > On Feb 13, 2015, at 5:07 PM, Andrew Spott wrote: > > Is there a known workaround for this? No. This should be reported as a show stopper to the vendor who sold you the system and provided the software. Barry Note that the code in PETSc that "triggers" the hang has been there for at least 15 years and has never been problematic with earlier versions of OpenMPI or any other MPI implementation. The OpenMPI guys just got lazy and made a non-MPI standard assumption that MPI attribute destructors would never use MPI internally in the 1.8 series and ripped out all the old code that handled it correctly because it was "too complicated". > > It also occurs in 1.8.0 (so far I?ve checked 1.8.{0,1,2,3}). Unfortunately, going back farther requires actually building openMPI, which requires something special (IB drivers, I believe). > > -Andrew > > > > On Fri, Feb 13, 2015 at 3:47 PM, Satish Balay wrote: > > I'll suggest 1.6 > > I belive 1.6, 1.8 etc are considered stable releases - and 1.5, 1.7 > etc are considered development releases. > > Satish > > On Fri, 13 Feb 2015, Barry Smith wrote: > > > > > I think it was introduced in 1.8.1 so I think 1.8.0 should be ok, but if it hangs then go back to 1.7.4 > > > > > > > On Feb 13, 2015, at 4:43 PM, Andrew Spott wrote: > > > > > > What version of OpenMPI was this introduced in? I appear to be finding it in OpenMPI 1.8.3 and 1.8.1 as well. Should I go back to 1.8.0 or 1.7.4? > > > > > > Thanks, > > > > > > -Andrew > > > > > > > > > > > > On Fri, Feb 13, 2015 at 2:23 PM, Andrew Spott wrote: > > > > > > Thanks! You just saved me hours of debugging. > > > > > > I?ll look into linking against an earlier implementation of OpenMPI. > > > > > > -Andrew > > > > > > > > > > > > On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: > > > > > > > > > Andrew, > > > > > > This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? 
Or do they have MPICH installed you could use? > > > > > > Barry > > > > > > > > > > > > > On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: > > > > > > > > Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. > > > > > > > > The back trace: > > > > > > > > #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 > > > > #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 > > > > #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 > > > > #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > > #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > > #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 > > > > #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > > #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > > #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 > > > > #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > > > #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > > > #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 > > > > #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 > > > > #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 > > > > #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 > > > > #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 > > > > #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 > > > > #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 > > > > #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 > > > > #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 > > > > > > > > This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. > > > > > > > > -Andrew > > > > > > > > > > > > > > > > > > > From andrew at spott.us Fri Feb 13 17:17:20 2015 From: andrew at spott.us (Andrew Spott) Date: Fri, 13 Feb 2015 15:17:20 -0800 (PST) Subject: [petsc-users] Hang while attempting to run EPSSolve() In-Reply-To: <803167D4-01B3-421D-8E91-E82A935EC2F0@mcs.anl.gov> References: <803167D4-01B3-421D-8E91-E82A935EC2F0@mcs.anl.gov> Message-ID: <1423869438671.dfd50c3b@Nodemailer> Thanks. ?I?ll start work on rolling back to another version of MPI On Fri, Feb 13, 2015 at 4:15 PM, Barry Smith wrote: >> On Feb 13, 2015, at 5:07 PM, Andrew Spott wrote: >> >> Is there a known workaround for this? > No. 
This should be reported as a show stopper to the vendor who sold you the system and provided the software. > Barry > Note that the code in PETSc that "triggers" the hang has been there for at least 15 years and has never been problematic with earlier versions of OpenMPI or any other MPI implementation. The OpenMPI guys just got lazy and made a non-MPI standard assumption that MPI attribute destructors would never use MPI internally in the 1.8 series and ripped out all the old code that handled it correctly because it was "too complicated". >> >> It also occurs in 1.8.0 (so far I?ve checked 1.8.{0,1,2,3}). Unfortunately, going back farther requires actually building openMPI, which requires something special (IB drivers, I believe). >> >> -Andrew >> >> >> >> On Fri, Feb 13, 2015 at 3:47 PM, Satish Balay wrote: >> >> I'll suggest 1.6 >> >> I belive 1.6, 1.8 etc are considered stable releases - and 1.5, 1.7 >> etc are considered development releases. >> >> Satish >> >> On Fri, 13 Feb 2015, Barry Smith wrote: >> >> > >> > I think it was introduced in 1.8.1 so I think 1.8.0 should be ok, but if it hangs then go back to 1.7.4 >> > >> > >> > > On Feb 13, 2015, at 4:43 PM, Andrew Spott wrote: >> > > >> > > What version of OpenMPI was this introduced in? I appear to be finding it in OpenMPI 1.8.3 and 1.8.1 as well. Should I go back to 1.8.0 or 1.7.4? >> > > >> > > Thanks, >> > > >> > > -Andrew >> > > >> > > >> > > >> > > On Fri, Feb 13, 2015 at 2:23 PM, Andrew Spott wrote: >> > > >> > > Thanks! You just saved me hours of debugging. >> > > >> > > I?ll look into linking against an earlier implementation of OpenMPI. >> > > >> > > -Andrew >> > > >> > > >> > > >> > > On Fri, Feb 13, 2015 at 2:21 PM, Barry Smith wrote: >> > > >> > > >> > > Andrew, >> > > >> > > This is a bug in the 1.8.2 OpenMPI implementation they recently introduced. Can you link against an earlier OpenMPI implementation on the machine? Or do they have MPICH installed you could use? >> > > >> > > Barry >> > > >> > > >> > > >> > > > On Feb 13, 2015, at 3:17 PM, Andrew Spott wrote: >> > > > >> > > > Local tests on OS X can?t reproduce, but production tests on our local supercomputer always hang while waiting for a lock. 
>> > > > >> > > > The back trace: >> > > > >> > > > #0 0x00002ba2980df054 in __lll_lock_wait () from /lib64/libpthread.so.0 >> > > > #1 0x00002ba2980da388 in _L_lock_854 () from /lib64/libpthread.so.0 >> > > > #2 0x00002ba2980da257 in pthread_mutex_lock () from /lib64/libpthread.so.0 >> > > > #3 0x00002ba29a1d9e2c in ompi_attr_get_c () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > > > #4 0x00002ba29a207f8e in PMPI_Attr_get () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > > > #5 0x00002ba294aa111e in Petsc_DelComm_Outer () at /home/ansp6066/local/src/petsc-3.5.3/src/sys/objects/pinit.c:409 >> > > > #6 0x00002ba29a1dae02 in ompi_attr_delete_all () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > > > #7 0x00002ba29a1dcb6c in ompi_comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > > > #8 0x00002ba29a20c713 in PMPI_Comm_free () from /curc/tools/x_86_64/rh6/openmpi/1.8.2/gcc/4.9.1/lib/libmpi.so.1 >> > > > #9 0x00002ba294aba7cf in PetscSubcommCreate_contiguous(_n_PetscSubcomm*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> > > > #10 0x00002ba294ab89d5 in PetscSubcommSetType () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> > > > #11 0x00002ba2958ce437 in PCSetUp_Redundant(_p_PC*) () from /home/ansp6066/local/petsc-3.5.3-debug/lib/libpetsc.so.3.5 >> > > > #12 0x00002ba2957a243d in PCSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/pc/interface/precon.c:902 >> > > > #13 0x00002ba2958dea31 in KSPSetUp () at /home/ansp6066/local/src/petsc-3.5.3/src/ksp/ksp/interface/itfunc.c:306 >> > > > #14 0x00002ba29a7f8e70 in STSetUp_Sinvert(_p_ST*) () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/impls/sinvert/sinvert.c:145 >> > > > #15 0x00002ba29a7e92cf in STSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/sys/classes/st/interface/stsolve.c:301 >> > > > #16 0x00002ba29a845ea6 in EPSSetUp () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssetup.c:207 >> > > > #17 0x00002ba29a849f91 in EPSSolve () at /home/ansp6066/local/src/slepc-3.5.3/src/eps/interface/epssolve.c:88 >> > > > #18 0x0000000000410de5 in petsc::EigenvalueSolver::solve() () at /home/ansp6066/code/petsc_cpp_wrapper/src/petsc_cpp/EigenvalueSolver.cpp:40 >> > > > #19 0x00000000004065c7 in main () at /home/ansp6066/code/new_work_project/src/main.cpp:165 >> > > > >> > > > This happens for MPI or single process runs. Does anyone have any hints on how I can debug this? I honestly have no idea. >> > > > >> > > > -Andrew >> > > > >> > > >> > > >> > > >> > >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Feb 14 13:19:59 2015 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 14 Feb 2015 14:19:59 -0500 Subject: [petsc-users] TSSetIJacobian question Message-ID: I am advancing a two equation system with TS that has an additional constraint equation. I build a 3x3 composite matrix a_ts%FJacobean that has my F(U). I then do: call MatDuplicate(a_ts%FJacobean,MAT_DO_NOT_COPY_VALUES,a_ts%FJacobean2,ierr) call TSSetIJacobian(a_ts%ts,a_ts%FJacobean2,a_ts%FJacobean2,FormIJacobian,a_ts,ierr) I am thinking my FormIJacobian would look like this: ! copy in linear operator call MatCopy(a_ts%FJacobean,Jpre,ierr);CHKERRQ(ierr) ! shift 1 & 2 by 'shift' call MatShift(mat00,shift,ierr);CHKERRQ(ierr) ???? call MatShift(mat11,shift,ierr);CHKERRQ(ierr) ???? Is this a good basic approach? 
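A possible sketch for the block-shift step asked about in this message (the question continues just below): it assumes the 3x3 operator is a MATNEST (if it is a MATCOMPOSITE this does not apply) and that a_ts%FJacobean, Jpre, shift and ierr are as above. MatNestGetSubMat hands back references to the stored blocks, so shifting them modifies Jpre in place:

      Mat :: blk00, blk11
      PetscInt :: izero, ione

      izero = 0
      ione  = 1
      ! refresh Jpre from the stored linear operator; note that MatCopy
      ! takes a MatStructure argument (whether MatCopy supports MATNEST in
      ! this PETSc version should be checked - copying block by block is
      ! the fallback)
      call MatCopy(a_ts%FJacobean,Jpre,SAME_NONZERO_PATTERN,ierr);CHKERRQ(ierr)
      ! add shift*I to the two time-dependent diagonal blocks only; the
      ! constraint block is left untouched. The returned blocks are
      ! borrowed references and are not destroyed here.
      call MatNestGetSubMat(Jpre,izero,izero,blk00,ierr);CHKERRQ(ierr)
      call MatShift(blk00,shift,ierr);CHKERRQ(ierr)
      call MatNestGetSubMat(Jpre,ione,ione,blk11,ierr);CHKERRQ(ierr)
      call MatShift(blk11,shift,ierr);CHKERRQ(ierr)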
I'm not sure how to shift just the first two blocks. MatGetSubMatrix does not seem usable here. I want to get these two diagonal block matrices so that I can shift them. Can I get an array of matrices out of a composite matrix? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabel.fabian at gmail.com Sun Feb 15 12:39:59 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Sun, 15 Feb 2015 19:39:59 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> <1423174513.3627.1.camel@gmail.com> Message-ID: <1424025599.3226.11.camel@gmail.com> After some further testing I found that the key to reducing the wall time is to balance the number of iterations needed to apply the preconditioner against the number of inner iterations of my solver for the complete linear system (velocities and pressure). I found that the following set of options led to a further improvement of the wall time from 6083s to 2200s: -coupledsolve_fieldsplit_0_fieldsplit_ksp_type preonly -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml -coupledsolve_fieldsplit_0_ksp_rtol 1e-2 -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 -coupledsolve_fieldsplit_0_pc_type fieldsplit -coupledsolve_fieldsplit_1_ksp_rtol 1e-1 -coupledsolve_fieldsplit_1_ksp_type gmres -coupledsolve_fieldsplit_1_pc_type ilu -coupledsolve_fieldsplit_ksp_converged_reason -coupledsolve_fieldsplit_schur_precondition a11 -coupledsolve_ksp_type fgmres -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_pc_fieldsplit_type schur -coupledsolve_pc_type fieldsplit -coupledsolve_pc_fieldsplit_schur_fact_type lower I will also attach the complete output at the end of my mail. On Fr, 2015-02-06 at 08:35 +0100, Dave May wrote: > > -coupledsolve_pc_fieldsplit_block_size 4 > -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 > > Without them, I get the error message > > [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468 > in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c > Unhandled > case, must have at least two fields, not 1 > > I thought PETSc would already know what I want to do, since I > initialized the fieldsplit with > > CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR) > > > > Huh. > If you called PCFieldSplitSetIS(), then certainly the FS knows you > have four fields and you shouldn't need to set the block size 4 > option. I know this is true as I use it all the time. When you start > grouping new splits (in your case u,v,w) I would have also thought > that the block size 3 option would also be redundant - however I use > this option less frequently. > > > > > As a matter of fact I spent the last days digging through > papers regarding preconditioners and approximate Schur complements, and > the names > Elman and Silvester have come up quite often. > > The problem I experience is that, except for one publication, > all the > other ones I checked deal with finite element formulations. > Only > > Klaij, C. and Vuik, C. SIMPLE-type preconditioners for > cell-centered, > colocated finite volume discretization of incompressible > Reynolds-averaged Navier-Stokes equations > > presented an approach for finite volume methods.
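For reference on what -coupledsolve_pc_fieldsplit_type schur together with -coupledsolve_pc_fieldsplit_schur_fact_type in the option set above selects, this is the standard block factorization that the Schur field split works from (an editorial summary, not text from the thread):

A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}
  = \begin{pmatrix} I & 0 \\ A_{10} A_{00}^{-1} & I \end{pmatrix}
    \begin{pmatrix} A_{00} & 0 \\ 0 & S \end{pmatrix}
    \begin{pmatrix} I & A_{00}^{-1} A_{01} \\ 0 & I \end{pmatrix},
\qquad S = A_{11} - A_{10} A_{00}^{-1} A_{01}.

The "full" factorization applies all three factors, "lower" keeps only the left block-triangular factor [A00 0; A10 S], "upper" only [A00 A01; 0 S], and "diag" only the block diagonal with -S; in every case the inverses of A00 and S are replaced by the fieldsplit_0_ and fieldsplit_1_ inner solves configured above.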
> > The exact same analysis applies directly to any stable mixed u-p > discretization (I agree with all Matt's comments as well). If your > stabilization term is doing what it is supposed to, then your > discretization should be stable. I don't know of any papers which show results using your exact > discretization, but here are some preconditioning papers for Stokes > which employ FV discretizations: > > @article{olshanskii1999iterative, > title={An iterative solver for the Oseen problem and numerical solution of incompressible Navier--Stokes equations}, > author={Olshanskii, Maxim A}, > journal={Numerical linear algebra with applications}, > volume={6}, > number={5}, > pages={353--378}, > year={1999} > } > > @article{olshanskii2004grad, > title={Grad-div stabilization for Stokes equations}, > author={Olshanskii, Maxim and Reusken, Arnold}, > journal={Mathematics of Computation}, > volume={73}, > number={248}, > pages={1699--1718}, > year={2004} > } > @article{griffith2009accurate, > title={An accurate and efficient method for the incompressible Navier--Stokes equations using the projection method as a preconditioner}, > author={Griffith, Boyce E}, > journal={Journal of Computational Physics}, > volume={228}, > number={20}, > pages={7565--7595}, > year={2009}, > publisher={Elsevier} > } > @article{furuichi2011development, > title={Development of a Stokes flow solver robust to large viscosity jumps using a Schur complement approach with mixed precision arithmetic}, > author={Furuichi, Mikito and May, Dave A and Tackley, Paul J}, > journal={Journal of Computational Physics}, > volume={230}, > number={24}, > pages={8835--8851}, > year={2011}, > publisher={Elsevier} > } > > @article{cai2013efficient, > title={Efficient variable-coefficient finite-volume Stokes solvers}, > author={Cai, Mingchao and Nonaka, AJ and Bell, John B and Griffith, Boyce E and Donev, Aleksandar}, > journal={arXiv preprint arXiv:1308.4605}, > year={2013} > } > > > Furthermore, a lot of > literature is found on saddle point problems, since the linear > system > from stable finite element formulations comes with a 0 block > as pressure > matrix. I'm not sure how I can benefit from the work that has > already > been done for finite element methods, since I neither use > finite > elements nor am I trying to solve a saddle point problem (?). > > > I would say you are trying to solve a saddle point system, only one > which has been stabilized. I expect your stabilization term should > vanish in the limit of h -> 0. The block preconditioners are directly > applicable to what you are doing, as are all the issues associated > with building preconditioners for Schur complement approximations. I > have used FS based preconditioners for stabilized Q1-Q1 finite element > discretizations for Stokes problems. Despite the stabilization term in > the p-p coupling term, saddle point preconditioning techniques are > appropriate. There are examples of this in > src/ksp/ksp/examples/tutorials - see ex43.c ex42.c > > > > > But then what happens in this line from the > tutorial /snes/examples/tutorials/ex70.c > > ierr = KSPSetOperators(subksp[1], s->myS, > s->myS);CHKERRQ(ierr); > > I think the approximate Schur complement (a Matrix of type > Schur) gets > replaced by an explicitly formed Matrix (myS, of type > MPIAIJ). > > > Oh yes, you are right, that is what is done in this example. But you > should note that this is not the default way PETSc's fieldsplit > preconditioner will define the Schur complement \hat{S}.
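To spell out that default (an editorial summary of the behaviour under discussion, not text from the thread): with a Schur field split, \hat{S} is normally kept as an unassembled Schur-complement operator that is only ever applied,

\hat{S}\,x = A_{11} x - A_{10}\,\mathrm{ksp}(A_{00})\,A_{01} x,

with an inner solve on A_{00} (this is the Mat of type schurcomplement with its own "KSP of A00" visible in the solver output later in this thread), while a separate matrix \hat{S}_p is used to build the preconditioner for the fieldsplit_1_ solve, e.g. \hat{S}_p = A_{11} (the "a11" choice used in the option set above, matching "Preconditioner for the Schur complement formed from A11" in the output below) or \hat{S}_p = A_{11} - A_{10}\,\mathrm{diag}(A_{00})^{-1} A_{01} (the "selfp" choice, which is the formula quoted under [2] just below).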
This > particular piece of code lives in the users example. > > If you really wanted to do this, the same thing could be configured on > the command line: > > -XXX_ksp_type preonly -XXX_pc_type ksp > > > > > > > You have two choices in how to define the preconditioner > \hat{S_p}: > > > > [1] Assemble your own matrix (as is done in ex70) > > > > [2] Let PETSc build one. PETSc does this according to > > > > \hat{S_p} = A11 - A10 inv(diag(A00)) A01 > > > Regards, > Fabian > > > > > > > > From gabel.fabian at gmail.com Sun Feb 15 14:08:32 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Sun, 15 Feb 2015 21:08:32 +0100 Subject: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm In-Reply-To: References: <1422869962.961.2.camel@gmail.com> <1422871832.961.4.camel@gmail.com> <1423082081.3096.6.camel@gmail.com> <1423174513.3627.1.camel@gmail.com> Message-ID: <1424030912.3226.13.camel@gmail.com> In some further testing I tried to reduce the wall clock time of my program runs by balancing the number of solver iterations needed to apply the preconditioner against the number of inner iterations of the solver for the complete linear system (velocities and pressure). Using -coupledsolve_pc_fieldsplit_schur_fact_type DIAG led to many solver iterations during the application of the preconditioner. On the other hand, -coupledsolve_pc_fieldsplit_schur_fact_type UPPER did not turn out to be a good factorization either: to precondition well enough that the outermost KSP (-coupledsolve_ksp_type) would benefit, too many solver iterations were needed during the application of the preconditioner, which negatively affected the overall run time. I found that the following set of options led to a further notable improvement of the wall clock time from 6083s to 2200s (2e6 unknowns): -coupledsolve_fieldsplit_0_fieldsplit_ksp_type preonly -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml -coupledsolve_fieldsplit_0_ksp_rtol 1e-2 -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 -coupledsolve_fieldsplit_0_pc_type fieldsplit -coupledsolve_fieldsplit_1_ksp_rtol 1e-1 -coupledsolve_fieldsplit_1_ksp_type gmres -coupledsolve_fieldsplit_1_pc_type ilu -coupledsolve_fieldsplit_ksp_converged_reason -coupledsolve_ksp_type fgmres -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_pc_fieldsplit_type schur -coupledsolve_pc_type fieldsplit -coupledsolve_pc_fieldsplit_schur_fact_type -coupledsolve_ksp_monitor I attached the complete output at the end of my mail. Any comments on the present performance and further suggestions are appreciated. I wonder why the lower Schur factorization approximation would perform considerably better than the upper factorization. Is this due to the fact that I use the Schur complement preconditioner as a left preconditioner for GMRES? Furthermore, my solver is able to couple the solution process for the Navier-Stokes equations with the solution process for a temperature equation. The velocity-to-temperature coupling is realized through a Boussinesq approximation, the temperature-to-velocity/pressure coupling comes from a Newton-Raphson linearization of the convective term of the temperature equation like in @article{galpin86, author = {Galpin, P. F. and Raithby, G.
D.}, title = {NUMERICAL SOLUTION OF PROBLEMS IN INCOMPRESSIBLE FLUID FLOW: TREATMENT OF THE TEMPERATURE-VELOCITY COUPLING}, journal = {Numerical Heat Transfer}, volume = {10}, number = {2}, pages = {105-129}, year = {1986}, } Would you also suggest to precondition the resulting linear system for velocity, pressure and temperature via a fieldsplit preconditioner? I am thinking about a nested fieldsplit, where I use on the outermost level a -coupledsolve_pc_fieldsplit_type additive -coupledsolve_pc_fieldsplit_0_fields 0,1,2,3 -coupledsolve_pc_fieldsplit_1_fields 4 Regards, Fabian -------------- next part -------------- Sender: LSF System Subject: Job 529769: in cluster Done Job was submitted from host by user in cluster . Job was executed on host(s) , in queue , as user in cluster . was used as the home directory. was used as the working directory. Started at Sun Feb 15 19:21:29 2015 Results reported at Sun Feb 15 19:58:42 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input #! /bin/sh #BSUB -J lower.fieldsplit_128 #BSUB -o /home/gu08vomo/thesis/fieldsplit/lower_128.out.%J #BSUB -n 1 #BSUB -W 0:40 #BSUB -x #BSUB -q test_mpi2 #BSUB -a openmpi module load openmpi/intel/1.8.2 export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1/ export OUTPUTDIR=/home/gu08vomo/thesis/coupling export PETSC_OPS="-options_file ops.lower" #cat ops.full cat ops.lower echo "PETSC_DIR="$PETSC_DIR echo "MYWORKDIR="$MYWORKDIR echo "PETSC_OPS="$PETSC_OPS cd $MYWORKDIR mpirun -n 1 ./caffa3d.cpld.lnx ${PETSC_OPS} ------------------------------------------------------------ Successfully completed. Resource usage summary: CPU time : 2230.93 sec. Max Memory : 12976 MB Average Memory : 12440.09 MB Total Requested Memory : - Delta Memory : - (Delta: the difference between total requested memory and actual max usage.) 
Max Swap : 13833 MB Max Processes : 6 Max Threads : 11 The output (if any) follows: Modules: loading gcc/4.8.4 Modules: loading intel/2015 Modules: loading openmpi/intel/1.8.2 -coupledsolve_fieldsplit_0_fieldsplit_ksp_type preonly -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml -coupledsolve_fieldsplit_0_ksp_rtol 1e-2 -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 -coupledsolve_fieldsplit_0_pc_type fieldsplit -coupledsolve_fieldsplit_1_ksp_rtol 1e-1 -coupledsolve_fieldsplit_1_ksp_type gmres -coupledsolve_fieldsplit_1_pc_type ilu -coupledsolve_fieldsplit_ksp_converged_reason -coupledsolve_fieldsplit_schur_precondition a11 -coupledsolve_ksp_monitor -coupledsolve_ksp_type fgmres -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_pc_fieldsplit_type schur -coupledsolve_pc_type fieldsplit -coupledsolve_pc_fieldsplit_schur_fact_type lower -on_error_abort -log_summary PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1/ PETSC_OPS=-options_file ops.lower ENTER PROBLEM NAME (SIX CHARACTERS): **************************************************** NAME OF PROBLEM SOLVED control **************************************************** *************************************************** CONTROL SETTINGS *************************************************** LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD F F T F F F F F IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD 8 9 8 1 0 2 2 3 1 1 1 SORMAX, SLARGE, ALFA 0.1000E-07 0.1000E+31 0.9200E+00 (URF(I),I=1,6) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 (SOR(I),I=1,6) 0.1000E-02 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-01 0.1000E-07 (GDS(I),I=1,6) - BLENDING (CDS-UDS) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 LSG 50 *************************************************** START COUPLED ALGORITHM *************************************************** Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.037662565552e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 1 KSP Residual norm 1.553300775148e+00 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 2.017338866144e-01 KSP Object:(coupledsolve_) 1 MPI processes type: fgmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.001, absolute=1e-50, divergence=10000 right preconditioning has attached null space using UNPRECONDITIONED norm type for convergence test PC Object:(coupledsolve_) 1 MPI processes type: fieldsplit FieldSplit with Schur preconditioner, blocksize = 4, factorization LOWER Preconditioner for the Schur complement formed from A11 Split info: Split number 0 Fields 0, 1, 2 Split number 1 Fields 3 KSP solver for A00 block KSP Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.01, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: fieldsplit FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize = 3 Solver info for each split is in the following KSP objects: Split number 0 Fields 0 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: 
(coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, 
divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 1 Fields 1 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: 
relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 2 Fields 2 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI 
processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: 
(coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: seqaij rows=6591000, cols=6591000 total: nonzeros=4.40448e+07, allocated nonzeros=4.40448e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines KSP solver for S = A11 - A10 inv(A00) A01 KSP Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.1, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij 
rows=2197000, cols=2197000 package used to perform factorization: petsc total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix followed by preconditioner matrix: Mat Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: schurcomplement rows=2197000, cols=2197000 Schur complement A11 - A10 inv(A00) A01 A11 Mat Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines A10 Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=6591000 total: nonzeros=4.37453e+07, allocated nonzeros=4.37453e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines KSP of A00 KSP Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.01, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: fieldsplit FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize = 3 Solver info for each split is in the following KSP objects: Split number 0 Fields 0 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local 
iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_0_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: 
nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_0_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 1 Fields 1 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated 
nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_1_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Split number 2 Fields 2 KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=6 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level 
------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=4, cols=4 total: nonzeros=16, allocated nonzeros=16 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=105, cols=105 total: nonzeros=3963, allocated nonzeros=3963 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=6667, cols=6667 total: nonzeros=356943, allocated nonzeros=356943 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) 
Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=263552, cols=263552 total: nonzeros=7.74159e+06, allocated nonzeros=7.74159e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 5 ------------------------------- KSP Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (coupledsolve_fieldsplit_0_fieldsplit_2_mg_levels_5_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_fieldsplit_2_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: (coupledsolve_fieldsplit_0_) 1 MPI processes type: seqaij rows=6591000, cols=6591000 total: nonzeros=4.40448e+07, allocated nonzeros=4.40448e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines A01 Mat Object: 1 MPI processes type: seqaij rows=6591000, cols=2197000 total: nonzeros=4.37453e+07, allocated nonzeros=4.37453e+07 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2170168 nodes, limit used is 5 Mat Object: (coupledsolve_fieldsplit_1_) 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=8788000, cols=8788000 total: nonzeros=1.46217e+08, allocated nonzeros=1.46217e+08 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000001 0.1000E+01 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 1.028233530004e+04 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 1.606838014009e+00 0000002 0.2344E+00 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 5.023800983596e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 7.425469191822e+01 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 3.755849579991e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 3 KSP Residual norm 5.743572058002e-03 0000003 0.5924E-01 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 3.866212850470e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 4.177990847114e+01 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.660417610500e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 3.132317311252e-03 0000004 0.2121E-01 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.343607579183e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.875574578166e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 3.584207775217e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.932340791868e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.591545837209e-04 0000005 0.5494E-02 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 3.371702246940e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 3.123261193022e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.306370439136e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.539759362219e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.769178046009e-04 0000006 0.2255E-02 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.338827190981e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.973320832166e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.237610611667e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.599425547566e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.867219746999e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.421260647843e-05 0000007 0.5761E-03 0.0000E+00 Residual norms for coupledsolve_ solve. 0 KSP Residual norm 3.340189527798e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.996199585312e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.316958897219e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.553781931326e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.836621746595e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.421110608885e-05 0000008 0.2498E-03 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337586153246e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.984071587460e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.312404640336e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.560118557314e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.844610397581e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.432302924294e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.222419066738e-05 0000009 0.6187E-04 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337917307386e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.986690345162e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.318268305495e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.553209668954e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.841345464525e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.428292062041e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.222368753058e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.375577072665e-06 0000010 0.2762E-04 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337739325955e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.986270165904e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.323893440466e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.556775335866e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.846815409723e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.432816928730e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.223887358372e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.379043084412e-06 0000011 0.7066E-05 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337775679923e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.986184744669e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.318965553962e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.554287389364e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.844989110986e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.432217417272e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.222905917426e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.376095357288e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.503544978257e-07 0000012 0.3120E-05 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337771098018e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.984850895132e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.299992865481e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.543700557738e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.825323062426e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.412082595270e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.216648626849e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.364409498341e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.487213324224e-07 0000013 0.8620E-06 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337773446173e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.986810561561e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.323978790588e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.556003828193e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.847922642024e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.433247419446e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.223084365730e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.377689534658e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.505324250977e-07 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 9 KSP Residual norm 1.619232539991e-08 0000014 0.3704E-06 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337773685833e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.986487648392e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.320852529537e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.554903725565e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.846350555775e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.432444915169e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.223086617641e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.377051343103e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.504441967446e-07 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 9 KSP Residual norm 1.616472685210e-08 0000015 0.1094E-06 0.0000E+00 Residual norms for coupledsolve_ solve. 
0 KSP Residual norm 3.337773332937e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.986023728455e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.316768161054e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.552117009388e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.839757998817e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.426910696189e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.222118287831e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.375320747467e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.502808448276e-07 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 9 KSP Residual norm 1.614609627040e-08 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 10 KSP Residual norm 3.541413251773e-09 0000016 0.4662E-07 0.0000E+00 Residual norms for 
coupledsolve_ solve. 0 KSP Residual norm 3.337773277429e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.985267475687e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.304539520997e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.545819623557e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.829675232413e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.416772602773e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.218105119348e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.367882303005e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.490864693800e-07 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 9 KSP Residual norm 1.602654189916e-08 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 10 KSP Residual norm 3.542736180343e-09 0000017 0.1465E-07 0.0000E+00 
Residual norms for coupledsolve_ solve. 0 KSP Residual norm 3.337773332735e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 1 KSP Residual norm 2.985366827171e+01 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 2 KSP Residual norm 4.310215432202e-02 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 3 KSP Residual norm 2.549090410787e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 4 KSP Residual norm 6.834130919436e-04 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 5 KSP Residual norm 6.423278758870e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 6 KSP Residual norm 1.220938587219e-05 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 7 KSP Residual norm 1.371836693851e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 8 KSP Residual norm 1.498846959266e-07 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 9 KSP Residual norm 1.609018063623e-08 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 4 10 KSP Residual norm 3.554071111304e-09 Linear 
solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 4 11 KSP Residual norm 2.396591915816e-09 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 2 Linear solve converged due to CONVERGED_RTOL iterations 3 Linear solve converged due to CONVERGED_RTOL iterations 3 12 KSP Residual norm 1.947579109991e-09 0000018 0.6254E-08 0.0000E+00 TIME FOR CALCULATION: 0.2201E+04 L2-NORM ERROR U VELOCITY 2.804052375769966E-005 L2-NORM ERROR V VELOCITY 2.790892741001562E-005 L2-NORM ERROR W VELOCITY 2.917276753520388E-005 L2-NORM ERROR ABS. VELOCITY 3.168726795771020E-005 L2-NORM ERROR PRESSURE 1.392941064435226E-003 *** CALCULATION FINISHED - SEE RESULTS *** ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./caffa3d.cpld.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0062 with 1 processor, by gu08vomo Sun Feb 15 19:58:42 2015 Using Petsc Release Version 3.5.3, Jan, 31, 2015 Max Max/Min Avg Total Time (sec): 2.230e+03 1.00000 2.230e+03 Objects: 3.157e+03 1.00000 3.157e+03 Flops: 2.167e+12 1.00000 2.167e+12 2.167e+12 Flops/sec: 9.717e+08 1.00000 9.717e+08 9.717e+08 MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Reductions: 0.000e+00 0.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.0491e+02 4.7% 2.6364e+07 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 1: CPLD_SOL: 2.1255e+03 95.3% 2.1672e+12 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage ThreadCommRunKer 5 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNorm 1 1.0 2.3196e-01 1.0 1.76e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 67 0 0 0 76 VecScale 1 1.0 6.8851e-03 1.0 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 1276 VecSet 623 1.0 6.3973e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 VecScatterBegin 663 1.0 1.8907e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 1 1.0 6.8860e-03 1.0 8.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 1276 MatAssemblyBegin 36 1.0 1.2398e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 36 1.0 1.8623e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 0 0 0 0 0 MatZeroEntries 18 1.0 1.3840e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 PetscBarrier 72 1.0 5.6267e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 --- Event Stage 1: CPLD_SOL VecMDot 1683 1.0 1.5884e+01 1.0 4.20e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 2642 VecNorm 2293 1.0 7.7025e+00 1.0 2.68e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 3483 VecScale 6102 1.0 1.6482e+01 1.0 2.17e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 1314 VecCopy 610 1.0 3.1824e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 44223 1.0 2.0095e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 610 1.0 4.4378e+00 1.0 7.21e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1624 VecAYPX 25245 1.0 1.4943e+01 1.0 1.25e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 834 VecMAXPY 2275 1.0 2.5839e+01 1.0 6.16e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 3 0 0 0 1 3 0 0 0 2384 VecScatterBegin 10550 1.0 6.0145e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 VecNormalize 2144 1.0 1.6139e+01 1.0 3.63e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 2250 MatMult 15110 1.0 1.4587e+03 1.0 1.74e+12 1.0 0.0e+00 0.0e+00 0.0e+00 65 80 0 0 0 69 80 0 0 0 1192 MatMultAdd 25593 1.0 7.8538e+01 1.0 1.00e+11 1.0 0.0e+00 0.0e+00 0.0e+00 4 5 0 0 0 4 5 0 0 0 1274 MatSolve 5510 1.0 1.3219e+01 1.0 1.25e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 947 MatSOR 50490 1.0 1.2115e+03 1.0 1.27e+12 1.0 0.0e+00 0.0e+00 0.0e+00 54 58 0 0 0 57 58 0 0 0 1045 MatLUFactorSym 54 1.0 3.5882e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 72 1.0 1.9253e+00 1.0 8.23e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 427 MatILUFactorSym 1 1.0 8.5662e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatResidual 25245 1.0 1.4386e+02 1.0 2.30e+11 1.0 0.0e+00 0.0e+00 0.0e+00 6 11 0 0 0 7 11 0 0 0 1599 MatAssemblyBegin 990 1.0 1.5211e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 990 1.0 8.5273e+00 1.0 0.00e+00 0.0 0.0e+00 
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 55 1.0 5.7697e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetSubMatrice 180 1.0 3.9237e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 MatGetOrdering 55 1.0 9.5439e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 51 1.0 3.1331e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 29 1.0 1.4956e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPGMRESOrthog 1683 1.0 3.4213e+01 1.0 8.39e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 2 4 0 0 0 2453 KSPSetUp 432 1.0 1.5358e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 18 1.0 2.1205e+03 1.0 2.16e+12 1.0 0.0e+00 0.0e+00 0.0e+00 95100 0 0 0 100100 0 0 0 1019 PCSetUp 108 1.0 2.7390e+02 1.0 1.55e+10 1.0 0.0e+00 0.0e+00 0.0e+00 12 1 0 0 0 13 1 0 0 0 57 PCApply 113 1.0 2.0665e+03 1.0 2.11e+12 1.0 0.0e+00 0.0e+00 0.0e+00 93 97 0 0 0 97 97 0 0 0 1019 --- Event Stage 2: Unknown ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Vector 71 257 4813507096 0 Vector Scatter 4 8 5216 0 Index Set 12 30 17599784 0 IS L to G Mapping 4 3 79093788 0 Matrix 2 62 6128226616 0 Krylov Solver 0 25 83112 0 Preconditioner 0 25 28000 0 Distributed Mesh 0 1 4448 0 Star Forest Bipartite Graph 0 2 1632 0 Discrete System 0 1 800 0 --- Event Stage 1: CPLD_SOL Vector 1838 1649 6224809880 0 Vector Scatter 5 0 0 0 Index Set 270 248 196824 0 Matrix 876 816 6070886592 0 Matrix Null Space 18 0 0 0 Krylov Solver 26 1 1328 0 Preconditioner 26 1 1016 0 Viewer 1 0 0 0 Distributed Mesh 1 0 0 0 Star Forest Bipartite Graph 2 0 0 0 Discrete System 1 0 0 0 --- Event Stage 2: Unknown ======================================================================================================================== Average time to get PetscTime(): 0 #PETSc Option Table entries: -coupledsolve_fieldsplit_0_fieldsplit_ksp_type preonly -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml -coupledsolve_fieldsplit_0_ksp_rtol 1e-2 -coupledsolve_fieldsplit_0_ksp_type gmres -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3 -coupledsolve_fieldsplit_0_pc_type fieldsplit -coupledsolve_fieldsplit_1_ksp_rtol 1e-1 -coupledsolve_fieldsplit_1_ksp_type gmres -coupledsolve_fieldsplit_1_pc_type ilu -coupledsolve_fieldsplit_ksp_converged_reason -coupledsolve_fieldsplit_schur_precondition a11 -coupledsolve_ksp_monitor -coupledsolve_ksp_type fgmres -coupledsolve_pc_fieldsplit_0_fields 0,1,2 -coupledsolve_pc_fieldsplit_1_fields 3 -coupledsolve_pc_fieldsplit_block_size 4 -coupledsolve_pc_fieldsplit_schur_fact_type lower -coupledsolve_pc_fieldsplit_type schur -coupledsolve_pc_type fieldsplit -log_summary -on_error_abort #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" 
--with-debugging=0 --download-hypre --download-ml ----------------------------------------- Libraries compiled on Sun Feb 1 16:09:22 2015 on hla0003 Machine characteristics: Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3 Using PETSc arch: arch-openmpi-opt-intel-hlr-ext ----------------------------------------- Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc -fPIC -wd1572 -O3 -xHost ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 -fPIC -O3 -xHost ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include ----------------------------------------- Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 
-L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl ----------------------------------------- From gideon.simpson at gmail.com Sun Feb 15 20:22:34 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sun, 15 Feb 2015 21:22:34 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation question Message-ID: <176A184B-6A0F-4FBD-A591-77AF1822182D@gmail.com> I?m trying to get a handle on the different ways of constructing matrices. Currently, I have: MatCreate(...) MatSetSizes(...) MatSetFromOptions(...) MatSetUp(?) but I gather from reading the manual that, by not preallocating, I?m losing out in performance. 
If I assume that my matrix will either by SeqAIJ or MPIAIJ, depending on the number of processors available, how would I go about doing this. I see some of the example codes with: MatSeqAIJSetPreallocation(?) MatMPIAIJSetPreallocation(?) as successive commands. Should I interpret this as saying that PETSc will just ignore the one that is not the active one in the current instance? -gideon From bsmith at mcs.anl.gov Sun Feb 15 20:31:36 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 15 Feb 2015 20:31:36 -0600 Subject: [petsc-users] MatMPIAIJSetPreallocation question In-Reply-To: <176A184B-6A0F-4FBD-A591-77AF1822182D@gmail.com> References: <176A184B-6A0F-4FBD-A591-77AF1822182D@gmail.com> Message-ID: <4BC345E4-86D8-4D37-AD60-D26ADF9A7C83@mcs.anl.gov> > On Feb 15, 2015, at 8:22 PM, Gideon Simpson wrote: > > I?m trying to get a handle on the different ways of constructing matrices. Currently, I have: > > MatCreate(...) > MatSetSizes(...) > MatSetFromOptions(...) > MatSetUp(?) > > but I gather from reading the manual that, by not preallocating, I?m losing out in performance. If I assume that my matrix will either by SeqAIJ or MPIAIJ, depending on the number of processors available, how would I go about doing this. I see some of the example codes with: > > MatSeqAIJSetPreallocation(?) > MatMPIAIJSetPreallocation(?) > > as successive commands. Should I interpret this as saying that PETSc will just ignore the one that is not the active one in the current instance? Exactly*. You can also use MatXAIJSetPreallocation() which sets the possible preallocation forms for all the matrix types without requiring you to set one each individuallly. Barry * We support this in many places because we hate requiring users to put a bunch of if (type == xxx) kind of code in their applications. For example KSPGMRESSetRestart() is ignored if the KSP type is not GMRES. > > -gideon > From gideon.simpson at gmail.com Sun Feb 15 21:17:04 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sun, 15 Feb 2015 22:17:04 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation question In-Reply-To: <4BC345E4-86D8-4D37-AD60-D26ADF9A7C83@mcs.anl.gov> References: <176A184B-6A0F-4FBD-A591-77AF1822182D@gmail.com> <4BC345E4-86D8-4D37-AD60-D26ADF9A7C83@mcs.anl.gov> Message-ID: <02B6D553-15E3-4122-9B1C-DDF3061C14A7@gmail.com> Got it, one follow up question: When calling MatMPIAIJSetPreallocation, is there a reason why the number of nonzero entries in the off-diagonal sub matrix cannot be zero? -gideon > On Feb 15, 2015, at 9:31 PM, Barry Smith wrote: > > >> On Feb 15, 2015, at 8:22 PM, Gideon Simpson wrote: >> >> I?m trying to get a handle on the different ways of constructing matrices. Currently, I have: >> >> MatCreate(...) >> MatSetSizes(...) >> MatSetFromOptions(...) >> MatSetUp(?) >> >> but I gather from reading the manual that, by not preallocating, I?m losing out in performance. If I assume that my matrix will either by SeqAIJ or MPIAIJ, depending on the number of processors available, how would I go about doing this. I see some of the example codes with: >> >> MatSeqAIJSetPreallocation(?) >> MatMPIAIJSetPreallocation(?) >> >> as successive commands. Should I interpret this as saying that PETSc will just ignore the one that is not the active one in the current instance? > > Exactly*. > > You can also use MatXAIJSetPreallocation() which sets the possible preallocation forms for all the matrix types without requiring you to set one each individuallly. 
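A minimal sketch of the pattern discussed above (illustrative only, not part of the original messages): call every type-specific preallocation routine that might apply and let PETSc ignore the ones that do not match the runtime type, or use MatXAIJSetPreallocation() to cover them all at once. The function name, the local size, and the per-row nonzero counts below are placeholder assumptions.

#include <petscmat.h>

/* Sketch: nlocal and the nonzero counts (7 per row in the diagonal block,
   2 per row in the off-diagonal block) are placeholder values. */
PetscErrorCode CreatePreallocatedAIJ(MPI_Comm comm, PetscInt nlocal, Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A);CHKERRQ(ierr);  /* type becomes seqaij or mpiaij at runtime */

  /* Call both; the routine that does not match the active type is ignored. */
  ierr = MatSeqAIJSetPreallocation(*A, 7, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, 7, NULL, 2, NULL);CHKERRQ(ierr);

  /* Alternative: MatXAIJSetPreallocation(*A, 1, dnnz, onnz, NULL, NULL)
     covers all AIJ-like types in one call, given per-row count arrays
     dnnz/onnz (block size 1 here). */
  return 0;
}

With explicit preallocation in place, the MatSetUp() call from the original snippet becomes redundant (though harmless).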
> > Barry > > * We support this in many places because we hate requiring users to put a bunch of if (type == xxx) kind of code in their applications. For example KSPGMRESSetRestart() is ignored if the KSP type is not GMRES. > > >> >> -gideon >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 15 21:34:39 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 15 Feb 2015 21:34:39 -0600 Subject: [petsc-users] MatMPIAIJSetPreallocation question In-Reply-To: <02B6D553-15E3-4122-9B1C-DDF3061C14A7@gmail.com> References: <176A184B-6A0F-4FBD-A591-77AF1822182D@gmail.com> <4BC345E4-86D8-4D37-AD60-D26ADF9A7C83@mcs.anl.gov> <02B6D553-15E3-4122-9B1C-DDF3061C14A7@gmail.com> Message-ID: <05E765F6-8E2B-420A-9D85-188DBCD107BE@mcs.anl.gov> > On Feb 15, 2015, at 9:17 PM, Gideon Simpson wrote: > > Got it, one follow up question: > > When calling MatMPIAIJSetPreallocation, is there a reason why the number of nonzero entries in the off-diagonal sub matrix cannot be zero? It could be zero; that means there is not coupling between processes however so essentially each process has its own independent problem; thus normally it is most definitely not zero. Barry > > -gideon > >> On Feb 15, 2015, at 9:31 PM, Barry Smith wrote: >> >> >>> On Feb 15, 2015, at 8:22 PM, Gideon Simpson wrote: >>> >>> I?m trying to get a handle on the different ways of constructing matrices. Currently, I have: >>> >>> MatCreate(...) >>> MatSetSizes(...) >>> MatSetFromOptions(...) >>> MatSetUp(?) >>> >>> but I gather from reading the manual that, by not preallocating, I?m losing out in performance. If I assume that my matrix will either by SeqAIJ or MPIAIJ, depending on the number of processors available, how would I go about doing this. I see some of the example codes with: >>> >>> MatSeqAIJSetPreallocation(?) >>> MatMPIAIJSetPreallocation(?) >>> >>> as successive commands. Should I interpret this as saying that PETSc will just ignore the one that is not the active one in the current instance? >> >> Exactly*. >> >> You can also use MatXAIJSetPreallocation() which sets the possible preallocation forms for all the matrix types without requiring you to set one each individuallly. >> >> Barry >> >> * We support this in many places because we hate requiring users to put a bunch of if (type == xxx) kind of code in their applications. For example KSPGMRESSetRestart() is ignored if the KSP type is not GMRES. >> >> >>> >>> -gideon >>> >> > From jed at jedbrown.org Mon Feb 16 00:11:47 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 16 Feb 2015 14:11:47 +0800 Subject: [petsc-users] TSSetIJacobian question In-Reply-To: References: Message-ID: <87bnku1j30.fsf@jedbrown.org> Mark Adams writes: > I am advancing a two equation system with TS that has an additional > constraint equation. I build a 3x3 composite matrix a_ts%FJacobean Any particular reason for spelling it incorrectly? > that has my F(U). I then do: > > call > MatDuplicate(a_ts%FJacobean,MAT_DO_NOT_COPY_VALUES,a_ts%FJacobean2,ierr) > call > TSSetIJacobian(a_ts%ts,a_ts%FJacobean2,a_ts%FJacobean2,FormIJacobian,a_ts,ierr) > > I am thinking my FormIJacobian would look like this: > > ! copy in linear operator > call MatCopy(a_ts%FJacobean,Jpre,ierr);CHKERRQ(ierr) > ! shift 1 & 2 by 'shift' > call MatShift(mat00,shift,ierr);CHKERRQ(ierr) ???? > call MatShift(mat11,shift,ierr);CHKERRQ(ierr) ???? What does this mean? Why not include the shift while assembling? > Is this a good basic approach? 
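A hedged sketch of what is being suggested here, i.e. folding the shift into the assembly of the IJacobian rather than shifting whole matrices afterwards. None of this is from the thread: the context struct, the field layout (interleaved, block size 3, with the algebraic constraint in the third slot), and the assumption that the diagonal is present in the nonzero pattern are all illustrative, and the callback signature should be checked against the PETSc version in use.

#include <petscts.h>

typedef struct {
  Mat Jlin;  /* time-independent part of the Jacobian, assembled once at setup */
} AppCtx;

PetscErrorCode FormIJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal a,
                             Mat A, Mat B, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  PetscInt       i, rstart, rend;
  PetscErrorCode ierr;

  /* Start from the constant linear operator (same nonzero pattern assumed). */
  ierr = MatCopy(user->Jlin, B, SAME_NONZERO_PATTERN);CHKERRQ(ierr);

  /* The requested matrix is dF/dU + a*dF/d(u_t); with a unit u_t term this
     adds a on the diagonal of the differential fields only.  The constraint
     field (every third row in this assumed layout) carries no u_t. */
  ierr = MatGetOwnershipRange(B, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    if (i % 3 != 2) {
      ierr = MatSetValue(B, i, i, a, ADD_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  if (A != B) {
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  return 0;
}

If the preconditioning matrix really is a MATNEST, another possibility is to pull out the diagonal blocks with MatNestGetSubMat() and MatShift() only those; the assembly-time form above simply avoids tying the code to a particular storage format.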
> > I'm not sure how to shift just the first two blocks. MatGetSubMatrix does > not seem usable here. I want these two diagonal block matrices to shift > them. Can I get an array of matrices out of a composite matrix? Since I assume you read the MATCOMPOSITE man page say that all the matrices have the same size, I have no idea what you're asking for. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATCOMPOSITE.html Maybe you don't want a composite matrix? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From pierre.barbierdereuille at gmail.com Mon Feb 16 06:57:23 2015 From: pierre.barbierdereuille at gmail.com (Pierre Barbier de Reuille) Date: Mon, 16 Feb 2015 12:57:23 +0000 Subject: [petsc-users] Setting step acceptance criteria and/or domain validity using TS module References: Message-ID: Hello again, to simplify the process I created a pull request on bitbucket, only for the domain error function (e.g. there is no change to the TSSetCheckStage function. Here is the link to the pull request: https://bitbucket.org/petsc/petsc/pull-request/263 Cheers, Pierre On Fri Feb 13 2015 at 14:59:31 Pierre Barbier de Reuille < pierre.barbierdereuille at gmail.com> wrote: > Hello, > > sorry to bombard you with emails. But here is another patch, still on the > master branch, which adds TSSetFunctionDomainError and > TSFunctionDomainError functions. I tried them with Runge-Kutta and they > work. I think I added the correct calls to all the methods, or at least all > the ones calling TSPostStage. > > Note that I needed to modify the Runge-Kutta call to remove the goto. For > some reason, on my system (Ubuntu 14.04, gcc 4.8.2), the time step would > not get updated if compiled with optimisations. Removing the goto and > replacing it with break/continue prevented that issue. > > Please tell me what you think of the modification. > > Cheers, > > Pierre > > > On Thu Feb 12 2015 at 15:37:48 Pierre Barbier de Reuille < > pierre.barbierdereuille at gmail.com> wrote: > >> Hello, >> >> so here is a patch against the MASTER branch to add time and current >> solution vector to the TSAdaptCheckStage. What I did is add the same >> arguments as for the TSPostStage call. >> I hope I haven't made any mistake. >> >> In addition, if the stage is rejected, PETSc only tried again, changing >> nothing, and therefore failing in the exact same way. So I also added a >> reduction of the time step if the stage is rejected by the user. >> >> Note: I tested the code with the RungeKutta solver only for now. >> >> Cheers, >> >> Pierre >> >> >> On Thu Feb 12 2015 at 03:45:13 Jed Brown wrote: >> >>> Pierre Barbier de Reuille writes: >>> >>> > Ok, I made progress. But: >>> > >>> > 1 - whatever I do, I have very slightly negative values, and >>> therefore all >>> > my steps get rejected (values like 1e-16) >>> > 2 - As I expected, SNES is only used with implicit methods. So if I >>> use >>> > explicit Runge-Kutta, then there is no solution vector stored by the >>> SNES >>> > object. >>> > >>> > Reading the code for the Runge-Kutta solver, it seems that TSPostStage >>> is >>> > where I can retrieve the current state, and TSAdaptCheckStage where I >>> can >>> > reject it. But is this something I can rely on? >>> >>> TSPostStage is only called *after* the stage has been accepted (the step >>> might be rejected later, e.g., based on a local error controller). 
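For the implicit-method case mentioned earlier in this thread (where a SNES is involved), a minimal sketch of flagging out-of-domain values from inside the residual evaluation follows. It assumes a simple positivity constraint on the solution; the function name is illustrative only, and this route does not help with explicit Runge-Kutta, which is exactly what the TSAdaptCheckStage/TSSetFunctionDomainError additions discussed here are for.

#include <petscts.h>

/* IFunction sketch: if the current stage values leave the physical domain
   (here: any negative entry), tell the underlying SNES so the nonlinear
   solve fails cleanly instead of evaluating the model outside its domain. */
static PetscErrorCode FormIFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  const PetscScalar *u;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  ierr = VecGetLocalSize(U, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (PetscRealPart(u[i]) < 0.0) {
      SNES snes;
      ierr = TSGetSNES(ts, &snes);CHKERRQ(ierr);
      ierr = SNESSetFunctionDomainError(snes);CHKERRQ(ierr);
      break;
    }
  }
  ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
  /* ... evaluate F(t,U,Udot) as usual ... */
  return 0;
}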
>>> >>> We should pass the stage solution to TSAdaptCheckStage so you can check >>> it there. I can add this, but I'm at a conference in Singapore this >>> week and have a couple more pressing things, so you'd have to wait until >>> next week unless someone else can do it (or you'd like to submit a >>> patch). >>> >>> We should also add TSSetSetFunctionDomainError() so you can check it >>> there (my preference, actually). >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Feb 16 09:28:57 2015 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 16 Feb 2015 10:28:57 -0500 Subject: [petsc-users] TSSetIJacobian question In-Reply-To: <87bnku1j30.fsf@jedbrown.org> References: <87bnku1j30.fsf@jedbrown.org> Message-ID: Sorry, I was getting "composite" from DMCompositeGetAccessArray, I am talking about MatNest. I can get the matrices with MatNestSetSubMats.... > > > ! copy in linear operator > > call MatCopy(a_ts%FJacobean,Jpre,ierr);CHKERRQ(ierr) > > ! shift 1 & 2 by 'shift' > > call MatShift(mat00,shift,ierr);CHKERRQ(ierr) ???? > > call MatShift(mat11,shift,ierr);CHKERRQ(ierr) ???? > > What does this mean? Why not include the shift while assembling? > Perhaps I am using this incorrectly but the shift is added in "FormIJacobian" and only 2 of the 3 fields have u_t. The rest of the matrix is linear and not time dependant so I create it once in the setup and copy it in in FormIJacobian. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Mon Feb 16 09:29:40 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Mon, 16 Feb 2015 10:29:40 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation question In-Reply-To: <05E765F6-8E2B-420A-9D85-188DBCD107BE@mcs.anl.gov> References: <176A184B-6A0F-4FBD-A591-77AF1822182D@gmail.com> <4BC345E4-86D8-4D37-AD60-D26ADF9A7C83@mcs.anl.gov> <02B6D553-15E3-4122-9B1C-DDF3061C14A7@gmail.com> <05E765F6-8E2B-420A-9D85-188DBCD107BE@mcs.anl.gov> Message-ID: <8DD51FC4-9E28-4967-B755-51451331FE7C@gmail.com> Got it. I had a coding error. -gideon > On Feb 15, 2015, at 10:34 PM, Barry Smith wrote: > > >> On Feb 15, 2015, at 9:17 PM, Gideon Simpson wrote: >> >> Got it, one follow up question: >> >> When calling MatMPIAIJSetPreallocation, is there a reason why the number of nonzero entries in the off-diagonal sub matrix cannot be zero? > > It could be zero; that means there is not coupling between processes however so essentially each process has its own independent problem; thus normally it is most definitely not zero. > > Barry > >> >> -gideon >> >>> On Feb 15, 2015, at 9:31 PM, Barry Smith wrote: >>> >>> >>>> On Feb 15, 2015, at 8:22 PM, Gideon Simpson wrote: >>>> >>>> I?m trying to get a handle on the different ways of constructing matrices. Currently, I have: >>>> >>>> MatCreate(...) >>>> MatSetSizes(...) >>>> MatSetFromOptions(...) >>>> MatSetUp(?) >>>> >>>> but I gather from reading the manual that, by not preallocating, I?m losing out in performance. If I assume that my matrix will either by SeqAIJ or MPIAIJ, depending on the number of processors available, how would I go about doing this. I see some of the example codes with: >>>> >>>> MatSeqAIJSetPreallocation(?) >>>> MatMPIAIJSetPreallocation(?) >>>> >>>> as successive commands. Should I interpret this as saying that PETSc will just ignore the one that is not the active one in the current instance? >>> >>> Exactly*. 
>>> >>> You can also use MatXAIJSetPreallocation() which sets the possible preallocation forms for all the matrix types without requiring you to set one each individuallly. >>> >>> Barry >>> >>> * We support this in many places because we hate requiring users to put a bunch of if (type == xxx) kind of code in their applications. For example KSPGMRESSetRestart() is ignored if the KSP type is not GMRES. >>> >>> >>>> >>>> -gideon >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Mon Feb 16 16:54:09 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Mon, 16 Feb 2015 17:54:09 -0500 Subject: [petsc-users] filename upper case letters Message-ID: Does petsc not distinguish lower/upper case letters in file names? I was trying to write a vector to the file ?Fvec.bin?, but it comes out as ?fvec.bin?. -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Feb 16 16:57:01 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 Feb 2015 16:57:01 -0600 Subject: [petsc-users] filename upper case letters In-Reply-To: References: Message-ID: On Mon, Feb 16, 2015 at 4:54 PM, Gideon Simpson wrote: > Does petsc not distinguish lower/upper case letters in file names? I was > trying to write a vector to the file ?Fvec.bin?, but it comes out as > ?fvec.bin?. > We allow all cases. Very old Windows is case insensitive, but I don't think you have that. Matt > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Feb 16 16:57:35 2015 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 16 Feb 2015 16:57:35 -0600 Subject: [petsc-users] filename upper case letters In-Reply-To: References: Message-ID: Are you using a Mac? It could be (case-insensitive) filesystem issue.. Satish On Mon, 16 Feb 2015, Gideon Simpson wrote: > Does petsc not distinguish lower/upper case letters in file names? I was trying to write a vector to the file ?Fvec.bin?, but it comes out as ?fvec.bin?. > > -gideon > > From gideon.simpson at gmail.com Mon Feb 16 16:57:57 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Mon, 16 Feb 2015 17:57:57 -0500 Subject: [petsc-users] filename upper case letters In-Reply-To: References: Message-ID: Yup, on an OS X 10.10 machine. -gideon > On Feb 16, 2015, at 5:57 PM, Satish Balay wrote: > > Are you using a Mac? It could be (case-insensitive) filesystem issue.. > > Satish > > On Mon, 16 Feb 2015, Gideon Simpson wrote: > >> Does petsc not distinguish lower/upper case letters in file names? I was trying to write a vector to the file ?Fvec.bin?, but it comes out as ?fvec.bin?. >> >> -gideon >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Feb 16 17:03:21 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 Feb 2015 17:03:21 -0600 Subject: [petsc-users] filename upper case letters In-Reply-To: References: Message-ID: On Mon, Feb 16, 2015 at 4:57 PM, Gideon Simpson wrote: > Yup, on an OS X 10.10 machine. > The Mac preserves case, but it matches like case insensitive. Very confusing and stupid design in my opinion. Matt > > -gideon > > On Feb 16, 2015, at 5:57 PM, Satish Balay wrote: > > Are you using a Mac? 
It could be (case-insensitive) filesystem issue.. > > Satish > > On Mon, 16 Feb 2015, Gideon Simpson wrote: > > Does petsc not distinguish lower/upper case letters in file names? I was > trying to write a vector to the file ?Fvec.bin?, but it comes out as > ?fvec.bin?. > > -gideon > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Feb 16 17:27:30 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 16 Feb 2015 17:27:30 -0600 Subject: [petsc-users] filename upper case letters In-Reply-To: References: Message-ID: <20DBA26B-EDD2-4284-9369-436784986585@mcs.anl.gov> Send a tiny case that demonstrates this, we can then reproduce it and debug it. > On Feb 16, 2015, at 4:54 PM, Gideon Simpson wrote: > > Does petsc not distinguish lower/upper case letters in file names? I was trying to write a vector to the file ?Fvec.bin?, but it comes out as ?fvec.bin?. > > -gideon > From gideon.simpson at gmail.com Mon Feb 16 17:39:14 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Mon, 16 Feb 2015 18:39:14 -0500 Subject: [petsc-users] filename upper case letters In-Reply-To: <20DBA26B-EDD2-4284-9369-436784986585@mcs.anl.gov> References: <20DBA26B-EDD2-4284-9369-436784986585@mcs.anl.gov> Message-ID: It has the following behavior, which is consistent with what was said about OS X. If there is no xvec.bin/xvec.bin.info, and you run this code, it saves to Xvec.bin as desired. However, if xvec.bin/xvec.bin.info are present, it overwrites them, using the lower case filename. #include int main(int argc,char **argv) { PetscInt n=100; char X_filename[PETSC_MAX_PATH_LEN]="Xvec.bin"; Vec X; PetscViewer viewer; PetscScalar a=2.0; PetscInitialize(&argc,&argv,NULL,NULL); VecCreate(PETSC_COMM_WORLD,&X); VecSetSizes(X,PETSC_DECIDE,n); VecSetFromOptions(X); VecSet(X, a); PetscViewerBinaryOpen(PETSC_COMM_WORLD, X_filename,FILE_MODE_WRITE,&viewer); VecView(X, viewer); PetscViewerDestroy(&viewer); VecDestroy(&X); PetscFinalize(); return 0; } -gideon > On Feb 16, 2015, at 6:27 PM, Barry Smith wrote: > > > Send a tiny case that demonstrates this, we can then reproduce it and debug it. > > >> On Feb 16, 2015, at 4:54 PM, Gideon Simpson wrote: >> >> Does petsc not distinguish lower/upper case letters in file names? I was trying to write a vector to the file ?Fvec.bin?, but it comes out as ?fvec.bin?. >> >> -gideon >> > From jed at jedbrown.org Mon Feb 16 17:50:15 2015 From: jed at jedbrown.org (Jed Brown) Date: Tue, 17 Feb 2015 07:50:15 +0800 Subject: [petsc-users] filename upper case letters In-Reply-To: References: Message-ID: <87h9ulzaa0.fsf@jedbrown.org> Matthew Knepley writes: > On Mon, Feb 16, 2015 at 4:57 PM, Gideon Simpson > wrote: > >> Yup, on an OS X 10.10 machine. >> > > The Mac preserves case, but it matches like case insensitive. Very > confusing and stupid design in my opinion. Case-insensitive comparison for Unicode depends on the locale. Why anyone would consider putting such crap in a file system is incomprehensible. Linus: "The true horrors of HFS+ are not in how it's not a great filesystem, but in how it's *actively* designed to be a *bad* filesystem by people who thought they had good ideas." https://plus.google.com/+JunioCHamano/posts/1Bpaj3e3Rru -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From siddhesh4godbole at gmail.com Tue Feb 17 00:03:14 2015 From: siddhesh4godbole at gmail.com (siddhesh godbole) Date: Tue, 17 Feb 2015 11:33:14 +0530 Subject: [petsc-users] slepc Message-ID: Hello, Can i use slepc eps solver to calculate all the eigenvalues and eigenvectors? eps_all appears to calculate all eigenvalues in the the domain [a, b] . so can i ask for all the eigenvalues by practically putting b as infinity (very large value) ? I cant use ScalLAPACK as its routines are in FORTRAN, and due to time constraints i cant learn and then implement FORTRAN. please help *Siddhesh M Godbole* 5th year Dual Degree, Civil Eng & Applied Mech. IIT Madras -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Feb 17 00:18:51 2015 From: jed at jedbrown.org (Jed Brown) Date: Tue, 17 Feb 2015 14:18:51 +0800 Subject: [petsc-users] slepc In-Reply-To: References: Message-ID: <87r3tpxdpw.fsf@jedbrown.org> siddhesh godbole writes: > Hello, > > Can i use slepc eps solver to calculate all the eigenvalues and > eigenvectors? eps_all appears to calculate all eigenvalues in the the > domain [a, b] . so can i ask for all the eigenvalues by practically putting > b as infinity (very large value) ? Use LAPACK or Elemental if you want all eigenvalues. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From gideon.simpson at gmail.com Tue Feb 17 07:46:59 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Tue, 17 Feb 2015 08:46:59 -0500 Subject: [petsc-users] parallel interpolation? Message-ID: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> Suppose I have data in Vec x and Vec y, and I want to interpolate this onto Vec xx, storing the values in Vec yy. All vectors have the same layout. The problem is that, for example, some of the values in xx on processor 0 may need the values of x and y on processor 1, and so on. Aside from just using sequential vectors, so that everything is local, is there a reasonable way to make this computation? -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Feb 17 08:10:34 2015 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Feb 2015 08:10:34 -0600 Subject: [petsc-users] parallel interpolation? In-Reply-To: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> References: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> Message-ID: On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson wrote: > Suppose I have data in Vec x and Vec y, and I want to interpolate this > onto Vec xx, storing the values in Vec yy. All vectors have the same > layout. The problem is that, for example, some of the values in xx on > processor 0 may need the values of x and y on processor 1, and so on. > Aside from just using sequential vectors, so that everything is local, is > there a reasonable way to make this computation? > At the most basic linear algebra level, you would construct a VecScatter which mapped the pieces you need from other processes into a local vector along with the local portion, and you would use that to calculate values, which you then put back into your owned portion of a global vector. Thus local vectors have halos and global vectors do not. 
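A minimal sketch of that purely algebraic route, assuming the application has already worked out which global indices it needs (for example, from where the interpolation points of xx fall); the function and argument names are illustrative:

#include <petscvec.h>

/* Pull an arbitrary set of global entries of x into a local sequential
   vector xloc on each process. 'needed' holds the nneeded global indices
   this process wants; determining them is the application's job. */
PetscErrorCode GatherNeeded(Vec x, PetscInt nneeded, const PetscInt needed[], Vec *xloc)
{
  IS             from, to;
  VecScatter     scatter;
  PetscErrorCode ierr;

  ierr = VecCreateSeq(PETSC_COMM_SELF, nneeded, xloc);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, nneeded, needed, PETSC_COPY_VALUES, &from);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, nneeded, 0, 1, &to);CHKERRQ(ierr);
  ierr = VecScatterCreate(x, from, *xloc, to, &scatter);CHKERRQ(ierr);
  ierr = VecScatterBegin(scatter, x, *xloc, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scatter, x, *xloc, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
  ierr = ISDestroy(&from);CHKERRQ(ierr);
  ierr = ISDestroy(&to);CHKERRQ(ierr);
  return 0;
}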
If you halo regions (values you need from other processes) have a common topology, then we have simpler support that will make the VecScatter for you. For example, if your values lie on a Cartesian grid and you just need neighbors within distance k, you can use a DMDA to express this and automatically make the VecScatter. Likewise, if you values lie on an unstructured mesh and you need a distance k adjacency, DMPlex can create the scatter for you. If you are creating the VecScatter yourself, it might be easier to use the new PetscSF instead since it only needs one-sided information, and performs the same job. This is what DMPlex uses to do the communication. Thanks, Matt > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Tue Feb 17 08:15:42 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Tue, 17 Feb 2015 09:15:42 -0500 Subject: [petsc-users] parallel interpolation? In-Reply-To: References: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> Message-ID: <57071C37-3175-4A02-8A6C-AC8B9D240651@gmail.com> I?m gathering from your suggestions that I would need, a priori, knowledge of how many ghost points I would need, is that right? -gideon > On Feb 17, 2015, at 9:10 AM, Matthew Knepley wrote: > > On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson > wrote: > Suppose I have data in Vec x and Vec y, and I want to interpolate this onto Vec xx, storing the values in Vec yy. All vectors have the same layout. The problem is that, for example, some of the values in xx on processor 0 may need the values of x and y on processor 1, and so on. Aside from just using sequential vectors, so that everything is local, is there a reasonable way to make this computation? > > At the most basic linear algebra level, you would construct a VecScatter which mapped the pieces you need from other processes into a local vector along with the local portion, and you would use that to calculate values, which you then put back into your owned portion of a global vector. Thus local vectors have halos and global vectors do not. > > If you halo regions (values you need from other processes) have a common topology, then we have simpler > support that will make the VecScatter for you. For example, if your values lie on a Cartesian grid and you > just need neighbors within distance k, you can use a DMDA to express this and automatically make the > VecScatter. Likewise, if you values lie on an unstructured mesh and you need a distance k adjacency, > DMPlex can create the scatter for you. > > If you are creating the VecScatter yourself, it might be easier to use the new PetscSF instead since it only needs one-sided information, and performs the same job. This is what DMPlex uses to do the communication. > > Thanks, > > Matt > > -gideon > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
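For the structured-grid case, the a-priori information is just the stencil (ghost) width given to the DMDA when it is created; a minimal sketch, with placeholder grid size and width:

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM       da;
  Vec      xglobal, xlocal;
  PetscInt N = 100, dof = 1, sw = 2;   /* sw = ghost width, assumed known up front */

  PetscInitialize(&argc, &argv, NULL, NULL);
  DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, N, dof, sw, NULL, &da);
  DMCreateGlobalVector(da, &xglobal);
  DMCreateLocalVector(da, &xlocal);
  VecSet(xglobal, 1.0);                /* stand-in for real data */
  DMGlobalToLocalBegin(da, xglobal, INSERT_VALUES, xlocal);
  DMGlobalToLocalEnd(da, xglobal, INSERT_VALUES, xlocal);
  /* xlocal now holds the owned entries plus sw ghost values from each neighbour */
  VecDestroy(&xlocal);
  VecDestroy(&xglobal);
  DMDestroy(&da);
  PetscFinalize();
  return 0;
}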
URL: From lawrence.mitchell at imperial.ac.uk Tue Feb 17 08:49:32 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Tue, 17 Feb 2015 14:49:32 +0000 Subject: [petsc-users] Issue with window SF type using derived datatypes for reduction In-Reply-To: <871tlyzh5c.fsf@jedbrown.org> References: <59C95D6C-FD8F-4986-B42E-1305F6198CF5@imperial.ac.uk> <871tlyzh5c.fsf@jedbrown.org> Message-ID: <54E354FC.507@imperial.ac.uk> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/02/15 01:31, Jed Brown wrote: > Lawrence Mitchell writes: >> Having just tried a build with --download-mpich, I notice this >> problem does not occur. So should I shout at the OpenMPI team? > > Open MPI has many long-standing bugs with one-sided and datatypes. > I have pleaded with them to error instead of corrupting memory or > returning wrong results for unsupported cases. My recommendation > is to not use -sf_type window with Open MPI. > > I hear that they have newfound interest in fixing the decade-old > one-sided bugs, so I would say it is worth reporting this issue. Done, and seemingly fixed, in https://github.com/open-mpi/ompi/issues/385 I shall see if any more issues arise. Cheers, Lawrence -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU41T4AAoJECOc1kQ8PEYv3/cH/0/Q1tLsf1et4EVXE4hPOgOs 1EfqELCuX+qtSO0obGeZ4n7x1ixGAB2l0OnHCv4qXOp57pT2WNAFtMlCizZSsBTX ekGdwWJIIV99Qh8NdnaGBKTZLA1DWrgTueIzmdAJrPSmU5DU8kWrrGW0Qr7nwtwn zBCc2iQXCwJLgEDKIhIQh9uPrNFWa4IQohD/9UFrD/TetT5CEdYrAvnxhcirz1qX 3R3jbFTB11slQlJ/txRqJNDhlRlSAI5mtRxwMRDG+lI/UF1I782kR8ClXOODjHJ+ eOVqQh4rQ3atRlJ2ynr7RB86BWHxM9Ktl+nGGUQyT1W+07DoUZ+UrctLT4gecws= =KP++ -----END PGP SIGNATURE----- From knepley at gmail.com Tue Feb 17 09:11:38 2015 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Feb 2015 09:11:38 -0600 Subject: [petsc-users] parallel interpolation? In-Reply-To: <57071C37-3175-4A02-8A6C-AC8B9D240651@gmail.com> References: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> <57071C37-3175-4A02-8A6C-AC8B9D240651@gmail.com> Message-ID: On Tue, Feb 17, 2015 at 8:15 AM, Gideon Simpson wrote: > I?m gathering from your suggestions that I would need, a priori, knowledge > of how many ghost points I would need, is that right? > We have to be more precise about a priori. You can certainly create a VecScatter on the fly every time if your communication pattern is changing. However, how will you know what needs to be communicated. Matt > -gideon > > On Feb 17, 2015, at 9:10 AM, Matthew Knepley wrote: > > On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson > wrote: > >> Suppose I have data in Vec x and Vec y, and I want to interpolate this >> onto Vec xx, storing the values in Vec yy. All vectors have the same >> layout. The problem is that, for example, some of the values in xx on >> processor 0 may need the values of x and y on processor 1, and so on. >> Aside from just using sequential vectors, so that everything is local, is >> there a reasonable way to make this computation? >> > > At the most basic linear algebra level, you would construct a VecScatter > which mapped the pieces you need from other processes into a local vector > along with the local portion, and you would use that to calculate values, > which you then put back into your owned portion of a global vector. Thus > local vectors have halos and global vectors do not. > > If you halo regions (values you need from other processes) have a common > topology, then we have simpler > support that will make the VecScatter for you. 
For example, if your values > lie on a Cartesian grid and you > just need neighbors within distance k, you can use a DMDA to express this > and automatically make the > VecScatter. Likewise, if you values lie on an unstructured mesh and you > need a distance k adjacency, > DMPlex can create the scatter for you. > > If you are creating the VecScatter yourself, it might be easier to use the > new PetscSF instead since it only needs one-sided information, and performs > the same job. This is what DMPlex uses to do the communication. > > Thanks, > > Matt > > >> -gideon >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabel.fabian at gmail.com Tue Feb 17 09:14:50 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Tue, 17 Feb 2015 16:14:50 +0100 Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions Message-ID: <1424186090.3298.2.camel@gmail.com> Dear PETSc team, I am trying to optimize the solver parameters for the linear system I get, when I discretize the pressure correction equation Poisson equation with Neumann boundary conditions) in a SIMPLE-type algorithm using a finite volume method. The resulting system is symmetric and positive semi-definite. A basis to the associated nullspace has been provided to the KSP object. Using a CG solver with ICC preconditioning the solver needs a lot of inner iterations to converge (-ksp_monitor -ksp_view output attached for a case with approx. 2e6 unknowns; the lines beginning with 000XXXX show the relative residual regarding the initial residual in the outer iteration no. 1 for the variables u,v,w,p). Furthermore I don't quite understand, why the solver reports Linear solve did not converge due to DIVERGED_INDEFINITE_PC at the later stages of my Picard iteration process (iteration 0001519). I then tried out CG+GAMG preconditioning with success regarding the number of inner iterations, but without advantages regarding wall time (output attached). Also the DIVERGED_INDEFINITE_PC reason shows up repeatedly after iteration 0001487. I used the following options -pressure_mg_coarse_sub_pc_type svd -pressure_mg_levels_ksp_rtol 1e-4 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_pc_gamg_agg_nsmooths 1 -pressure_pc_type gamg I would like to get an opinion on how the solver performance could be increased further. -log_summary shows that my code spends 80% of the time solving the linear systems for the pressure correction (STAGE 2: PRESSCORR). Furthermore, do you know what could be causing the DIVERGED_INDEFINITE_PC converged reason? Regards, Fabian Gabel -------------- next part -------------- Sender: LSF System Subject: Job 531314: in cluster Done Job was submitted from host by user in cluster . Job was executed on host(s) , in queue , as user in cluster . was used as the home directory. was used as the working directory. Started at Mon Feb 16 11:49:23 2015 Results reported at Mon Feb 16 23:14:32 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input #! 
/bin/sh #BSUB -J mg_test #BSUB -o /home/gu08vomo/thesis/mgtest/gamg.128.out.%J #BSUB -n 1 #BSUB -W 14:00 #BSUB -x #BSUB -q test_mpi2 #BSUB -a openmpi module load openmpi/intel/1.8.2 #export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg/ export OUTPUTDIR=/home/gu08vomo/thesis/coupling export PETSC_OPS="-options_file ops.gamg" cat ops.gamg echo "PETSC_DIR="$PETSC_DIR echo "MYWORKDIR="$MYWORKDIR cd $MYWORKDIR mpirun -n 1 ./caffa3d.MB.lnx ${PETSC_OPS} ------------------------------------------------------------ Successfully completed. Resource usage summary: CPU time : 41126.00 sec. Max Memory : 2905 MB Average Memory : 2243.40 MB Total Requested Memory : - Delta Memory : - (Delta: the difference between total requested memory and actual max usage.) Max Swap : 3658 MB Max Processes : 6 Max Threads : 11 The output (if any) follows: Modules: loading gcc/4.8.4 Modules: loading intel/2015 Modules: loading openmpi/intel/1.8.2 -momentum_ksp_type gmres -pressure_pc_type gamg -pressure_mg_coarse_sub_pc_type svd -pressure_pc_gamg_agg_nsmooths 1 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_mg_levels_ksp_rtol 1e-4 -log_summary -options_left -pressure_ksp_converged_reason PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg/ ENTER PROBLEM NAME (SIX CHARACTERS): *************************************************** NAME OF PROBLEM SOLVED control *************************************************** *************************************************** CONTROL SETTINGS *************************************************** LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD F F T F F F F F IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD 8 9 8 1 0 2 2 3 1 1 1 SORMAX, SLARGE, ALFA 0.1000E-07 0.1000E+31 0.9200E+00 (URF(I),I=1,5) 0.9000E+00 0.9000E+00 0.9000E+00 0.1000E+00 0.1000E+01 (SOR(I),I=1,5) 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 (GDS(I),I=1,5) - BLENDING (CDS-UDS) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 LSG 100000 *************************************************** START SIMPLE RELAXATIONS *************************************************** Linear solve converged due to CONVERGED_RTOL iterations 2 KSP Object:(pressure_) 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=0.1, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(pressure_) 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=4 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (pressure_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (pressure_mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (pressure_mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm 
type for convergence test PC Object: (pressure_mg_coarse_sub_) 1 MPI processes type: svd linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=26, cols=26 total: nonzeros=536, allocated nonzeros=536 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=26, cols=26 total: nonzeros=536, allocated nonzeros=536 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (pressure_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (pressure_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2781, cols=2781 total: nonzeros=156609, allocated nonzeros=156609 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (pressure_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (pressure_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=188698, cols=188698 total: nonzeros=6.12809e+06, allocated nonzeros=6.12809e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (pressure_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (pressure_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000001 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 2 0000002 0.9073E+00 0.8614E+00 0.9079E+00 0.6833E+00 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 4 0000003 0.5342E+00 0.4470E+00 0.5272E+00 0.2646E+00 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 5 0000004 0.1644E+00 0.1311E+00 0.1656E+00 
0.5821E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 5
[... iterations 0000005 through 0000265 omitted: each repeats the same pattern of one residual-norm line followed by "Linear solve converged due to CONVERGED_RTOL iterations N", with the inner CG+GAMG iteration count N rising gradually from 6 to about 13 ...]
0000266 0.2151E-03 0.8790E-04 0.2148E-03 0.1591E-05
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000267 0.2135E-03 0.8030E-04 0.2134E-03 0.1471E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000268 0.2117E-03 0.7950E-04 0.2117E-03 0.1359E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000269 0.2101E-03 0.7882E-04 0.2103E-03 0.1394E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000270 0.2089E-03 0.7818E-04 0.2094E-03 0.1300E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000271 0.2083E-03 0.7770E-04 0.2093E-03 0.1337E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000272 0.2087E-03 0.7739E-04 0.2105E-03 0.1261E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000273 0.2106E-03 0.7745E-04 0.2134E-03 0.2525E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000274 0.1988E-03 0.7981E-04 0.1987E-03 0.2166E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000275 0.1968E-03 0.8195E-04 0.1967E-03 0.1555E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000276 0.1956E-03 0.7361E-04 0.1955E-03 0.1699E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000277 0.1940E-03 0.7287E-04 0.1941E-03 0.1368E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000278 0.1927E-03 0.7227E-04 0.1930E-03 0.1618E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000279 0.1918E-03 0.7169E-04 0.1925E-03 0.1369E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000280 0.1918E-03 0.7126E-04 0.1929E-03 0.1553E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000281 0.1928E-03 0.7101E-04 0.1947E-03 0.3926E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000282 0.1963E-03 0.7188E-04 0.1830E-03 0.1588E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000283 0.1821E-03 0.7385E-04 0.1819E-03 0.1420E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000284 0.1807E-03 0.6810E-04 0.1807E-03 0.1245E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000285 0.1792E-03 0.6742E-04 0.1792E-03 0.1247E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000286 0.1778E-03 0.6683E-04 0.1780E-03 0.1193E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000287 0.1768E-03 0.6628E-04 0.1773E-03 0.1187E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000288 0.1763E-03 0.6582E-04 0.1772E-03 0.1152E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000289 0.1766E-03 0.6550E-04 0.1783E-03 0.1137E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000290 0.1782E-03 0.6545E-04 0.1808E-03 0.2284E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000291 0.1684E-03 0.6730E-04 0.1683E-03 0.2099E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000292 0.1667E-03 0.6896E-04 0.1666E-03 0.1336E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000293 0.1656E-03 0.6244E-04 0.1656E-03 0.1410E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000294 0.1643E-03 0.6181E-04 0.1643E-03 0.1176E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000295 0.1632E-03 0.6129E-04 0.1634E-03 0.1331E-05 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 13 0000296 0.1624E-03 0.6078E-04 0.1630E-03 0.1154E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000297 0.1624E-03 0.6039E-04 0.1634E-03 0.1264E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000298 0.1632E-03 0.6013E-04 0.1650E-03 0.3295E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000299 0.1662E-03 0.6079E-04 0.1550E-03 0.1318E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000300 0.1543E-03 0.6235E-04 0.1541E-03 0.1209E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000301 0.1531E-03 0.5778E-04 0.1530E-03 0.1041E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000302 0.1518E-03 0.5720E-04 0.1518E-03 0.1059E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000303 0.1506E-03 0.5670E-04 0.1508E-03 0.9945E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000304 0.1498E-03 0.5621E-04 0.1502E-03 0.9973E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000305 0.1493E-03 0.5581E-04 0.1502E-03 0.9551E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000306 0.1496E-03 0.5551E-04 0.1511E-03 0.9445E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000307 0.1509E-03 0.5542E-04 0.1532E-03 0.1974E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000308 0.1427E-03 0.5694E-04 0.1426E-03 0.1819E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000309 0.1412E-03 0.5827E-04 0.1411E-03 0.1119E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000310 0.1403E-03 0.5298E-04 0.1403E-03 0.1189E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000311 0.1392E-03 0.5245E-04 0.1392E-03 0.9872E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000312 0.1382E-03 0.5200E-04 0.1385E-03 0.1115E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000313 0.1376E-03 0.5156E-04 0.1382E-03 0.9620E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000314 0.1376E-03 0.5121E-04 0.1385E-03 0.1052E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000315 0.1383E-03 0.5097E-04 0.1399E-03 0.2759E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000316 0.1408E-03 0.5150E-04 0.1314E-03 0.1106E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000317 0.1307E-03 0.5278E-04 0.1306E-03 0.1019E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000318 0.1297E-03 0.4903E-04 0.1297E-03 0.8745E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000319 0.1286E-03 0.4854E-04 0.1286E-03 0.8928E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000320 0.1277E-03 0.4811E-04 0.1278E-03 0.8339E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000321 0.1269E-03 0.4770E-04 0.1273E-03 0.8377E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000322 0.1266E-03 0.4735E-04 0.1273E-03 0.7987E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000323 0.1268E-03 0.4708E-04 0.1281E-03 0.7896E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000324 0.1279E-03 0.4698E-04 0.1300E-03 0.1680E-05 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 13 0000325 0.1209E-03 0.4825E-04 0.1209E-03 0.1553E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000326 0.1197E-03 0.4934E-04 0.1197E-03 0.9419E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000327 0.1189E-03 0.4496E-04 0.1189E-03 0.9997E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000328 0.1180E-03 0.4451E-04 0.1181E-03 0.8309E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000329 0.1172E-03 0.4412E-04 0.1174E-03 0.9351E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000330 0.1167E-03 0.4375E-04 0.1171E-03 0.8067E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000331 0.1166E-03 0.4345E-04 0.1175E-03 0.8791E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000332 0.1173E-03 0.4323E-04 0.1187E-03 0.2317E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000333 0.1194E-03 0.4367E-04 0.1114E-03 0.9303E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000334 0.1108E-03 0.4474E-04 0.1107E-03 0.8609E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000335 0.1100E-03 0.4161E-04 0.1100E-03 0.7358E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000336 0.1091E-03 0.4120E-04 0.1091E-03 0.7539E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000337 0.1083E-03 0.4083E-04 0.1084E-03 0.7014E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000338 0.1076E-03 0.4048E-04 0.1080E-03 0.7059E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000339 0.1073E-03 0.4018E-04 0.1080E-03 0.6709E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000340 0.1075E-03 0.3994E-04 0.1087E-03 0.6636E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000341 0.1085E-03 0.3985E-04 0.1103E-03 0.1428E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000342 0.1026E-03 0.4092E-04 0.1025E-03 0.1322E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000343 0.1015E-03 0.4184E-04 0.1015E-03 0.7947E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000344 0.1009E-03 0.3816E-04 0.1009E-03 0.8429E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000345 0.1001E-03 0.3778E-04 0.1001E-03 0.7008E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000346 0.9940E-04 0.3745E-04 0.9960E-04 0.7870E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000347 0.9898E-04 0.3713E-04 0.9939E-04 0.6789E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000348 0.9894E-04 0.3687E-04 0.9970E-04 0.7382E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000349 0.9951E-04 0.3668E-04 0.1007E-03 0.1953E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000350 0.1013E-03 0.3705E-04 0.9447E-04 0.7849E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000351 0.9403E-04 0.3795E-04 0.9393E-04 0.7284E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000352 0.9331E-04 0.3531E-04 0.9329E-04 0.6206E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000353 0.9253E-04 0.3496E-04 0.9256E-04 0.6375E-06 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 14 0000354 0.9184E-04 0.3465E-04 0.9197E-04 0.5914E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000355 0.9132E-04 0.3435E-04 0.9162E-04 0.5961E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000356 0.9107E-04 0.3409E-04 0.9167E-04 0.5652E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000357 0.9127E-04 0.3389E-04 0.9229E-04 0.5593E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000358 0.9209E-04 0.3381E-04 0.9368E-04 0.1214E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000359 0.8703E-04 0.3472E-04 0.8698E-04 0.1124E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000360 0.8615E-04 0.3550E-04 0.8612E-04 0.6722E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000361 0.8559E-04 0.3238E-04 0.8560E-04 0.7121E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000362 0.8492E-04 0.3206E-04 0.8499E-04 0.5921E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000363 0.8436E-04 0.3178E-04 0.8454E-04 0.6641E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000364 0.8401E-04 0.3151E-04 0.8437E-04 0.5727E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000365 0.8399E-04 0.3129E-04 0.8465E-04 0.6218E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000366 0.8448E-04 0.3113E-04 0.8555E-04 0.1651E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000367 0.8606E-04 0.3144E-04 0.8018E-04 0.6638E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000368 0.7981E-04 0.3221E-04 0.7973E-04 0.6173E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000369 0.7920E-04 0.2997E-04 0.7919E-04 0.5241E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000370 0.7854E-04 0.2967E-04 0.7857E-04 0.5399E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000371 0.7796E-04 0.2941E-04 0.7808E-04 0.4995E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000372 0.7753E-04 0.2915E-04 0.7780E-04 0.5042E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000373 0.7732E-04 0.2893E-04 0.7785E-04 0.4770E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000374 0.7750E-04 0.2876E-04 0.7839E-04 0.4723E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000375 0.7821E-04 0.2869E-04 0.7960E-04 0.1033E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000376 0.7388E-04 0.2947E-04 0.7385E-04 0.9571E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000377 0.7314E-04 0.3013E-04 0.7313E-04 0.5696E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000378 0.7267E-04 0.2748E-04 0.7269E-04 0.6026E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000379 0.7210E-04 0.2721E-04 0.7217E-04 0.5010E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000380 0.7163E-04 0.2697E-04 0.7180E-04 0.5615E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000381 0.7134E-04 0.2674E-04 0.7167E-04 0.4840E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000382 0.7133E-04 0.2655E-04 0.7191E-04 0.5249E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 13 0000383 0.7176E-04 0.2642E-04 0.7270E-04 0.1399E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000384 0.7313E-04 0.2668E-04 0.6810E-04 0.5623E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000385 0.6778E-04 0.2734E-04 0.6772E-04 0.5239E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000386 0.6726E-04 0.2543E-04 0.6726E-04 0.4432E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000387 0.6671E-04 0.2518E-04 0.6674E-04 0.4577E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000388 0.6621E-04 0.2495E-04 0.6633E-04 0.4224E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000389 0.6585E-04 0.2474E-04 0.6610E-04 0.4271E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000390 0.6568E-04 0.2455E-04 0.6615E-04 0.4032E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000391 0.6584E-04 0.2441E-04 0.6663E-04 0.3993E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000392 0.6646E-04 0.2435E-04 0.6767E-04 0.8804E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000393 0.6276E-04 0.2501E-04 0.6275E-04 0.8152E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000394 0.6214E-04 0.2558E-04 0.6213E-04 0.4833E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000395 0.6173E-04 0.2332E-04 0.6176E-04 0.5105E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000396 0.6126E-04 0.2309E-04 0.6133E-04 0.4244E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000397 0.6086E-04 0.2288E-04 0.6102E-04 0.4753E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000398 0.6062E-04 0.2269E-04 0.6091E-04 0.4096E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000399 0.6062E-04 0.2253E-04 0.6113E-04 0.4437E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000400 0.6100E-04 0.2242E-04 0.6181E-04 0.1187E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000401 0.6218E-04 0.2265E-04 0.5787E-04 0.4770E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000402 0.5759E-04 0.2321E-04 0.5755E-04 0.4451E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000403 0.5716E-04 0.2158E-04 0.5717E-04 0.3752E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000404 0.5669E-04 0.2136E-04 0.5673E-04 0.3885E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000405 0.5627E-04 0.2117E-04 0.5639E-04 0.3575E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000406 0.5597E-04 0.2099E-04 0.5619E-04 0.3621E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000407 0.5583E-04 0.2083E-04 0.5625E-04 0.3411E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000408 0.5598E-04 0.2071E-04 0.5667E-04 0.3381E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000409 0.5651E-04 0.2066E-04 0.5757E-04 0.7505E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000410 0.5335E-04 0.2123E-04 0.5334E-04 0.6947E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000411 0.5282E-04 0.2171E-04 0.5282E-04 0.4105E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 
0000412 0.5248E-04 0.1978E-04 0.5251E-04 0.4329E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000413 0.5208E-04 0.1959E-04 0.5215E-04 0.3599E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000414 0.5174E-04 0.1942E-04 0.5189E-04 0.4028E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000415 0.5154E-04 0.1925E-04 0.5180E-04 0.3470E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000416 0.5155E-04 0.1912E-04 0.5200E-04 0.3755E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000417 0.5188E-04 0.1902E-04 0.5259E-04 0.1009E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000418 0.5290E-04 0.1922E-04 0.4921E-04 0.4049E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000419 0.4897E-04 0.1970E-04 0.4895E-04 0.3785E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000420 0.4860E-04 0.1831E-04 0.4863E-04 0.3179E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000421 0.4821E-04 0.1813E-04 0.4826E-04 0.3299E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000422 0.4786E-04 0.1796E-04 0.4797E-04 0.3029E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000423 0.4761E-04 0.1781E-04 0.4781E-04 0.3073E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000424 0.4749E-04 0.1767E-04 0.4786E-04 0.2888E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000425 0.4762E-04 0.1757E-04 0.4823E-04 0.2864E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000426 0.4809E-04 0.1753E-04 0.4900E-04 0.6402E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000427 0.4538E-04 0.1802E-04 0.4538E-04 0.5924E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000428 0.4493E-04 0.1843E-04 0.4494E-04 0.3489E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000429 0.4464E-04 0.1678E-04 0.4468E-04 0.3674E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000430 0.4430E-04 0.1662E-04 0.4437E-04 0.3053E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000431 0.4402E-04 0.1647E-04 0.4416E-04 0.3416E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000432 0.4386E-04 0.1633E-04 0.4409E-04 0.2941E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000433 0.4387E-04 0.1622E-04 0.4427E-04 0.3181E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000434 0.4416E-04 0.1614E-04 0.4478E-04 0.8590E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000435 0.4504E-04 0.1631E-04 0.4188E-04 0.3439E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000436 0.4168E-04 0.1672E-04 0.4166E-04 0.3221E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000437 0.4136E-04 0.1553E-04 0.4139E-04 0.2695E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000438 0.4103E-04 0.1538E-04 0.4108E-04 0.2805E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000439 0.4073E-04 0.1524E-04 0.4084E-04 0.2568E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000440 0.4052E-04 0.1511E-04 0.4070E-04 0.2610E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000441 
0.4043E-04 0.1499E-04 0.4075E-04 0.2447E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000442 0.4055E-04 0.1491E-04 0.4108E-04 0.2429E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000443 0.4095E-04 0.1488E-04 0.4175E-04 0.5465E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000444 0.3863E-04 0.1529E-04 0.3864E-04 0.5055E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000445 0.3825E-04 0.1565E-04 0.3827E-04 0.2968E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000446 0.3801E-04 0.1424E-04 0.3805E-04 0.3121E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000447 0.3772E-04 0.1409E-04 0.3779E-04 0.2593E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000448 0.3749E-04 0.1397E-04 0.3761E-04 0.2900E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000449 0.3735E-04 0.1385E-04 0.3756E-04 0.2495E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000450 0.3736E-04 0.1376E-04 0.3771E-04 0.2697E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000451 0.3762E-04 0.1369E-04 0.3816E-04 0.7320E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000452 0.3838E-04 0.1384E-04 0.3567E-04 0.2923E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000453 0.3549E-04 0.1419E-04 0.3549E-04 0.2743E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000454 0.3523E-04 0.1317E-04 0.3526E-04 0.2286E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000455 0.3495E-04 0.1304E-04 0.3500E-04 0.2386E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000456 0.3470E-04 0.1292E-04 0.3479E-04 0.2178E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000457 0.3452E-04 0.1281E-04 0.3468E-04 0.2218E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000458 0.3445E-04 0.1272E-04 0.3473E-04 0.2075E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000459 0.3455E-04 0.1265E-04 0.3501E-04 0.2061E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000460 0.3490E-04 0.1262E-04 0.3560E-04 0.4667E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000461 0.3291E-04 0.1298E-04 0.3293E-04 0.4315E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000462 0.3259E-04 0.1328E-04 0.3262E-04 0.2525E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000463 0.3239E-04 0.1208E-04 0.3243E-04 0.2652E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000464 0.3215E-04 0.1195E-04 0.3221E-04 0.2203E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000465 0.3195E-04 0.1185E-04 0.3206E-04 0.2462E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000466 0.3183E-04 0.1175E-04 0.3202E-04 0.2118E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000467 0.3185E-04 0.1167E-04 0.3216E-04 0.2287E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000468 0.3207E-04 0.1161E-04 0.3255E-04 0.6244E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000469 0.3273E-04 0.1174E-04 0.3041E-04 0.2486E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000470 0.3026E-04 
0.1204E-04 0.3026E-04 0.2338E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000471 0.3003E-04 0.1117E-04 0.3007E-04 0.1941E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000472 0.2979E-04 0.1106E-04 0.2984E-04 0.2031E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000473 0.2958E-04 0.1096E-04 0.2967E-04 0.1849E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000474 0.2943E-04 0.1087E-04 0.2958E-04 0.1886E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000475 0.2937E-04 0.1079E-04 0.2963E-04 0.1760E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000476 0.2947E-04 0.1073E-04 0.2988E-04 0.1751E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000477 0.2977E-04 0.1070E-04 0.3038E-04 0.3988E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000478 0.2807E-04 0.1101E-04 0.2809E-04 0.3686E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000479 0.2780E-04 0.1127E-04 0.2783E-04 0.2150E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000480 0.2762E-04 0.1024E-04 0.2767E-04 0.2255E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000481 0.2742E-04 0.1014E-04 0.2749E-04 0.1872E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000482 0.2725E-04 0.1005E-04 0.2736E-04 0.2092E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000483 0.2716E-04 0.9964E-05 0.2733E-04 0.1799E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000484 0.2718E-04 0.9895E-05 0.2745E-04 0.1941E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000485 0.2737E-04 0.9848E-05 0.2779E-04 0.5332E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000486 0.2795E-04 0.9958E-05 0.2595E-04 0.2115E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000487 0.2582E-04 0.1022E-04 0.2583E-04 0.1994E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000488 0.2563E-04 0.9473E-05 0.2567E-04 0.1649E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000489 0.2543E-04 0.9379E-05 0.2548E-04 0.1731E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000490 0.2525E-04 0.9295E-05 0.2533E-04 0.1570E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000491 0.2512E-04 0.9215E-05 0.2526E-04 0.1606E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000492 0.2508E-04 0.9147E-05 0.2530E-04 0.1494E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000493 0.2516E-04 0.9096E-05 0.2552E-04 0.1488E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000494 0.2542E-04 0.9079E-05 0.2595E-04 0.3410E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000495 0.2396E-04 0.9341E-05 0.2399E-04 0.3152E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000496 0.2373E-04 0.9563E-05 0.2377E-04 0.1831E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000497 0.2359E-04 0.8683E-05 0.2363E-04 0.1919E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000498 0.2341E-04 0.8596E-05 0.2348E-04 0.1593E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000499 0.2327E-04 0.8521E-05 
0.2337E-04 0.1779E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000500 0.2320E-04 0.8449E-05 0.2335E-04 0.1528E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000501 0.2321E-04 0.8391E-05 0.2346E-04 0.1648E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000502 0.2338E-04 0.8351E-05 0.2375E-04 0.4558E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000503 0.2388E-04 0.8446E-05 0.2217E-04 0.1801E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000504 0.2205E-04 0.8667E-05 0.2207E-04 0.1701E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000505 0.2189E-04 0.8033E-05 0.2193E-04 0.1402E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000506 0.2172E-04 0.7952E-05 0.2177E-04 0.1476E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000507 0.2157E-04 0.7881E-05 0.2165E-04 0.1335E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000508 0.2147E-04 0.7813E-05 0.2159E-04 0.1368E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000509 0.2143E-04 0.7756E-05 0.2163E-04 0.1269E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000510 0.2150E-04 0.7713E-05 0.2182E-04 0.1265E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000511 0.2173E-04 0.7699E-05 0.2220E-04 0.2919E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000512 0.2048E-04 0.7923E-05 0.2051E-04 0.2697E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000513 0.2029E-04 0.8114E-05 0.2032E-04 0.1561E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000514 0.2016E-04 0.7362E-05 0.2021E-04 0.1634E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000515 0.2002E-04 0.7288E-05 0.2008E-04 0.1356E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000516 0.1990E-04 0.7225E-05 0.1999E-04 0.1514E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000517 0.1983E-04 0.7164E-05 0.1997E-04 0.1300E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000518 0.1985E-04 0.7115E-05 0.2007E-04 0.1400E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000519 0.2000E-04 0.7081E-05 0.2032E-04 0.3901E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000520 0.2044E-04 0.7162E-05 0.1896E-04 0.1534E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000521 0.1886E-04 0.7352E-05 0.1888E-04 0.1453E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000522 0.1873E-04 0.6810E-05 0.1877E-04 0.1193E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000523 0.1858E-04 0.6742E-05 0.1863E-04 0.1259E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000524 0.1846E-04 0.6682E-05 0.1853E-04 0.1135E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000525 0.1837E-04 0.6624E-05 0.1848E-04 0.1166E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000526 0.1834E-04 0.6576E-05 0.1852E-04 0.1079E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000527 0.1840E-04 0.6540E-05 0.1868E-04 0.1077E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000528 0.1860E-04 0.6529E-05 0.1901E-04 
0.2500E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000529 0.1753E-04 0.6720E-05 0.1756E-04 0.2309E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000530 0.1736E-04 0.6883E-05 0.1740E-04 0.1331E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000531 0.1726E-04 0.6241E-05 0.1730E-04 0.1392E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000532 0.1713E-04 0.6179E-05 0.1719E-04 0.1155E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000533 0.1704E-04 0.6125E-05 0.1712E-04 0.1289E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000534 0.1698E-04 0.6073E-05 0.1711E-04 0.1106E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000535 0.1700E-04 0.6032E-05 0.1719E-04 0.1191E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000536 0.1713E-04 0.6004E-05 0.1741E-04 0.3343E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000537 0.1751E-04 0.6073E-05 0.1624E-04 0.1307E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000538 0.1615E-04 0.6235E-05 0.1618E-04 0.1242E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000539 0.1604E-04 0.5773E-05 0.1608E-04 0.1015E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000540 0.1592E-04 0.5715E-05 0.1597E-04 0.1076E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000541 0.1581E-04 0.5664E-05 0.1588E-04 0.9664E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000542 0.1574E-04 0.5616E-05 0.1584E-04 0.9949E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000543 0.1571E-04 0.5575E-05 0.1587E-04 0.9179E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000544 0.1577E-04 0.5544E-05 0.1602E-04 0.9173E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000545 0.1595E-04 0.5535E-05 0.1630E-04 0.2143E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000546 0.1502E-04 0.5699E-05 0.1506E-04 0.1980E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000547 0.1488E-04 0.5838E-05 0.1492E-04 0.1136E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000548 0.1479E-04 0.5291E-05 0.1484E-04 0.1188E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000549 0.1469E-04 0.5238E-05 0.1474E-04 0.9850E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000550 0.1460E-04 0.5192E-05 0.1468E-04 0.1099E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000551 0.1456E-04 0.5148E-05 0.1467E-04 0.9419E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000552 0.1458E-04 0.5113E-05 0.1475E-04 0.1013E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000553 0.1469E-04 0.5090E-05 0.1494E-04 0.2869E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000554 0.1502E-04 0.5149E-05 0.1393E-04 0.1115E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000555 0.1386E-04 0.5288E-05 0.1388E-04 0.1063E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000556 0.1376E-04 0.4893E-05 0.1380E-04 0.8653E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000557 0.1365E-04 0.4845E-05 0.1370E-04 0.9199E-07 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000558 0.1356E-04 0.4801E-05 0.1363E-04 0.8234E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000559 0.1350E-04 0.4760E-05 0.1360E-04 0.8498E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000560 0.1348E-04 0.4725E-05 0.1363E-04 0.7814E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000561 0.1354E-04 0.4700E-05 0.1375E-04 0.7821E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000562 0.1369E-04 0.4693E-05 0.1400E-04 0.1840E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000563 0.1289E-04 0.4833E-05 0.1293E-04 0.1700E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000564 0.1277E-04 0.4951E-05 0.1281E-04 0.9704E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000565 0.1270E-04 0.4484E-05 0.1274E-04 0.1014E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000566 0.1261E-04 0.4439E-05 0.1266E-04 0.8407E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000567 0.1254E-04 0.4401E-05 0.1261E-04 0.9371E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000568 0.1251E-04 0.4364E-05 0.1261E-04 0.8028E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000569 0.1252E-04 0.4334E-05 0.1267E-04 0.8627E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000570 0.1262E-04 0.4314E-05 0.1284E-04 0.2466E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000571 0.1291E-04 0.4365E-05 0.1197E-04 0.9520E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000572 0.1190E-04 0.4484E-05 0.1193E-04 0.9103E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000573 0.1182E-04 0.4148E-05 0.1186E-04 0.7382E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000574 0.1173E-04 0.4106E-05 0.1178E-04 0.7875E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000575 0.1166E-04 0.4070E-05 0.1172E-04 0.7021E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000576 0.1160E-04 0.4034E-05 0.1169E-04 0.7266E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000577 0.1159E-04 0.4005E-05 0.1172E-04 0.6658E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000578 0.1164E-04 0.3984E-05 0.1183E-04 0.6675E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000579 0.1177E-04 0.3978E-05 0.1204E-04 0.1581E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000580 0.1109E-04 0.4097E-05 0.1112E-04 0.1461E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000581 0.1098E-04 0.4198E-05 0.1102E-04 0.8295E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000582 0.1092E-04 0.3801E-05 0.1096E-04 0.8670E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000583 0.1085E-04 0.3763E-05 0.1090E-04 0.7183E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000584 0.1079E-04 0.3730E-05 0.1085E-04 0.8002E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000585 0.1076E-04 0.3698E-05 0.1085E-04 0.6849E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000586 0.1077E-04 0.3673E-05 0.1091E-04 0.7353E-07 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 14 0000587 0.1086E-04 0.3657E-05 0.1105E-04 0.2123E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000588 0.1112E-04 0.3701E-05 0.1030E-04 0.8136E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000589 0.1024E-04 0.3801E-05 0.1027E-04 0.7807E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000590 0.1017E-04 0.3515E-05 0.1021E-04 0.6304E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000591 0.1010E-04 0.3480E-05 0.1014E-04 0.6751E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000592 0.1003E-04 0.3449E-05 0.1009E-04 0.5993E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000593 0.9991E-05 0.3419E-05 0.1007E-04 0.6221E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000594 0.9980E-05 0.3395E-05 0.1010E-04 0.5678E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000595 0.1002E-04 0.3376E-05 0.1019E-04 0.5704E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000596 0.1014E-04 0.3372E-05 0.1038E-04 0.1361E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000597 0.9548E-05 0.3474E-05 0.9580E-05 0.1257E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000598 0.9463E-05 0.3560E-05 0.9496E-05 0.7099E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000599 0.9408E-05 0.3221E-05 0.9447E-05 0.7421E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000600 0.9345E-05 0.3189E-05 0.9391E-05 0.6144E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000601 0.9295E-05 0.3161E-05 0.9355E-05 0.6841E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000602 0.9271E-05 0.3134E-05 0.9354E-05 0.5849E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000603 0.9287E-05 0.3113E-05 0.9407E-05 0.6273E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000604 0.9364E-05 0.3099E-05 0.9535E-05 0.1831E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000605 0.9587E-05 0.3137E-05 0.8881E-05 0.6961E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000606 0.8832E-05 0.3223E-05 0.8858E-05 0.6703E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000607 0.8771E-05 0.2979E-05 0.8807E-05 0.5390E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000608 0.8709E-05 0.2949E-05 0.8748E-05 0.5795E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000609 0.8654E-05 0.2923E-05 0.8706E-05 0.5122E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000610 0.8618E-05 0.2898E-05 0.8689E-05 0.5333E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000611 0.8610E-05 0.2877E-05 0.8714E-05 0.4848E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000612 0.8648E-05 0.2862E-05 0.8799E-05 0.4879E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000613 0.8750E-05 0.2858E-05 0.8962E-05 0.1173E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000614 0.8240E-05 0.2945E-05 0.8271E-05 0.1084E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000615 0.8167E-05 0.3018E-05 0.8199E-05 0.6081E-07 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 16 0000616 0.8121E-05 0.2729E-05 0.8157E-05 0.6360E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000617 0.8067E-05 0.2702E-05 0.8110E-05 0.5263E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000618 0.8025E-05 0.2679E-05 0.8081E-05 0.5855E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000619 0.8006E-05 0.2656E-05 0.8081E-05 0.5002E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000620 0.8021E-05 0.2638E-05 0.8128E-05 0.5358E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000621 0.8089E-05 0.2627E-05 0.8240E-05 0.1582E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000622 0.8285E-05 0.2658E-05 0.7672E-05 0.5962E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000623 0.7631E-05 0.2732E-05 0.7655E-05 0.5764E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000624 0.7578E-05 0.2524E-05 0.7612E-05 0.4615E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000625 0.7525E-05 0.2499E-05 0.7562E-05 0.4983E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000626 0.7479E-05 0.2477E-05 0.7527E-05 0.4383E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000627 0.7449E-05 0.2455E-05 0.7513E-05 0.4578E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000628 0.7443E-05 0.2438E-05 0.7536E-05 0.4144E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000629 0.7477E-05 0.2425E-05 0.7612E-05 0.4180E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000630 0.7567E-05 0.2422E-05 0.7754E-05 0.1012E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000631 0.7126E-05 0.2496E-05 0.7155E-05 0.9363E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000632 0.7064E-05 0.2559E-05 0.7094E-05 0.5216E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000633 0.7024E-05 0.2313E-05 0.7058E-05 0.5460E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000634 0.6979E-05 0.2290E-05 0.7018E-05 0.4514E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000635 0.6943E-05 0.2270E-05 0.6994E-05 0.5019E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000636 0.6928E-05 0.2251E-05 0.6996E-05 0.4282E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000637 0.6942E-05 0.2236E-05 0.7038E-05 0.4582E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000638 0.7002E-05 0.2226E-05 0.7136E-05 0.1370E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000639 0.7176E-05 0.2253E-05 0.6642E-05 0.5115E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000640 0.6607E-05 0.2315E-05 0.6630E-05 0.4965E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000641 0.6562E-05 0.2139E-05 0.6593E-05 0.3958E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000642 0.6517E-05 0.2117E-05 0.6551E-05 0.4291E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000643 0.6477E-05 0.2099E-05 0.6522E-05 0.3756E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000644 0.6452E-05 0.2081E-05 0.6511E-05 0.3937E-07 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 16
 0000645  0.6448E-05  0.2066E-05  0.6532E-05  0.3547E-07  0.0000E+00
 Linear solve converged due to CONVERGED_RTOL iterations 16
 0000646  0.6479E-05  0.2055E-05  0.6598E-05  0.3585E-07  0.0000E+00
 Linear solve converged due to CONVERGED_RTOL iterations 15

[Time steps 0000647 through 0001225 (579 steps) repeat the same pattern: every linear solve converges due to CONVERGED_RTOL in 15 to 19 iterations, the first residual column decays steadily from about 0.66E-05 to 0.14E-06, and the last column remains 0.0000E+00 throughout.]

 0001226  0.1422E-06  0.7265E-08  0.1433E-06  0.4755E-09  0.0000E+00
 Linear solve converged due to
CONVERGED_RTOL iterations 19 0001227 0.1416E-06 0.7213E-08 0.1434E-06 0.4677E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001228 0.1413E-06 0.7177E-08 0.1442E-06 0.4687E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001229 0.1413E-06 0.7166E-08 0.1460E-06 0.4584E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001230 0.1420E-06 0.7197E-08 0.1492E-06 0.1528E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001231 0.1460E-06 0.7372E-08 0.1372E-06 0.5775E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001232 0.1372E-06 0.6848E-08 0.1376E-06 0.4714E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001233 0.1364E-06 0.6788E-08 0.1370E-06 0.4580E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001234 0.1358E-06 0.6732E-08 0.1366E-06 0.4615E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001235 0.1352E-06 0.6681E-08 0.1365E-06 0.4471E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001236 0.1348E-06 0.6644E-08 0.1369E-06 0.4515E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001237 0.1347E-06 0.6627E-08 0.1381E-06 0.4361E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001238 0.1351E-06 0.6644E-08 0.1405E-06 0.1351E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001239 0.1381E-06 0.6773E-08 0.1313E-06 0.1215E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001240 0.1415E-06 0.6936E-08 0.1299E-06 0.6191E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001241 0.1304E-06 0.6294E-08 0.1308E-06 0.4366E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001242 0.1297E-06 0.6240E-08 0.1304E-06 0.4322E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001243 0.1291E-06 0.6191E-08 0.1301E-06 0.4319E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001244 0.1286E-06 0.6150E-08 0.1303E-06 0.4243E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001245 0.1284E-06 0.6126E-08 0.1311E-06 0.4254E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001246 0.1284E-06 0.6127E-08 0.1329E-06 0.4165E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001247 0.1292E-06 0.6168E-08 0.1359E-06 0.1414E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001248 0.1330E-06 0.6340E-08 0.1245E-06 0.5383E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001249 0.1246E-06 0.5836E-08 0.1249E-06 0.4292E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001250 0.1239E-06 0.5785E-08 0.1244E-06 0.4162E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001251 0.1233E-06 0.5738E-08 0.1241E-06 0.4195E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001252 0.1228E-06 0.5698E-08 0.1240E-06 0.4057E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001253 0.1225E-06 0.5671E-08 0.1244E-06 0.4104E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001254 0.1224E-06 0.5662E-08 0.1256E-06 0.3955E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001255 0.1229E-06 0.5687E-08 0.1278E-06 0.1248E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 18 0001256 0.1258E-06 0.5814E-08 0.1191E-06 0.1601E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001257 0.1289E-06 0.5383E-08 0.1179E-06 0.6594E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001258 0.1184E-06 0.5372E-08 0.1187E-06 0.7780E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001259 0.1178E-06 0.5332E-08 0.1184E-06 0.4965E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001260 0.1172E-06 0.5294E-08 0.1182E-06 0.6860E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001261 0.1168E-06 0.5276E-08 0.1184E-06 0.4810E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001262 0.1166E-06 0.5281E-08 0.1192E-06 0.6162E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001263 0.1167E-06 0.5330E-08 0.1209E-06 0.1064E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001264 0.1187E-06 0.5416E-08 0.1140E-06 0.1082E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001265 0.1211E-06 0.4998E-08 0.1129E-06 0.6253E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001266 0.1131E-06 0.4976E-08 0.1134E-06 0.6756E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001267 0.1125E-06 0.4937E-08 0.1130E-06 0.5149E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001268 0.1120E-06 0.4900E-08 0.1127E-06 0.6072E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001269 0.1115E-06 0.4881E-08 0.1128E-06 0.4889E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001270 0.1112E-06 0.4882E-08 0.1133E-06 0.5547E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001271 0.1111E-06 0.4928E-08 0.1146E-06 0.4741E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001272 0.1115E-06 0.5023E-08 0.1169E-06 0.1025E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001273 0.1143E-06 0.4640E-08 0.1082E-06 0.5265E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001274 0.1081E-06 0.4620E-08 0.1083E-06 0.5773E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001275 0.1075E-06 0.4582E-08 0.1079E-06 0.4521E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001276 0.1070E-06 0.4547E-08 0.1076E-06 0.5265E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001277 0.1065E-06 0.4524E-08 0.1075E-06 0.4348E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001278 0.1061E-06 0.4519E-08 0.1077E-06 0.4870E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001279 0.1060E-06 0.4545E-08 0.1086E-06 0.4250E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001280 0.1062E-06 0.4615E-08 0.1103E-06 0.8879E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001281 0.1083E-06 0.4309E-08 0.1035E-06 0.7628E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001282 0.1107E-06 0.4274E-08 0.1025E-06 0.7888E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001283 0.1027E-06 0.4261E-08 0.1030E-06 0.8357E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001284 0.1022E-06 0.4224E-08 0.1027E-06 0.6605E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 
0001285 0.1017E-06 0.4194E-08 0.1025E-06 0.7378E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001286 0.1013E-06 0.4180E-08 0.1026E-06 0.6042E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001287 0.1010E-06 0.4194E-08 0.1032E-06 0.6574E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001288 0.1010E-06 0.4251E-08 0.1045E-06 0.5618E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001289 0.1015E-06 0.4363E-08 0.1068E-06 0.9656E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001290 0.1043E-06 0.3964E-08 0.9816E-07 0.5060E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001291 0.9817E-07 0.3947E-08 0.9838E-07 0.5387E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001292 0.9761E-07 0.3917E-08 0.9802E-07 0.4281E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001293 0.9714E-07 0.3889E-08 0.9772E-07 0.4910E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001294 0.9671E-07 0.3875E-08 0.9767E-07 0.4099E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001295 0.9642E-07 0.3879E-08 0.9797E-07 0.4551E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001296 0.9631E-07 0.3918E-08 0.9888E-07 0.4021E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001297 0.9657E-07 0.3999E-08 0.1006E-06 0.8270E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001298 0.9868E-07 0.3682E-08 0.9391E-07 0.7076E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001299 0.1010E-06 0.3654E-08 0.9298E-07 0.7398E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001300 0.9331E-07 0.3642E-08 0.9356E-07 0.7740E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001301 0.9279E-07 0.3610E-08 0.9328E-07 0.6136E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001302 0.9237E-07 0.3585E-08 0.9311E-07 0.6824E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001303 0.9200E-07 0.3576E-08 0.9326E-07 0.5598E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001304 0.9179E-07 0.3592E-08 0.9387E-07 0.6072E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001305 0.9181E-07 0.3651E-08 0.9521E-07 0.1126E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001306 0.9227E-07 0.3390E-08 0.9742E-07 0.8317E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001307 0.9497E-07 0.3406E-08 0.8911E-07 0.6734E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001308 0.8917E-07 0.3399E-08 0.8936E-07 0.5798E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001309 0.8865E-07 0.3368E-08 0.8904E-07 0.5638E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001310 0.8824E-07 0.3336E-08 0.8878E-07 0.5221E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001311 0.8785E-07 0.3307E-08 0.8876E-07 0.5070E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001312 0.8760E-07 0.3282E-08 0.8907E-07 0.4772E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001313 0.8753E-07 0.3259E-08 0.8996E-07 0.4653E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001314 
0.8780E-07 0.3244E-08 0.9161E-07 0.1272E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001315 0.8981E-07 0.3256E-08 0.8527E-07 0.1187E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001316 0.9200E-07 0.3262E-08 0.8441E-07 0.5084E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001317 0.8476E-07 0.3327E-08 0.8498E-07 0.6346E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001318 0.8429E-07 0.3076E-08 0.8474E-07 0.5729E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001319 0.8391E-07 0.3059E-08 0.8460E-07 0.5693E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001320 0.8358E-07 0.3065E-08 0.8476E-07 0.5211E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001321 0.8341E-07 0.3107E-08 0.8534E-07 0.5234E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001322 0.8345E-07 0.3203E-08 0.8660E-07 0.7651E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001323 0.8487E-07 0.2931E-08 0.8161E-07 0.7598E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001324 0.8649E-07 0.2918E-08 0.8086E-07 0.6245E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001325 0.8099E-07 0.2914E-08 0.8118E-07 0.6319E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001326 0.8053E-07 0.2902E-08 0.8090E-07 0.5298E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001327 0.8015E-07 0.2904E-08 0.8071E-07 0.5616E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001328 0.7980E-07 0.2935E-08 0.8075E-07 0.4867E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001329 0.7956E-07 0.3005E-08 0.8115E-07 0.1098E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001330 0.7949E-07 0.2744E-08 0.8211E-07 0.1053E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001331 0.7972E-07 0.2746E-08 0.8380E-07 0.5217E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001332 0.8168E-07 0.2794E-08 0.7744E-07 0.4165E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001333 0.8372E-07 0.2838E-08 0.7665E-07 0.4047E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001334 0.7699E-07 0.2660E-08 0.7721E-07 0.3532E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001335 0.7656E-07 0.2645E-08 0.7701E-07 0.2965E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001336 0.7622E-07 0.2638E-08 0.7692E-07 0.3322E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001337 0.7592E-07 0.2650E-08 0.7713E-07 0.2929E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001338 0.7575E-07 0.2688E-08 0.7777E-07 0.3278E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001339 0.7579E-07 0.2764E-08 0.7906E-07 0.6168E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001340 0.7717E-07 0.2513E-08 0.7411E-07 0.6039E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001341 0.7869E-07 0.2497E-08 0.7344E-07 0.5493E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001342 0.7357E-07 0.2490E-08 0.7375E-07 0.5333E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001343 0.7315E-07 
0.2474E-08 0.7351E-07 0.4633E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001344 0.7280E-07 0.2470E-08 0.7337E-07 0.4766E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001345 0.7248E-07 0.2486E-08 0.7345E-07 0.4255E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001346 0.7227E-07 0.2532E-08 0.7389E-07 0.8872E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001347 0.7220E-07 0.2351E-08 0.7487E-07 0.8588E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001348 0.7241E-07 0.2351E-08 0.7654E-07 0.5384E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001349 0.7428E-07 0.2390E-08 0.7033E-07 0.4417E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001350 0.7618E-07 0.2429E-08 0.6961E-07 0.7897E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001351 0.6994E-07 0.2435E-08 0.7015E-07 0.8023E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001352 0.6955E-07 0.2414E-08 0.6998E-07 0.6075E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001353 0.6923E-07 0.2405E-08 0.6993E-07 0.2497E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001354 0.6896E-07 0.2217E-08 0.7017E-07 0.2830E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001355 0.6880E-07 0.2192E-08 0.7082E-07 0.2390E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001356 0.6883E-07 0.2168E-08 0.7208E-07 0.7006E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001357 0.7014E-07 0.2169E-08 0.6732E-07 0.7408E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001358 0.7153E-07 0.2184E-08 0.6670E-07 0.5672E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001359 0.6683E-07 0.2187E-08 0.6700E-07 0.4814E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001360 0.6645E-07 0.2177E-08 0.6679E-07 0.4627E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001361 0.6613E-07 0.2172E-08 0.6668E-07 0.4158E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001362 0.6584E-07 0.2177E-08 0.6678E-07 0.3952E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001363 0.6564E-07 0.2192E-08 0.6722E-07 0.4044E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001364 0.6557E-07 0.2019E-08 0.6818E-07 0.4182E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001365 0.6577E-07 0.1995E-08 0.6977E-07 0.7044E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001366 0.6752E-07 0.1998E-08 0.6389E-07 0.6438E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001367 0.6925E-07 0.2000E-08 0.6323E-07 0.5283E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001368 0.6354E-07 0.1993E-08 0.6373E-07 0.5110E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001369 0.6318E-07 0.1973E-08 0.6359E-07 0.4221E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001370 0.6289E-07 0.1954E-08 0.6356E-07 0.4454E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001371 0.6264E-07 0.1937E-08 0.6380E-07 0.3786E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001372 0.6250E-07 0.1923E-08 
0.6443E-07 0.3965E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001373 0.6252E-07 0.1913E-08 0.6563E-07 0.8216E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001374 0.6373E-07 0.1900E-08 0.6116E-07 0.8240E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001375 0.6501E-07 0.1871E-08 0.6060E-07 0.3738E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001376 0.6072E-07 0.1870E-08 0.6087E-07 0.3579E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001377 0.6037E-07 0.1884E-08 0.6069E-07 0.3176E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001378 0.6008E-07 0.1917E-08 0.6060E-07 0.5927E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001379 0.5982E-07 0.1777E-08 0.6071E-07 0.5862E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001380 0.5964E-07 0.1775E-08 0.6113E-07 0.5248E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001381 0.5957E-07 0.1803E-08 0.6203E-07 0.5327E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001382 0.6044E-07 0.1848E-08 0.5851E-07 0.6660E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001383 0.6140E-07 0.1717E-08 0.5801E-07 0.2618E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001384 0.5802E-07 0.1711E-08 0.5815E-07 0.3703E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001385 0.5769E-07 0.1702E-08 0.5795E-07 0.2271E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001386 0.5740E-07 0.1697E-08 0.5781E-07 0.3303E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001387 0.5714E-07 0.1706E-08 0.5784E-07 0.2268E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001388 0.5693E-07 0.1730E-08 0.5812E-07 0.3077E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001389 0.5682E-07 0.1782E-08 0.5881E-07 0.6351E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001390 0.5689E-07 0.1604E-08 0.6003E-07 0.4849E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001391 0.5816E-07 0.1610E-08 0.5553E-07 0.3690E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001392 0.5945E-07 0.1618E-08 0.5499E-07 0.4363E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001393 0.5516E-07 0.1614E-08 0.5531E-07 0.5559E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001394 0.5485E-07 0.1596E-08 0.5517E-07 0.3639E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001395 0.5459E-07 0.1582E-08 0.5511E-07 0.4762E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001396 0.5435E-07 0.1571E-08 0.5526E-07 0.3324E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001397 0.5420E-07 0.1567E-08 0.5571E-07 0.4138E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001398 0.5417E-07 0.1572E-08 0.5664E-07 0.6230E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001399 0.5506E-07 0.1569E-08 0.5313E-07 0.6390E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001400 0.5602E-07 0.1553E-08 0.5266E-07 0.4614E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001401 0.5271E-07 0.1550E-08 0.5284E-07 
0.4279E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001402 0.5241E-07 0.1555E-08 0.5266E-07 0.3543E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001403 0.5216E-07 0.1444E-08 0.5256E-07 0.3513E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001404 0.5192E-07 0.1438E-08 0.5262E-07 0.3215E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001405 0.5174E-07 0.1445E-08 0.5292E-07 0.3188E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001406 0.5166E-07 0.1474E-08 0.5362E-07 0.3019E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001407 0.5175E-07 0.1533E-08 0.5482E-07 0.7187E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001408 0.5301E-07 0.1389E-08 0.5042E-07 0.6672E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001409 0.5426E-07 0.1418E-08 0.4992E-07 0.4469E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001410 0.5011E-07 0.1429E-08 0.5026E-07 0.4609E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001411 0.4983E-07 0.1431E-08 0.5014E-07 0.3566E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001412 0.4960E-07 0.1448E-08 0.5010E-07 0.4693E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001413 0.4939E-07 0.1325E-08 0.5027E-07 0.4529E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001414 0.4926E-07 0.1326E-08 0.5073E-07 0.4037E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001415 0.4924E-07 0.1354E-08 0.5162E-07 0.2868E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001416 0.5012E-07 0.1390E-08 0.4826E-07 0.4157E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001417 0.5105E-07 0.1280E-08 0.4783E-07 0.2369E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001418 0.4789E-07 0.1273E-08 0.4801E-07 0.2907E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001419 0.4762E-07 0.1265E-08 0.4786E-07 0.1972E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001420 0.4739E-07 0.1260E-08 0.4778E-07 0.2599E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001421 0.4717E-07 0.1267E-08 0.4785E-07 0.1927E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001422 0.4702E-07 0.1285E-08 0.4816E-07 0.2430E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001423 0.4695E-07 0.1326E-08 0.4885E-07 0.4957E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001424 0.4705E-07 0.1198E-08 0.4999E-07 0.4138E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001425 0.4826E-07 0.1206E-08 0.4580E-07 0.3424E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001426 0.4945E-07 0.1219E-08 0.4534E-07 0.3788E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001427 0.4553E-07 0.1214E-08 0.4567E-07 0.4547E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001428 0.4528E-07 0.1200E-08 0.4557E-07 0.3071E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001429 0.4507E-07 0.1190E-08 0.4555E-07 0.3869E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001430 0.4488E-07 0.1183E-08 0.4572E-07 0.2763E-09 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001431 0.4477E-07 0.1182E-08 0.4617E-07 0.3337E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001432 0.4476E-07 0.1190E-08 0.4703E-07 0.5344E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001433 0.4560E-07 0.1187E-08 0.4384E-07 0.5284E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001434 0.4648E-07 0.1173E-08 0.4344E-07 0.3883E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001435 0.4351E-07 0.1169E-08 0.4362E-07 0.3508E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001436 0.4327E-07 0.1171E-08 0.4349E-07 0.2882E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001437 0.4306E-07 0.1083E-08 0.4342E-07 0.2896E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001438 0.4287E-07 0.1080E-08 0.4350E-07 0.2618E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001439 0.4273E-07 0.1088E-08 0.4381E-07 0.2632E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001440 0.4267E-07 0.1116E-08 0.4445E-07 0.4606E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001441 0.4328E-07 0.1050E-08 0.4194E-07 0.4610E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001442 0.4394E-07 0.1065E-08 0.4158E-07 0.3681E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001443 0.4502E-07 0.1113E-08 0.4119E-07 0.3120E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001444 0.4138E-07 0.1024E-08 0.4151E-07 0.3054E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001445 0.4113E-07 0.1028E-08 0.4144E-07 0.2532E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001446 0.4095E-07 0.1046E-08 0.4145E-07 0.2812E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001447 0.4077E-07 0.1090E-08 0.4166E-07 0.5373E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001448 0.4066E-07 0.9867E-09 0.4215E-07 0.5609E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001449 0.4065E-07 0.1007E-08 0.4303E-07 0.4033E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001450 0.4148E-07 0.1040E-08 0.3982E-07 0.5506E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001451 0.4229E-07 0.9688E-09 0.3946E-07 0.2277E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001452 0.3954E-07 0.9665E-09 0.3964E-07 0.2601E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001453 0.3931E-07 0.9643E-09 0.3954E-07 0.1893E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001454 0.3912E-07 0.9677E-09 0.3949E-07 0.2325E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001455 0.3894E-07 0.9836E-09 0.3960E-07 0.1829E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001456 0.3881E-07 0.1013E-08 0.3993E-07 0.4351E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001457 0.3876E-07 0.9143E-09 0.4059E-07 0.2402E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001458 0.3934E-07 0.9177E-09 0.3810E-07 0.2372E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001459 0.3996E-07 0.9258E-09 0.3778E-07 0.1898E-09 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 19 0001460 0.4096E-07 0.9493E-09 0.3742E-07 0.2491E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001461 0.3759E-07 0.8897E-09 0.3773E-07 0.2087E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001462 0.3737E-07 0.8911E-09 0.3768E-07 0.1932E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001463 0.3720E-07 0.9025E-09 0.3771E-07 0.1967E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001464 0.3704E-07 0.9305E-09 0.3795E-07 0.4059E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001465 0.3693E-07 0.8568E-09 0.3845E-07 0.4067E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001466 0.3692E-07 0.8666E-09 0.3932E-07 0.3262E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001467 0.3771E-07 0.8868E-09 0.3618E-07 0.3277E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001468 0.3847E-07 0.9167E-09 0.3585E-07 0.1684E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001469 0.3592E-07 0.8330E-09 0.3603E-07 0.1492E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001470 0.3572E-07 0.8298E-09 0.3594E-07 0.1335E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001471 0.3554E-07 0.8308E-09 0.3592E-07 0.1430E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001472 0.3538E-07 0.8394E-09 0.3603E-07 0.1336E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001473 0.3526E-07 0.8585E-09 0.3637E-07 0.3159E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001474 0.3521E-07 0.7947E-09 0.3702E-07 0.3268E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001475 0.3577E-07 0.7990E-09 0.3462E-07 0.2971E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001476 0.3635E-07 0.8133E-09 0.3432E-07 0.2490E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001477 0.3433E-07 0.8152E-09 0.3442E-07 0.2815E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001478 0.3414E-07 0.8100E-09 0.3431E-07 0.2052E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001479 0.3396E-07 0.8100E-09 0.3425E-07 0.2355E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001480 0.3380E-07 0.8143E-09 0.3430E-07 0.2066E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001481 0.3367E-07 0.7530E-09 0.3453E-07 0.2197E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001482 0.3360E-07 0.7501E-09 0.3502E-07 0.1796E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001483 0.3362E-07 0.7549E-09 0.3585E-07 0.2946E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001484 0.3442E-07 0.7674E-09 0.3286E-07 0.2768E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001485 0.3519E-07 0.7833E-09 0.3254E-07 0.1565E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001486 0.3264E-07 0.7266E-09 0.3274E-07 0.1435E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001487 0.3246E-07 0.7243E-09 0.3267E-07 0.1186E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001488 0.3230E-07 0.7254E-09 0.3265E-07 
0.1355E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001489 0.3216E-07 0.7342E-09 0.3278E-07 0.1191E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001490 0.3206E-07 0.7520E-09 0.3310E-07 0.2927E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001491 0.3204E-07 0.6940E-09 0.3372E-07 0.2656E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001492 0.3260E-07 0.6976E-09 0.3144E-07 0.2424E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001493 0.3318E-07 0.7078E-09 0.3116E-07 0.2286E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001494 0.3120E-07 0.7066E-09 0.3127E-07 0.2489E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001495 0.3102E-07 0.7005E-09 0.3118E-07 0.1889E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001496 0.3087E-07 0.6978E-09 0.3113E-07 0.2120E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001497 0.3072E-07 0.6983E-09 0.3119E-07 0.1672E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001498 0.3061E-07 0.7043E-09 0.3141E-07 0.2083E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001499 0.3056E-07 0.6523E-09 0.3188E-07 0.1800E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001500 0.3096E-07 0.6532E-09 0.3007E-07 0.2041E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001501 0.3140E-07 0.6589E-09 0.2982E-07 0.1465E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001502 0.3213E-07 0.6754E-09 0.2955E-07 0.2644E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001503 0.2966E-07 0.6745E-09 0.2976E-07 0.2876E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001504 0.2949E-07 0.6649E-09 0.2971E-07 0.2084E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001505 0.2935E-07 0.6605E-09 0.2972E-07 0.2382E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001506 0.2922E-07 0.6570E-09 0.2988E-07 0.1810E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001507 0.2913E-07 0.6606E-09 0.3023E-07 0.1794E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001508 0.2911E-07 0.6088E-09 0.3088E-07 0.2102E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001509 0.2968E-07 0.6114E-09 0.2856E-07 0.2460E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001510 0.3024E-07 0.6226E-09 0.2831E-07 0.2187E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001511 0.2835E-07 0.6206E-09 0.2842E-07 0.1869E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001512 0.2818E-07 0.6171E-09 0.2835E-07 0.1761E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001513 0.2804E-07 0.6150E-09 0.2832E-07 0.1618E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001514 0.2791E-07 0.6170E-09 0.2840E-07 0.1514E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001515 0.2781E-07 0.6219E-09 0.2864E-07 0.1700E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001516 0.2776E-07 0.5721E-09 0.2913E-07 0.1942E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001517 0.2817E-07 0.5722E-09 0.2732E-07 
0.1999E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001518 0.2859E-07 0.5770E-09 0.2709E-07 0.1549E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001519 0.2928E-07 0.5933E-09 0.2684E-07 0.2372E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001520 0.2695E-07 0.5925E-09 0.2705E-07 0.2528E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001521 0.2679E-07 0.5848E-09 0.2702E-07 0.1850E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001522 0.2667E-07 0.5821E-09 0.2704E-07 0.2094E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001523 0.2655E-07 0.5812E-09 0.2722E-07 0.1619E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001524 0.2647E-07 0.5391E-09 0.2758E-07 0.1575E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001525 0.2644E-07 0.5389E-09 0.2822E-07 0.2209E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001526 0.2700E-07 0.5436E-09 0.2595E-07 0.2190E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001527 0.2753E-07 0.5548E-09 0.2571E-07 0.2042E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001528 0.2576E-07 0.5528E-09 0.2583E-07 0.1924E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001529 0.2561E-07 0.5478E-09 0.2577E-07 0.1622E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001530 0.2548E-07 0.5450E-09 0.2576E-07 0.1611E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001531 0.2536E-07 0.5442E-09 0.2584E-07 0.1380E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001532 0.2527E-07 0.5461E-09 0.2609E-07 0.1136E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001533 0.2523E-07 0.5031E-09 0.2656E-07 0.2079E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001534 0.2562E-07 0.5037E-09 0.2482E-07 0.2234E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001535 0.2602E-07 0.5125E-09 0.2461E-07 0.1661E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001536 0.2666E-07 0.5352E-09 0.2439E-07 0.1396E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001537 0.2449E-07 0.4907E-09 0.2459E-07 0.1330E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001538 0.2435E-07 0.4909E-09 0.2456E-07 0.1032E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001539 0.2423E-07 0.4950E-09 0.2460E-07 0.1223E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001540 0.2412E-07 0.5080E-09 0.2477E-07 0.2013E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001541 0.2405E-07 0.4741E-09 0.2513E-07 0.2221E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001542 0.2403E-07 0.4790E-09 0.2573E-07 0.2345E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001543 0.2456E-07 0.4899E-09 0.2358E-07 0.2380E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001544 0.2506E-07 0.5134E-09 0.2336E-07 0.1124E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001545 0.2341E-07 0.4624E-09 0.2348E-07 0.9293E-10 0.0000E+00 Linear 
solve converged due to CONVERGED_RTOL iterations 21 0001546 0.2327E-07 0.4611E-09 0.2342E-07 0.8935E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001547 0.2316E-07 0.4629E-09 0.2342E-07 0.9010E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001548 0.2305E-07 0.4694E-09 0.2351E-07 0.8917E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001549 0.2297E-07 0.4828E-09 0.2374E-07 0.1933E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001550 0.2293E-07 0.4431E-09 0.2419E-07 0.2235E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001551 0.2330E-07 0.4471E-09 0.2256E-07 0.2029E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001552 0.2368E-07 0.4605E-09 0.2236E-07 0.1581E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001553 0.2237E-07 0.4620E-09 0.2243E-07 0.1793E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001554 0.2224E-07 0.4590E-09 0.2236E-07 0.1273E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 22 0001555 0.2213E-07 0.4601E-09 0.2233E-07 0.1277E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 22 0001556 0.2202E-07 0.4251E-09 0.2237E-07 0.1171E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001557 0.2194E-07 0.4239E-09 0.2253E-07 0.1142E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001558 0.2188E-07 0.4279E-09 0.2287E-07 0.1331E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001559 0.2215E-07 0.4341E-09 0.2157E-07 0.1503E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001560 0.2244E-07 0.4442E-09 0.2140E-07 0.1732E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001561 0.2293E-07 0.4121E-09 0.2121E-07 0.1180E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001562 0.2127E-07 0.4085E-09 0.2134E-07 0.1396E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001563 0.2115E-07 0.4066E-09 0.2131E-07 0.9369E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001564 0.2104E-07 0.4076E-09 0.2131E-07 0.1238E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001565 0.2094E-07 0.4143E-09 0.2143E-07 0.9194E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001566 0.2088E-07 0.4278E-09 0.2168E-07 0.2074E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001567 0.2085E-07 0.3923E-09 0.2215E-07 0.1510E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001568 0.2124E-07 0.3954E-09 0.2049E-07 0.1589E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001569 0.2162E-07 0.4047E-09 0.2031E-07 0.1546E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 24 0001570 0.2033E-07 0.4030E-09 0.2038E-07 0.1521E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001571 0.2021E-07 0.3995E-09 0.2033E-07 0.1251E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001572 0.2011E-07 0.3980E-09 0.2031E-07 0.1293E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001573 0.2001E-07 0.3990E-09 0.2037E-07 0.1026E-09 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 20 0001574 0.1994E-07 0.3721E-09 0.2054E-07 0.1145E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001575 0.1989E-07 0.3715E-09 0.2089E-07 0.1468E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001576 0.2017E-07 0.3735E-09 0.1960E-07 0.1346E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001577 0.2046E-07 0.3792E-09 0.1943E-07 0.1184E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001578 0.2094E-07 0.3939E-09 0.1926E-07 0.1065E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001579 0.1933E-07 0.3608E-09 0.1940E-07 0.9467E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001580 0.1921E-07 0.3615E-09 0.1937E-07 0.8095E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001581 0.1912E-07 0.3658E-09 0.1939E-07 0.8947E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001582 0.1903E-07 0.3768E-09 0.1952E-07 0.1608E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001583 0.1897E-07 0.3490E-09 0.1978E-07 0.1681E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001584 0.1895E-07 0.3536E-09 0.2023E-07 0.1701E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001585 0.1934E-07 0.3613E-09 0.1861E-07 0.1655E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001586 0.1971E-07 0.3766E-09 0.1844E-07 0.8575E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001587 0.1847E-07 0.3406E-09 0.1853E-07 0.7400E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001588 0.1837E-07 0.3399E-09 0.1848E-07 0.6935E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001589 0.1827E-07 0.3416E-09 0.1847E-07 0.7180E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001590 0.1819E-07 0.3471E-09 0.1853E-07 0.6967E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001591 0.1812E-07 0.3578E-09 0.1871E-07 0.1510E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001592 0.1809E-07 0.3274E-09 0.1905E-07 0.1630E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001593 0.1837E-07 0.3302E-09 0.1780E-07 0.1510E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001594 0.1864E-07 0.3400E-09 0.1765E-07 0.1312E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001595 0.1910E-07 0.3623E-09 0.1749E-07 0.1166E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001596 0.1756E-07 0.3204E-09 0.1763E-07 0.9429E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001597 0.1746E-07 0.3224E-09 0.1761E-07 0.9123E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001598 0.1738E-07 0.3298E-09 0.1764E-07 0.9071E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001599 0.1730E-07 0.3446E-09 0.1776E-07 0.1817E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001600 0.1725E-07 0.3103E-09 0.1802E-07 0.1796E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001601 0.1723E-07 0.3183E-09 0.1845E-07 0.1704E-09 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 19 0001602 0.1761E-07 0.3296E-09 0.1691E-07 0.2158E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001603 0.1796E-07 0.3078E-09 0.1676E-07 0.9296E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001604 0.1679E-07 0.3068E-09 0.1684E-07 0.1048E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001605 0.1669E-07 0.3071E-09 0.1680E-07 0.7661E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001606 0.1661E-07 0.3099E-09 0.1679E-07 0.9424E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001607 0.1653E-07 0.3179E-09 0.1686E-07 0.1284E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001608 0.1647E-07 0.2927E-09 0.1703E-07 0.1430E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001609 0.1644E-07 0.2962E-09 0.1735E-07 0.1163E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001610 0.1670E-07 0.3015E-09 0.1618E-07 0.1120E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001611 0.1697E-07 0.3111E-09 0.1604E-07 0.1460E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001612 0.1739E-07 0.2886E-09 0.1589E-07 0.1013E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 22 0001613 0.1596E-07 0.2852E-09 0.1603E-07 0.1151E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001614 0.1587E-07 0.2849E-09 0.1601E-07 0.7881E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001615 0.1579E-07 0.2880E-09 0.1604E-07 0.1023E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001616 0.1572E-07 0.2975E-09 0.1616E-07 0.1409E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001617 0.1567E-07 0.2754E-09 0.1640E-07 0.1071E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001618 0.1585E-07 0.2771E-09 0.1548E-07 0.8826E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001619 0.1602E-07 0.2803E-09 0.1535E-07 0.8731E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001620 0.1634E-07 0.2912E-09 0.1523E-07 0.9633E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001621 0.1525E-07 0.2692E-09 0.1530E-07 0.7097E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001622 0.1516E-07 0.2706E-09 0.1528E-07 0.7930E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001623 0.1509E-07 0.2768E-09 0.1529E-07 0.7018E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001624 0.1501E-07 0.2882E-09 0.1537E-07 0.1533E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001625 0.1496E-07 0.2611E-09 0.1555E-07 0.1426E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001626 0.1493E-07 0.2674E-09 0.1589E-07 0.1280E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001627 0.1519E-07 0.2750E-09 0.1470E-07 0.1544E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001628 0.1544E-07 0.2577E-09 0.1457E-07 0.7125E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001629 0.1458E-07 
0.2569E-09 0.1462E-07 0.8636E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001630 0.1449E-07 0.2570E-09 0.1458E-07 0.6106E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001631 0.1442E-07 0.2591E-09 0.1457E-07 0.7809E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001632 0.1435E-07 0.2654E-09 0.1461E-07 0.1039E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001633 0.1429E-07 0.2465E-09 0.1473E-07 0.1189E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001634 0.1425E-07 0.2492E-09 0.1498E-07 0.9484E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001635 0.1444E-07 0.2529E-09 0.1406E-07 0.9285E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001636 0.1463E-07 0.2599E-09 0.1394E-07 0.1184E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001637 0.1495E-07 0.2425E-09 0.1382E-07 0.8045E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001638 0.1386E-07 0.2402E-09 0.1391E-07 0.9416E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 28 0001639 0.1378E-07 0.2399E-09 0.1390E-07 0.6471E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001640 0.1371E-07 0.2423E-09 0.1391E-07 0.8408E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001641 0.1365E-07 0.2495E-09 0.1400E-07 0.1144E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001642 0.1360E-07 0.2323E-09 0.1419E-07 0.1260E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001643 0.1358E-07 0.2370E-09 0.1451E-07 0.1077E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001644 0.1385E-07 0.2419E-09 0.1335E-07 0.1073E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001645 0.1409E-07 0.2516E-09 0.1323E-07 0.6072E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001646 0.1325E-07 0.2269E-09 0.1329E-07 0.5119E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 33 0001647 0.1317E-07 0.2266E-09 0.1325E-07 0.4988E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001648 0.1310E-07 0.2281E-09 0.1325E-07 0.5009E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001649 0.1304E-07 0.2321E-09 0.1329E-07 0.5019E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001650 0.1299E-07 0.2398E-09 0.1342E-07 0.1037E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001651 0.1296E-07 0.2188E-09 0.1366E-07 0.1108E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001652 0.1315E-07 0.2205E-09 0.1277E-07 0.1012E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001653 0.1334E-07 0.2266E-09 0.1266E-07 0.8936E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001654 0.1366E-07 0.2413E-09 0.1255E-07 0.8178E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001655 0.1260E-07 0.2143E-09 0.1265E-07 0.6578E-10 0.0000E+00 Linear solve did not converge due to 
DIVERGED_INDEFINITE_PC iterations 109 0001656 0.1252E-07 0.2158E-09 0.1263E-07 0.6462E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001657 0.1246E-07 0.2213E-09 0.1265E-07 0.6375E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001658 0.1240E-07 0.2318E-09 0.1274E-07 0.1268E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001659 0.1236E-07 0.2081E-09 0.1292E-07 0.1245E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001660 0.1235E-07 0.2142E-09 0.1324E-07 0.1191E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001661 0.1261E-07 0.2217E-09 0.1213E-07 0.1486E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001662 0.1285E-07 0.2067E-09 0.1202E-07 0.6487E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001663 0.1204E-07 0.2060E-09 0.1208E-07 0.7328E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001664 0.1197E-07 0.2063E-09 0.1205E-07 0.5373E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001665 0.1191E-07 0.2085E-09 0.1204E-07 0.6613E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001666 0.1185E-07 0.2143E-09 0.1209E-07 0.8977E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001667 0.1181E-07 0.1968E-09 0.1221E-07 0.1001E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001668 0.1178E-07 0.1996E-09 0.1244E-07 0.8278E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001669 0.1197E-07 0.2033E-09 0.1160E-07 0.7959E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001670 0.1215E-07 0.2101E-09 0.1151E-07 0.1026E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001671 0.1245E-07 0.1946E-09 0.1140E-07 0.7119E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001672 0.1145E-07 0.1922E-09 0.1149E-07 0.8065E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 24 0001673 0.1138E-07 0.1922E-09 0.1148E-07 0.5560E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001674 0.1133E-07 0.1946E-09 0.1150E-07 0.7193E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001675 0.1127E-07 0.2014E-09 0.1159E-07 0.9874E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001676 0.1124E-07 0.1860E-09 0.1176E-07 0.7581E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001677 0.1136E-07 0.1872E-09 0.1110E-07 0.6278E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001678 0.1148E-07 0.1896E-09 0.1101E-07 0.6205E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001679 0.1170E-07 0.1976E-09 0.1092E-07 0.6785E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 24 0001680 0.1094E-07 0.1819E-09 0.1098E-07 0.5029E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001681 0.1088E-07 0.1830E-09 0.1096E-07 0.5602E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001682 0.1082E-07 0.1875E-09 0.1096E-07 0.4991E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 
22 0001683 0.1077E-07 0.1956E-09 0.1102E-07 0.1072E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001684 0.1073E-07 0.1768E-09 0.1116E-07 0.9974E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001685 0.1070E-07 0.1815E-09 0.1140E-07 0.9096E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001686 0.1089E-07 0.1868E-09 0.1054E-07 0.1092E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001687 0.1106E-07 0.1748E-09 0.1045E-07 0.5049E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001688 0.1046E-07 0.1743E-09 0.1048E-07 0.6080E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_ITS iterations 10000 0001689 0.1040E-07 0.1744E-09 0.1046E-07 0.4334E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 23 0001690 0.1034E-07 0.1760E-09 0.1045E-07 0.5514E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001691 0.1029E-07 0.1806E-09 0.1048E-07 0.7314E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001692 0.1025E-07 0.1674E-09 0.1057E-07 0.8353E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001693 0.1022E-07 0.1695E-09 0.1075E-07 0.6754E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001694 0.1035E-07 0.1720E-09 0.1008E-07 0.6603E-10 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001695 0.1049E-07 0.1771E-09 0.1000E-07 0.8360E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001696 0.1071E-07 0.1651E-09 0.9915E-08 0.5701E-10 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 22 0001697 0.9942E-08 0.1634E-09 0.9978E-08 0.6631E-10 0.0000E+00 TIME FOR CALCULATION: 0.4107E+05 L2-NORM ERROR U VELOCITY 1.073664051466740E-005 L2-NORM ERROR V VELOCITY 1.133490448715052E-005 L2-NORM ERROR W VELOCITY 1.095392594962641E-005 L2-NORM ERROR ABS. VELOCITY 1.412168740613489E-005 L2-NORM ERROR PRESSURE 1.307959628670870E-003 *** CALCULATION FINISHED - SEE RESULTS *** ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./caffa3d.MB.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0005 with 1 processor, by gu08vomo Mon Feb 16 23:14:32 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015

                         Max       Max/Min        Avg      Total
Time (sec):           4.111e+04      1.00000   4.111e+04
Objects:              2.920e+05      1.00000   2.920e+05
Flops:                1.925e+13      1.00000   1.925e+13  1.925e+13
Flops/sec:            4.684e+08      1.00000   4.684e+08  4.684e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 4.5673e+03  11.1%  6.5910e+06   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 1:        MOMENTUM: 1.5111e+03   3.7%  1.2057e+12   6.3%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 2:        PRESCORR: 3.5028e+04  85.2%  1.8048e+13  93.7%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage ThreadCommRunKer 8486 1.0 6.3159e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNorm 1 1.0 2.2623e-01 1.0 4.39e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 67 0 0 0 19 VecScale 1 1.0 1.8220e-03 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 1206 VecSet 67886 1.0 6.2601e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 VecScatterBegin 76376 1.0 2.1529e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 1 1.0 1.8241e-03 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 1204 MatAssemblyBegin 3394 1.0 9.6750e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 3394 1.0 6.2648e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 --- Event Stage 1: MOMENTUM VecMDot 5721 1.0 1.3349e+01 1.0 2.79e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 2 0 0 0 2091 VecNorm 20994 1.0 2.4870e+01 1.0 9.22e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 8 0 0 0 3709 VecScale 10812 1.0 1.4489e+01 1.0 2.38e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 2 0 0 0 1639 VecCopy 10182 1.0 2.1544e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 VecSet 5091 1.0 4.6968e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 15273 1.0 3.9258e+01 1.0 6.71e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 3 6 0 0 0 1709 VecMAXPY 10812 1.0 3.1309e+01 1.0 5.30e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 4 0 0 0 1694 VecNormalize 10812 1.0 2.7416e+01 1.0 7.13e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 6 0 0 0 2599 MatMult 15903 1.0 2.8189e+02 1.0 4.32e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 19 36 0 0 0 1533 MatSolve 15903 1.0 4.5531e+02 1.0 4.32e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 30 36 0 0 0 949 MatLUFactorNum 1697 1.0 1.7782e+02 1.0 7.75e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 12 6 0 0 0 436 MatILUFactorSym 1697 1.0 1.4325e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 9 0 0 0 0 0 MatGetRowIJ 1697 1.0 3.5739e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1697 1.0 1.4906e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 KSPGMRESOrthog 5721 1.0 2.9044e+01 1.0 5.58e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 5 0 0 0 1922 KSPSetUp 5091 1.0 1.4322e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 9 0 0 0 0 0 KSPSolve 5091 1.0 1.2773e+03 1.0 1.02e+12 1.0 0.0e+00 0.0e+00 0.0e+00 3 5 0 0 0 85 85 0 0 0 801 PCSetUp 1697 1.0 3.3801e+02 1.0 7.75e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 22 6 0 0 0 229 PCApply 15903 1.0 4.5533e+02 1.0 4.32e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 30 36 0 0 0 949 --- Event Stage 2: PRESCORR VecMDot 90355 1.0 2.2675e+02 1.0 6.19e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 3 0 0 0 1 3 0 0 0 2731 VecTDot 75581 1.0 1.4988e+02 1.0 3.32e+11 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 2216 VecNorm 97143 1.0 
7.7793e+01 1.0 2.70e+11 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 3470 VecScale 56001 1.0 2.7295e+01 1.0 4.46e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1633 VecCopy 10182 1.0 1.4394e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 534887 1.0 3.0325e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 82284 1.0 2.0990e+02 1.0 3.47e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 1655 VecAYPX 154386 1.0 2.0929e+02 1.0 2.53e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 1207 VecMAXPY 95446 1.0 2.8455e+02 1.0 7.00e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 4 0 0 0 1 4 0 0 0 2461 VecAssemblyBegin 5091 1.0 4.4203e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 5091 1.0 6.8688e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 95446 1.0 7.3898e+01 1.0 4.46e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 603 VecSetRandom 5091 1.0 4.0516e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 56001 1.0 5.2125e+01 1.0 1.34e+11 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2566 MatMult 208690 1.0 2.0945e+03 1.0 3.30e+12 1.0 0.0e+00 0.0e+00 0.0e+00 5 17 0 0 0 6 18 0 0 0 1577 MatMultAdd 118335 1.0 8.0683e+02 1.0 6.36e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 3 0 0 0 2 4 0 0 0 788 MatMultTranspose 197225 1.0 9.2099e+02 1.0 6.36e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 3 0 0 0 3 4 0 0 0 691 MatSOR 236670 1.0 9.2102e+03 1.0 9.10e+12 1.0 0.0e+00 0.0e+00 0.0e+00 22 47 0 0 0 26 50 0 0 0 988 MatConvert 10182 1.0 1.6989e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatScale 15273 1.0 1.1443e+02 1.0 9.85e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 861 MatResidual 118335 1.0 1.0839e+03 1.0 1.65e+12 1.0 0.0e+00 0.0e+00 0.0e+00 3 9 0 0 0 3 9 0 0 0 1526 MatAssemblyBegin 52607 1.0 9.5365e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 52607 1.0 6.2471e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 MatGetRow -966829610 1.0 8.1121e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 MatCoarsen 5091 1.0 3.0592e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatView 6 1.0 4.8399e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAXPY 5091 1.0 6.3822e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 MatMatMult 5091 1.0 7.4095e+02 1.0 8.45e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 114 MatMatMultSym 5091 1.0 5.0252e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatMatMultNum 5091 1.0 2.3840e+02 1.0 8.45e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 354 MatPtAP 5091 1.0 3.9072e+03 1.0 5.04e+11 1.0 0.0e+00 0.0e+00 0.0e+00 10 3 0 0 0 11 3 0 0 0 129 MatPtAPSymbolic 5091 1.0 1.4575e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 4 0 0 0 0 4 0 0 0 0 0 MatPtAPNumeric 5091 1.0 2.4496e+03 1.0 5.04e+11 1.0 0.0e+00 0.0e+00 0.0e+00 6 3 0 0 0 7 3 0 0 0 206 MatTrnMatMult 5091 1.0 1.0514e+04 1.0 1.07e+12 1.0 0.0e+00 0.0e+00 0.0e+00 26 6 0 0 0 30 6 0 0 0 102 MatTrnMatMultSym 5091 1.0 4.9609e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 12 0 0 0 0 14 0 0 0 0 0 MatTrnMatMultNum 5091 1.0 5.5532e+03 1.0 1.07e+12 1.0 0.0e+00 0.0e+00 0.0e+00 14 6 0 0 0 16 6 0 0 0 193 MatGetSymTrans 10182 1.0 2.5778e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 KSPGMRESOrthog 50910 1.0 2.9287e+02 1.0 8.92e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 5 0 0 0 1 5 0 0 0 3045 KSPSetUp 16970 1.0 8.0605e+01 1.0 0.00e+00 0.0 0.0e+00 
0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve            1697 1.0 3.4988e+04 1.0 1.80e+13 1.0 0.0e+00 0.0e+00 0.0e+00 85 93  0  0  0 100100  0  0  0   514
PCGAMGgraph_AGG     5091 1.0 2.4908e+03 1.0 7.12e+10 1.0 0.0e+00 0.0e+00 0.0e+00  6  0  0  0  0   7  0  0  0  0    29
PCGAMGcoarse_AGG    5091 1.0 1.1338e+04 1.0 1.07e+12 1.0 0.0e+00 0.0e+00 0.0e+00 28  6  0  0  0  32  6  0  0  0    95
PCGAMGProl_AGG      5091 1.0 6.5076e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
PCGAMGPOpt_AGG      5091 1.0 3.0760e+03 1.0 1.94e+12 1.0 0.0e+00 0.0e+00 0.0e+00  7 10  0  0  0   9 11  0  0  0   631
PCSetUp             3394 1.0 2.1549e+04 1.0 3.59e+12 1.0 0.0e+00 0.0e+00 0.0e+00 52 19  0  0  0  62 20  0  0  0   167
PCSetUpOnBlocks    39445 1.0 4.5932e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCApply            39445 1.0 1.2072e+04 1.0 1.20e+13 1.0 0.0e+00 0.0e+00 0.0e+00 29 62  0  0  0  34 67  0  0  0   996

--- Event Stage 3: Unknown

------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector        67             95    833878448     0
      Vector Scatter         2              2         1304     0
           Index Set         4              7     17581544     0
   IS L to G Mapping         2              2     17577192     0
              Matrix         1             11    490350860     0
   Matrix Null Space         0              1          620     0
       Krylov Solver         0              7    117399992     0
      Preconditioner         0              7       482316     0

--- Event Stage 1: MOMENTUM

              Vector     61096          61082  536882926608     0
           Index Set      5091           5088   29812925696     0
              Matrix      1697           1696  358422464000     0
   Matrix Null Space         1              0            0     0
       Krylov Solver         2              0            0     0
      Preconditioner         2              0            0     0

--- Event Stage 2: PRESCORR

              Vector    156130         156105  816028335912     0
           Index Set      5091           5091      4032072     0
              Matrix     42425          42416 1154972646516     0
      Matrix Coarsen      5091           5091      3278604     0
       Krylov Solver      5096           5091    153829656     0
      Preconditioner      5096           5091      5335368     0
         PetscRandom      5091           5091      3258240     0
              Viewer         1              0            0     0

--- Event Stage 3: Unknown

========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-log_summary
-momentum_ksp_type gmres
-options_left
-pressure_ksp_converged_reason
-pressure_mg_coarse_sub_pc_type svd
-pressure_mg_levels_ksp_rtol 1e-4
-pressure_mg_levels_ksp_type richardson
-pressure_mg_levels_pc_type sor
-pressure_pc_gamg_agg_nsmooths 1
-pressure_pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml
-----------------------------------------
Libraries compiled on Sun Feb 1 16:09:22 2015 on hla0003
Machine characteristics: Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64
Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3
Using PETSc arch: arch-openmpi-opt-intel-hlr-ext
-----------------------------------------
Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc  -fPIC -wd1572 -O3 -xHost  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90  -fPIC -O3 -xHost   ${FOPTFLAGS} ${FFLAGS}
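As a minimal illustration (not the poster's application code): the option table above drives two prefixed solvers (-momentum_* and -pressure_*), and the MOMENTUM/PRESCORR stages in the summary come from user-registered logging stages. The C sketch below, written against the PETSc 3.5 API, shows how a "pressure_"-prefixed KSP consumes those options via KSPSetOptionsPrefix()/KSPSetFromOptions() and how a named stage is registered and pushed so -log_summary reports it; the toy 1-D Laplacian, the size n, and the stage name are illustrative assumptions only.

/* Hedged sketch: prefixed options + a logging stage, PETSc 3.5 C API. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat                A;
  Vec                x, b;
  KSP                pressure;
  PetscLogStage      prescorr;
  KSPConvergedReason reason;
  PetscInt           i, n = 100, Istart, Iend;
  PetscErrorCode     ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* Toy SPD operator standing in for the pressure-correction matrix */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)   { ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
    if (i < n-1) { ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecSetSizes(b, PETSC_DECIDE, n);CHKERRQ(ierr);
  ierr = VecSetFromOptions(b);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  /* Work between Push/Pop is attributed to this stage in -log_summary */
  ierr = PetscLogStageRegister("PRESCORR", &prescorr);CHKERRQ(ierr);

  /* The "pressure_" prefix routes all -pressure_* options to this solver */
  ierr = KSPCreate(PETSC_COMM_WORLD, &pressure);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(pressure, "pressure_");CHKERRQ(ierr);
  ierr = KSPSetOperators(pressure, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(pressure);CHKERRQ(ierr);

  ierr = PetscLogStagePush(prescorr);CHKERRQ(ierr);
  ierr = KSPSolve(pressure, b, x);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  /* -pressure_ksp_converged_reason prints the same information this queries */
  ierr = KSPGetConvergedReason(pressure, &reason);CHKERRQ(ierr);
  if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "pressure solve diverged: reason %d\n", (int)reason);CHKERRQ(ierr);
  }

  ierr = KSPDestroy(&pressure);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

Run with the options from the table (for example: mpirun -n 1 ./sketch -pressure_pc_type gamg -pressure_pc_gamg_agg_nsmooths 1 -log_summary) and a PRESCORR stage appears in the summary in the same way as above.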
----------------------------------------- Using include paths: -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include ----------------------------------------- Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 
-L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl ----------------------------------------- #PETSc Option Table entries: -log_summary -momentum_ksp_type gmres -options_left -pressure_ksp_converged_reason -pressure_mg_coarse_sub_pc_type svd -pressure_mg_levels_ksp_rtol 1e-4 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_pc_gamg_agg_nsmooths 1 -pressure_pc_type gamg #End of PETSc Option Table entries There are no unused options. -------------- next part -------------- Sender: LSF System Subject: Job 429762: in cluster Done Job was submitted from host by user in cluster . Job was executed on host(s) , in queue , as user in cluster . was used as the home directory. was used as the working directory. Started at Wed Feb 11 00:08:29 2015 Results reported at Wed Feb 11 10:36:23 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input #! 
/bin/sh
#BSUB -J mg_test
#BSUB -o /home/gu08vomo/thesis/mgtest/icc.128.out.%J
#BSUB -n 1
#BSUB -W 14:00
#BSUB -x
#BSUB -q test_mpi2
#BSUB -a openmpi
module load openmpi/intel/1.8.2
#export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr
export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext
export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg/
export OUTPUTDIR=/home/gu08vomo/thesis/coupling
export PETSC_OPS="-options_file ops.icc"
cat ops.icc
echo "PETSC_DIR="$PETSC_DIR
echo "MYWORKDIR="$MYWORKDIR
cd $MYWORKDIR
mpirun -n 1 ./caffa3d.MB.lnx ${PETSC_OPS}

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time :               37692.72 sec.
    Max Memory :             2031 MB
    Average Memory :         2011.15 MB
    Total Requested Memory : -
    Delta Memory :           -
    (Delta: the difference between total requested memory and actual max usage.)
    Max Swap :               2805 MB
    Max Processes :          6
    Max Threads :            11

The output (if any) follows:

Modules: loading openmpi/intel/1.8.2
-log_summary
-options_left
-pressure_ksp_converged_reason
-ressure_pc_type icc
PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext
MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg/
 ENTER PROBLEM NAME (SIX CHARACTERS):
 ***************************************************
 NAME OF PROBLEM SOLVED control
 ***************************************************
 ***************************************************
 CONTROL SETTINGS
 ***************************************************
 LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD
  F F T F F F F F
 IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD
  8 9 8 1 0 2 2 3 1 1 1
 SORMAX, SLARGE, ALFA
  0.1000E-07 0.1000E+31 0.9200E+00
 (URF(I),I=1,5)
  0.9000E+00 0.9000E+00 0.9000E+00 0.1000E+00 0.1000E+01
 (SOR(I),I=1,5)
  0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00
 (GDS(I),I=1,5) - BLENDING (CDS-UDS)
  0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01
 LSG
  100000
 ***************************************************
 START SIMPLE RELAXATIONS
 ***************************************************
Linear solve converged due to CONVERGED_RTOL iterations 29
KSP Object:(pressure_) 1 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=0.1, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using PRECONDITIONED norm type for convergence test
PC Object:(pressure_) 1 MPI processes
  type: ilu
    ILU: out-of-place factorization
    0 levels of fill
    tolerance for zero pivot 2.22045e-14
    using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
    matrix ordering: natural
    factor fill ratio given 1, needed 1
      Factored matrix follows:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=2197000, cols=2197000
          package used to perform factorization: petsc
          total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: seqaij
    rows=2197000, cols=2197000
    total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07
    total number of mallocs used during MatSetValues calls =0
      not using I-node routines
  0000001  0.1000E+01  0.1000E+01  0.1000E+01  0.1000E+01  0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 31
  0000002  0.8850E+00  0.8273E+00  0.8851E+00  0.7264E+00  0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 40
  0000003  0.2792E+00  0.2452E+00  0.2793E+00  0.2116E+00
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 44 0000004 0.1129E+00 0.9670E-01 0.1130E+00 0.8806E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 30 0000005 0.4693E-01 0.3579E-01 0.4696E-01 0.5137E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 37 0000006 0.2598E-01 0.1605E-01 0.2599E-01 0.3425E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 38 0000007 0.1929E-01 0.1031E-01 0.1929E-01 0.2470E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 59 0000008 0.1615E-01 0.8123E-02 0.1615E-01 0.1890E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 39 0000009 0.1412E-01 0.6815E-02 0.1411E-01 0.1504E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 30 0000010 0.1260E-01 0.5890E-02 0.1260E-01 0.1227E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 40 0000011 0.1141E-01 0.5197E-02 0.1141E-01 0.1019E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 55 0000012 0.1044E-01 0.4657E-02 0.1044E-01 0.8576E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 37 0000013 0.9639E-02 0.4224E-02 0.9637E-02 0.7288E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 48 0000014 0.8956E-02 0.3875E-02 0.8955E-02 0.6241E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 70 0000015 0.8372E-02 0.3578E-02 0.8371E-02 0.5377E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 47 0000016 0.7864E-02 0.3329E-02 0.7862E-02 0.4655E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 51 0000017 0.7417E-02 0.3116E-02 0.7416E-02 0.4046E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 53 0000018 0.7022E-02 0.2929E-02 0.7021E-02 0.3528E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 55 0000019 0.6668E-02 0.2766E-02 0.6668E-02 0.3085E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 57 0000020 0.6351E-02 0.2621E-02 0.6350E-02 0.2704E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 61 0000021 0.6063E-02 0.2492E-02 0.6063E-02 0.2376E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 82 0000022 0.5802E-02 0.2376E-02 0.5801E-02 0.2091E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 58 0000023 0.5563E-02 0.2271E-02 0.5562E-02 0.1843E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 62 0000024 0.5344E-02 0.2176E-02 0.5343E-02 0.1627E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 82 0000025 0.5141E-02 0.2088E-02 0.5141E-02 0.1438E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 85 0000026 0.4954E-02 0.2008E-02 0.4953E-02 0.1273E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 86 0000027 0.4780E-02 0.1934E-02 0.4780E-02 0.1128E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 87 0000028 0.4618E-02 0.1865E-02 0.4618E-02 0.1001E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 88 0000029 0.4467E-02 0.1801E-02 0.4467E-02 0.8886E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 89 0000030 0.4326E-02 0.1741E-02 0.4325E-02 0.7898E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 91 0000031 0.4193E-02 0.1686E-02 0.4192E-02 0.7026E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 92 0000032 0.4068E-02 0.1633E-02 0.4067E-02 0.6256E-03 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 110 0000033 0.3950E-02 0.1584E-02 0.3950E-02 0.5576E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 110 0000034 0.3838E-02 0.1538E-02 0.3838E-02 0.4974E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 112 0000035 0.3733E-02 0.1494E-02 0.3733E-02 0.4442E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 124 0000036 0.3633E-02 0.1453E-02 0.3633E-02 0.3970E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 126 0000037 0.3538E-02 0.1413E-02 0.3538E-02 0.3552E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 128 0000038 0.3448E-02 0.1376E-02 0.3448E-02 0.3181E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 130 0000039 0.3362E-02 0.1340E-02 0.3362E-02 0.2853E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 136 0000040 0.3280E-02 0.1307E-02 0.3280E-02 0.2561E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 138 0000041 0.3202E-02 0.1274E-02 0.3202E-02 0.2302E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 143 0000042 0.3127E-02 0.1243E-02 0.3127E-02 0.2072E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 145 0000043 0.3055E-02 0.1214E-02 0.3055E-02 0.1868E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 146 0000044 0.2986E-02 0.1186E-02 0.2986E-02 0.1686E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 148 0000045 0.2920E-02 0.1159E-02 0.2921E-02 0.1525E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 149 0000046 0.2857E-02 0.1133E-02 0.2857E-02 0.1381E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 150 0000047 0.2796E-02 0.1108E-02 0.2796E-02 0.1254E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 151 0000048 0.2738E-02 0.1084E-02 0.2738E-02 0.1141E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 151 0000049 0.2682E-02 0.1061E-02 0.2682E-02 0.1040E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 152 0000050 0.2627E-02 0.1038E-02 0.2627E-02 0.9501E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 153 0000051 0.2575E-02 0.1017E-02 0.2575E-02 0.8702E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 154 0000052 0.2524E-02 0.9962E-03 0.2525E-02 0.7992E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 155 0000053 0.2476E-02 0.9762E-03 0.2476E-02 0.7359E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 155 0000054 0.2429E-02 0.9569E-03 0.2429E-02 0.6795E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 156 0000055 0.2383E-02 0.9383E-03 0.2383E-02 0.6291E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 157 0000056 0.2339E-02 0.9202E-03 0.2339E-02 0.5842E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 157 0000057 0.2296E-02 0.9028E-03 0.2296E-02 0.5440E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 159 0000058 0.2255E-02 0.8859E-03 0.2255E-02 0.5081E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 159 0000059 0.2215E-02 0.8695E-03 0.2215E-02 0.4759E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 160 0000060 0.2176E-02 0.8537E-03 0.2176E-02 0.4470E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 160 0000061 0.2138E-02 0.8383E-03 0.2138E-02 
0.4210E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 161 0000062 0.2101E-02 0.8233E-03 0.2102E-02 0.3975E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 161 0000063 0.2066E-02 0.8088E-03 0.2066E-02 0.3763E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 161 0000064 0.2031E-02 0.7947E-03 0.2032E-02 0.3571E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 162 0000065 0.1998E-02 0.7810E-03 0.1998E-02 0.3397E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 162 0000066 0.1965E-02 0.7677E-03 0.1965E-02 0.3239E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 163 0000067 0.1933E-02 0.7548E-03 0.1933E-02 0.3094E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 163 0000068 0.1902E-02 0.7422E-03 0.1902E-02 0.2961E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 164 0000069 0.1872E-02 0.7299E-03 0.1872E-02 0.2839E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 164 0000070 0.1842E-02 0.7180E-03 0.1843E-02 0.2727E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 164 0000071 0.1814E-02 0.7064E-03 0.1814E-02 0.2623E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 164 0000072 0.1786E-02 0.6950E-03 0.1786E-02 0.2527E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 165 0000073 0.1759E-02 0.6840E-03 0.1759E-02 0.2437E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 165 0000074 0.1732E-02 0.6732E-03 0.1732E-02 0.2354E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 165 0000075 0.1706E-02 0.6627E-03 0.1706E-02 0.2276E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 166 0000076 0.1681E-02 0.6524E-03 0.1681E-02 0.2203E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 166 0000077 0.1656E-02 0.6424E-03 0.1656E-02 0.2135E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 166 0000078 0.1632E-02 0.6326E-03 0.1632E-02 0.2070E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 166 0000079 0.1608E-02 0.6231E-03 0.1608E-02 0.2009E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000080 0.1585E-02 0.6138E-03 0.1585E-02 0.1951E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000081 0.1562E-02 0.6046E-03 0.1563E-02 0.1897E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000082 0.1540E-02 0.5957E-03 0.1540E-02 0.1845E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000083 0.1519E-02 0.5870E-03 0.1519E-02 0.1796E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000084 0.1497E-02 0.5785E-03 0.1498E-02 0.1749E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000085 0.1477E-02 0.5702E-03 0.1477E-02 0.1704E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 167 0000086 0.1456E-02 0.5620E-03 0.1457E-02 0.1661E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 168 0000087 0.1436E-02 0.5540E-03 0.1437E-02 0.1620E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 168 0000088 0.1417E-02 0.5462E-03 0.1417E-02 0.1581E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 168 0000089 0.1398E-02 0.5385E-03 0.1398E-02 0.1543E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 168 0000090 0.1379E-02 
0.5310E-03 0.1380E-02 0.1507E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 168 0000091 0.1361E-02 0.5237E-03 0.1361E-02 0.1472E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 168 0000092 0.1343E-02 0.5165E-03 0.1343E-02 0.1438E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 169 0000093 0.1325E-02 0.5095E-03 0.1326E-02 0.1406E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 169 0000094 0.1308E-02 0.5025E-03 0.1308E-02 0.1375E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 169 0000095 0.1291E-02 0.4958E-03 0.1291E-02 0.1345E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 169 0000096 0.1274E-02 0.4891E-03 0.1275E-02 0.1316E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 169 0000097 0.1258E-02 0.4826E-03 0.1258E-02 0.1288E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 170 0000098 0.1242E-02 0.4762E-03 0.1242E-02 0.1261E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 170 0000099 0.1226E-02 0.4699E-03 0.1227E-02 0.1234E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 170 0000100 0.1211E-02 0.4638E-03 0.1211E-02 0.1209E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 170 0000101 0.1196E-02 0.4577E-03 0.1196E-02 0.1185E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 170 0000102 0.1181E-02 0.4518E-03 0.1181E-02 0.1161E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 171 0000103 0.1166E-02 0.4460E-03 0.1166E-02 0.1138E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 171 0000104 0.1152E-02 0.4403E-03 0.1152E-02 0.1115E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 171 0000105 0.1138E-02 0.4347E-03 0.1138E-02 0.1093E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 171 0000106 0.1124E-02 0.4291E-03 0.1124E-02 0.1072E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 171 0000107 0.1110E-02 0.4237E-03 0.1110E-02 0.1052E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 171 0000108 0.1097E-02 0.4184E-03 0.1097E-02 0.1032E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 172 0000109 0.1083E-02 0.4132E-03 0.1083E-02 0.1012E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 172 0000110 0.1070E-02 0.4080E-03 0.1070E-02 0.9936E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 172 0000111 0.1058E-02 0.4030E-03 0.1058E-02 0.9752E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 172 0000112 0.1045E-02 0.3980E-03 0.1045E-02 0.9574E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 172 0000113 0.1033E-02 0.3931E-03 0.1033E-02 0.9400E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 173 0000114 0.1020E-02 0.3883E-03 0.1020E-02 0.9231E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 173 0000115 0.1008E-02 0.3836E-03 0.1008E-02 0.9066E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 173 0000116 0.9965E-03 0.3789E-03 0.9966E-03 0.8905E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 173 0000117 0.9849E-03 0.3743E-03 0.9850E-03 0.8749E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 173 0000118 0.9734E-03 0.3698E-03 0.9735E-03 0.8596E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 173 
0000119 0.9621E-03 0.3654E-03 0.9622E-03 0.8447E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 174 0000120 0.9510E-03 0.3611E-03 0.9511E-03 0.8302E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 174 0000121 0.9401E-03 0.3568E-03 0.9402E-03 0.8160E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 174 0000122 0.9293E-03 0.3525E-03 0.9294E-03 0.8022E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 174 0000123 0.9187E-03 0.3484E-03 0.9188E-03 0.7887E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 174 0000124 0.9082E-03 0.3443E-03 0.9083E-03 0.7755E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 174 0000125 0.8979E-03 0.3403E-03 0.8980E-03 0.7626E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 175 0000126 0.8877E-03 0.3363E-03 0.8878E-03 0.7501E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 175 0000127 0.8777E-03 0.3324E-03 0.8778E-03 0.7378E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 175 0000128 0.8678E-03 0.3285E-03 0.8679E-03 0.7258E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 175 0000129 0.8581E-03 0.3248E-03 0.8582E-03 0.7140E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 176 0000130 0.8485E-03 0.3210E-03 0.8485E-03 0.7025E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 176 0000131 0.8390E-03 0.3173E-03 0.8391E-03 0.6913E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 176 0000132 0.8297E-03 0.3137E-03 0.8297E-03 0.6804E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 176 0000133 0.8205E-03 0.3101E-03 0.8205E-03 0.6696E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 176 0000134 0.8114E-03 0.3066E-03 0.8114E-03 0.6591E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 177 0000135 0.8025E-03 0.3032E-03 0.8025E-03 0.6489E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 177 0000136 0.7936E-03 0.2997E-03 0.7937E-03 0.6388E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 177 0000137 0.7849E-03 0.2964E-03 0.7849E-03 0.6290E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 177 0000138 0.7763E-03 0.2930E-03 0.7763E-03 0.6193E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 178 0000139 0.7678E-03 0.2898E-03 0.7678E-03 0.6099E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 178 0000140 0.7595E-03 0.2865E-03 0.7595E-03 0.6006E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 179 0000141 0.7512E-03 0.2833E-03 0.7512E-03 0.5916E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 179 0000142 0.7431E-03 0.2802E-03 0.7430E-03 0.5827E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 179 0000143 0.7350E-03 0.2771E-03 0.7350E-03 0.5740E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 179 0000144 0.7271E-03 0.2740E-03 0.7270E-03 0.5655E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 179 0000145 0.7192E-03 0.2710E-03 0.7192E-03 0.5572E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000146 0.7115E-03 0.2681E-03 0.7115E-03 0.5490E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000147 0.7039E-03 0.2651E-03 0.7038E-03 0.5410E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 180 0000148 0.6963E-03 0.2622E-03 0.6963E-03 0.5331E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000149 0.6889E-03 0.2594E-03 0.6888E-03 0.5254E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000150 0.6815E-03 0.2565E-03 0.6815E-03 0.5178E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000151 0.6743E-03 0.2538E-03 0.6742E-03 0.5104E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000152 0.6671E-03 0.2510E-03 0.6670E-03 0.5031E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 180 0000153 0.6600E-03 0.2483E-03 0.6599E-03 0.4960E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000154 0.6530E-03 0.2456E-03 0.6529E-03 0.4890E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000155 0.6461E-03 0.2430E-03 0.6460E-03 0.4821E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000156 0.6393E-03 0.2404E-03 0.6392E-03 0.4754E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000157 0.6325E-03 0.2378E-03 0.6324E-03 0.4687E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000158 0.6259E-03 0.2353E-03 0.6258E-03 0.4622E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000159 0.6193E-03 0.2328E-03 0.6192E-03 0.4558E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000160 0.6128E-03 0.2303E-03 0.6127E-03 0.4496E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000161 0.6064E-03 0.2279E-03 0.6062E-03 0.4434E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 181 0000162 0.6000E-03 0.2254E-03 0.5999E-03 0.4374E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000163 0.5937E-03 0.2231E-03 0.5936E-03 0.4314E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000164 0.5875E-03 0.2207E-03 0.5874E-03 0.4256E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000165 0.5814E-03 0.2184E-03 0.5813E-03 0.4198E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000166 0.5754E-03 0.2161E-03 0.5752E-03 0.4142E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000167 0.5694E-03 0.2138E-03 0.5692E-03 0.4087E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000168 0.5635E-03 0.2116E-03 0.5633E-03 0.4032E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000169 0.5576E-03 0.2094E-03 0.5575E-03 0.3979E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000170 0.5519E-03 0.2072E-03 0.5517E-03 0.3926E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000171 0.5461E-03 0.2050E-03 0.5460E-03 0.3874E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000172 0.5405E-03 0.2029E-03 0.5403E-03 0.3823E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000173 0.5349E-03 0.2008E-03 0.5348E-03 0.3773E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000174 0.5294E-03 0.1987E-03 0.5292E-03 0.3724E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 182 0000175 0.5240E-03 0.1966E-03 0.5238E-03 0.3676E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000176 0.5186E-03 0.1946E-03 0.5184E-03 0.3628E-05 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 183 0000177 0.5132E-03 0.1926E-03 0.5131E-03 0.3582E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000178 0.5080E-03 0.1906E-03 0.5078E-03 0.3535E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000179 0.5028E-03 0.1887E-03 0.5026E-03 0.3490E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000180 0.4976E-03 0.1867E-03 0.4974E-03 0.3446E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000181 0.4925E-03 0.1848E-03 0.4923E-03 0.3402E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000182 0.4875E-03 0.1829E-03 0.4873E-03 0.3359E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000183 0.4825E-03 0.1810E-03 0.4823E-03 0.3316E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 183 0000184 0.4776E-03 0.1792E-03 0.4774E-03 0.3274E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 184 0000185 0.4727E-03 0.1773E-03 0.4725E-03 0.3233E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 184 0000186 0.4679E-03 0.1755E-03 0.4677E-03 0.3193E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 184 0000187 0.4631E-03 0.1738E-03 0.4629E-03 0.3153E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 184 0000188 0.4584E-03 0.1720E-03 0.4582E-03 0.3114E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 185 0000189 0.4537E-03 0.1702E-03 0.4535E-03 0.3075E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 185 0000190 0.4491E-03 0.1685E-03 0.4489E-03 0.3037E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 185 0000191 0.4446E-03 0.1668E-03 0.4444E-03 0.2999E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 185 0000192 0.4401E-03 0.1651E-03 0.4399E-03 0.2962E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 185 0000193 0.4356E-03 0.1634E-03 0.4354E-03 0.2926E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 186 0000194 0.4312E-03 0.1618E-03 0.4310E-03 0.2890E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 186 0000195 0.4268E-03 0.1602E-03 0.4266E-03 0.2855E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 186 0000196 0.4225E-03 0.1585E-03 0.4223E-03 0.2820E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000197 0.4183E-03 0.1569E-03 0.4180E-03 0.2786E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000198 0.4140E-03 0.1554E-03 0.4138E-03 0.2752E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000199 0.4099E-03 0.1538E-03 0.4096E-03 0.2719E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000200 0.4057E-03 0.1523E-03 0.4055E-03 0.2686E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000201 0.4016E-03 0.1507E-03 0.4014E-03 0.2654E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000202 0.3976E-03 0.1492E-03 0.3974E-03 0.2622E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 187 0000203 0.3936E-03 0.1477E-03 0.3934E-03 0.2591E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 188 0000204 0.3896E-03 0.1462E-03 0.3894E-03 0.2560E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 188 0000205 0.3857E-03 0.1448E-03 0.3855E-03 0.2530E-05 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 188
0000206 0.3818E-03 0.1433E-03 0.3816E-03 0.2500E-05 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 188
[solver log continues in the same pattern through timestep 0000811: each step prints five residual norms followed by "Linear solve converged due to CONVERGED_RTOL iterations N"; the norms decay steadily (first column from 0.3780E-03 at step 0000207 to 0.1684E-05 at step 0000811, fourth column from 0.2470E-05 to 0.7368E-08) while the KSP iteration count per step grows from 188 to 295, every solve reporting CONVERGED_RTOL]
0000811 0.1684E-05 0.3838E-06 0.1694E-05 0.7368E-08 0.0000E+00
Linear solve
converged due to CONVERGED_RTOL iterations 295 0000812 0.1673E-05 0.3800E-06 0.1682E-05 0.7309E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000813 0.1661E-05 0.3763E-06 0.1670E-05 0.7251E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000814 0.1649E-05 0.3726E-06 0.1658E-05 0.7194E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000815 0.1638E-05 0.3690E-06 0.1647E-05 0.7137E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000816 0.1627E-05 0.3653E-06 0.1635E-05 0.7081E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000817 0.1615E-05 0.3618E-06 0.1624E-05 0.7025E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000818 0.1604E-05 0.3582E-06 0.1613E-05 0.6969E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000819 0.1593E-05 0.3547E-06 0.1602E-05 0.6915E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000820 0.1582E-05 0.3512E-06 0.1591E-05 0.6860E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000821 0.1571E-05 0.3478E-06 0.1580E-05 0.6806E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000822 0.1560E-05 0.3444E-06 0.1569E-05 0.6753E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000823 0.1549E-05 0.3410E-06 0.1558E-05 0.6700E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000824 0.1539E-05 0.3377E-06 0.1547E-05 0.6647E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000825 0.1528E-05 0.3344E-06 0.1537E-05 0.6595E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000826 0.1518E-05 0.3311E-06 0.1526E-05 0.6544E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 295 0000827 0.1507E-05 0.3279E-06 0.1516E-05 0.6493E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000828 0.1497E-05 0.3247E-06 0.1505E-05 0.6442E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000829 0.1487E-05 0.3215E-06 0.1495E-05 0.6392E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000830 0.1477E-05 0.3183E-06 0.1485E-05 0.6342E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000831 0.1466E-05 0.3152E-06 0.1474E-05 0.6293E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000832 0.1456E-05 0.3121E-06 0.1464E-05 0.6244E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000833 0.1447E-05 0.3091E-06 0.1454E-05 0.6196E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000834 0.1437E-05 0.3061E-06 0.1445E-05 0.6148E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000835 0.1427E-05 0.3031E-06 0.1435E-05 0.6100E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000836 0.1417E-05 0.3001E-06 0.1425E-05 0.6053E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000837 0.1408E-05 0.2972E-06 0.1415E-05 0.6006E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000838 0.1398E-05 0.2943E-06 0.1406E-05 0.5960E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000839 0.1389E-05 0.2914E-06 0.1396E-05 0.5914E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000840 0.1379E-05 0.2885E-06 0.1387E-05 0.5869E-08 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 296 0000841 0.1370E-05 0.2857E-06 0.1377E-05 0.5824E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000842 0.1361E-05 0.2829E-06 0.1368E-05 0.5779E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000843 0.1351E-05 0.2801E-06 0.1359E-05 0.5735E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000844 0.1342E-05 0.2774E-06 0.1350E-05 0.5691E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000845 0.1333E-05 0.2747E-06 0.1340E-05 0.5647E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000846 0.1324E-05 0.2720E-06 0.1331E-05 0.5604E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000847 0.1315E-05 0.2693E-06 0.1323E-05 0.5561E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000848 0.1307E-05 0.2667E-06 0.1314E-05 0.5519E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000849 0.1298E-05 0.2641E-06 0.1305E-05 0.5477E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000850 0.1289E-05 0.2615E-06 0.1296E-05 0.5435E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000851 0.1280E-05 0.2589E-06 0.1287E-05 0.5394E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000852 0.1272E-05 0.2564E-06 0.1279E-05 0.5353E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000853 0.1263E-05 0.2539E-06 0.1270E-05 0.5312E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000854 0.1255E-05 0.2514E-06 0.1262E-05 0.5272E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000855 0.1247E-05 0.2489E-06 0.1253E-05 0.5232E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000856 0.1238E-05 0.2465E-06 0.1245E-05 0.5193E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000857 0.1230E-05 0.2441E-06 0.1237E-05 0.5154E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000858 0.1222E-05 0.2417E-06 0.1228E-05 0.5115E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 296 0000859 0.1214E-05 0.2393E-06 0.1220E-05 0.5076E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000860 0.1206E-05 0.2370E-06 0.1212E-05 0.5038E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000861 0.1198E-05 0.2347E-06 0.1204E-05 0.5000E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000862 0.1190E-05 0.2324E-06 0.1196E-05 0.4963E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000863 0.1182E-05 0.2301E-06 0.1188E-05 0.4925E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000864 0.1174E-05 0.2279E-06 0.1180E-05 0.4889E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000865 0.1166E-05 0.2256E-06 0.1173E-05 0.4852E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000866 0.1159E-05 0.2234E-06 0.1165E-05 0.4816E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000867 0.1151E-05 0.2212E-06 0.1157E-05 0.4780E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000868 0.1143E-05 0.2191E-06 0.1149E-05 0.4744E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000869 0.1136E-05 0.2169E-06 0.1142E-05 
0.4709E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000870 0.1128E-05 0.2148E-06 0.1134E-05 0.4674E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000871 0.1121E-05 0.2127E-06 0.1127E-05 0.4639E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000872 0.1114E-05 0.2106E-06 0.1120E-05 0.4605E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000873 0.1106E-05 0.2086E-06 0.1112E-05 0.4571E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000874 0.1099E-05 0.2065E-06 0.1105E-05 0.4537E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000875 0.1092E-05 0.2045E-06 0.1098E-05 0.4503E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000876 0.1085E-05 0.2025E-06 0.1090E-05 0.4470E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000877 0.1078E-05 0.2005E-06 0.1083E-05 0.4437E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000878 0.1071E-05 0.1986E-06 0.1076E-05 0.4404E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000879 0.1064E-05 0.1966E-06 0.1069E-05 0.4372E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000880 0.1057E-05 0.1947E-06 0.1062E-05 0.4340E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000881 0.1050E-05 0.1928E-06 0.1055E-05 0.4308E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000882 0.1043E-05 0.1909E-06 0.1048E-05 0.4276E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000883 0.1036E-05 0.1890E-06 0.1042E-05 0.4245E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000884 0.1029E-05 0.1872E-06 0.1035E-05 0.4214E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000885 0.1023E-05 0.1853E-06 0.1028E-05 0.4183E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000886 0.1016E-05 0.1835E-06 0.1021E-05 0.4153E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000887 0.1010E-05 0.1817E-06 0.1015E-05 0.4122E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000888 0.1003E-05 0.1800E-06 0.1008E-05 0.4092E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000889 0.9966E-06 0.1782E-06 0.1002E-05 0.4063E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000890 0.9901E-06 0.1765E-06 0.9953E-06 0.4033E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000891 0.9837E-06 0.1747E-06 0.9888E-06 0.4004E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000892 0.9774E-06 0.1730E-06 0.9824E-06 0.3975E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000893 0.9711E-06 0.1713E-06 0.9761E-06 0.3946E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000894 0.9648E-06 0.1697E-06 0.9698E-06 0.3917E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000895 0.9586E-06 0.1680E-06 0.9636E-06 0.3889E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 297 0000896 0.9525E-06 0.1663E-06 0.9574E-06 0.3861E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000897 0.9464E-06 0.1647E-06 0.9512E-06 0.3833E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000898 0.9403E-06 
0.1631E-06 0.9451E-06 0.3806E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000899 0.9343E-06 0.1615E-06 0.9390E-06 0.3778E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000900 0.9283E-06 0.1599E-06 0.9330E-06 0.3751E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000901 0.9223E-06 0.1584E-06 0.9270E-06 0.3724E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000902 0.9164E-06 0.1568E-06 0.9211E-06 0.3697E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000903 0.9106E-06 0.1553E-06 0.9152E-06 0.3671E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000904 0.9048E-06 0.1538E-06 0.9094E-06 0.3645E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000905 0.8990E-06 0.1523E-06 0.9035E-06 0.3619E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000906 0.8932E-06 0.1508E-06 0.8978E-06 0.3593E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000907 0.8876E-06 0.1493E-06 0.8920E-06 0.3567E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000908 0.8819E-06 0.1478E-06 0.8864E-06 0.3542E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000909 0.8763E-06 0.1464E-06 0.8807E-06 0.3517E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000910 0.8707E-06 0.1450E-06 0.8751E-06 0.3492E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000911 0.8652E-06 0.1435E-06 0.8695E-06 0.3467E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000912 0.8597E-06 0.1421E-06 0.8640E-06 0.3442E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000913 0.8543E-06 0.1407E-06 0.8585E-06 0.3418E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000914 0.8488E-06 0.1394E-06 0.8531E-06 0.3394E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000915 0.8435E-06 0.1380E-06 0.8477E-06 0.3370E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000916 0.8381E-06 0.1367E-06 0.8423E-06 0.3346E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 298 0000917 0.8328E-06 0.1353E-06 0.8370E-06 0.3322E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000918 0.8276E-06 0.1340E-06 0.8317E-06 0.3299E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000919 0.8223E-06 0.1327E-06 0.8264E-06 0.3276E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000920 0.8172E-06 0.1314E-06 0.8212E-06 0.3253E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000921 0.8120E-06 0.1301E-06 0.8160E-06 0.3230E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000922 0.8069E-06 0.1288E-06 0.8109E-06 0.3207E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000923 0.8018E-06 0.1276E-06 0.8058E-06 0.3185E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000924 0.7968E-06 0.1263E-06 0.8007E-06 0.3162E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000925 0.7918E-06 0.1251E-06 0.7956E-06 0.3140E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000926 0.7868E-06 0.1239E-06 0.7906E-06 0.3118E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 
0000927 0.7819E-06 0.1227E-06 0.7857E-06 0.3097E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000928 0.7769E-06 0.1215E-06 0.7807E-06 0.3075E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000929 0.7721E-06 0.1203E-06 0.7758E-06 0.3054E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000930 0.7672E-06 0.1191E-06 0.7710E-06 0.3032E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000931 0.7624E-06 0.1179E-06 0.7661E-06 0.3011E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 299 0000932 0.7577E-06 0.1168E-06 0.7613E-06 0.2991E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000933 0.7529E-06 0.1156E-06 0.7566E-06 0.2970E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000934 0.7482E-06 0.1145E-06 0.7518E-06 0.2949E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000935 0.7436E-06 0.1134E-06 0.7471E-06 0.2929E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000936 0.7389E-06 0.1123E-06 0.7425E-06 0.2909E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000937 0.7343E-06 0.1112E-06 0.7378E-06 0.2889E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000938 0.7297E-06 0.1101E-06 0.7332E-06 0.2869E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 306 0000939 0.7252E-06 0.1090E-06 0.7287E-06 0.2849E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000940 0.7207E-06 0.1079E-06 0.7241E-06 0.2829E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000941 0.7162E-06 0.1069E-06 0.7196E-06 0.2810E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000942 0.7118E-06 0.1058E-06 0.7151E-06 0.2791E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000943 0.7073E-06 0.1048E-06 0.7107E-06 0.2771E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000944 0.7030E-06 0.1038E-06 0.7063E-06 0.2752E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000945 0.6986E-06 0.1028E-06 0.7019E-06 0.2734E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000946 0.6943E-06 0.1018E-06 0.6975E-06 0.2715E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000947 0.6900E-06 0.1008E-06 0.6932E-06 0.2696E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000948 0.6857E-06 0.9979E-07 0.6889E-06 0.2678E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000949 0.6815E-06 0.9881E-07 0.6846E-06 0.2660E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000950 0.6772E-06 0.9785E-07 0.6804E-06 0.2642E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000951 0.6731E-06 0.9689E-07 0.6762E-06 0.2624E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000952 0.6689E-06 0.9594E-07 0.6720E-06 0.2606E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000953 0.6648E-06 0.9501E-07 0.6679E-06 0.2588E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000954 0.6607E-06 0.9408E-07 0.6637E-06 0.2571E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000955 0.6566E-06 0.9316E-07 0.6596E-06 0.2553E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 308 0000956 0.6526E-06 0.9225E-07 0.6556E-06 0.2536E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 307 0000957 0.6486E-06 0.9135E-07 0.6515E-06 0.2519E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 308 0000958 0.6446E-06 0.9045E-07 0.6475E-06 0.2502E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 308 0000959 0.6406E-06 0.8957E-07 0.6435E-06 0.2485E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 308 0000960 0.6367E-06 0.8870E-07 0.6396E-06 0.2468E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000961 0.6328E-06 0.8783E-07 0.6356E-06 0.2451E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000962 0.6289E-06 0.8697E-07 0.6317E-06 0.2435E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000963 0.6250E-06 0.8612E-07 0.6278E-06 0.2419E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000964 0.6212E-06 0.8528E-07 0.6240E-06 0.2402E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000965 0.6174E-06 0.8445E-07 0.6202E-06 0.2386E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000966 0.6136E-06 0.8362E-07 0.6164E-06 0.2370E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000967 0.6099E-06 0.8280E-07 0.6126E-06 0.2354E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 309 0000968 0.6061E-06 0.8200E-07 0.6088E-06 0.2339E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000969 0.6024E-06 0.8119E-07 0.6051E-06 0.2323E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000970 0.5987E-06 0.8040E-07 0.6014E-06 0.2308E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000971 0.5951E-06 0.7962E-07 0.5977E-06 0.2292E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000972 0.5915E-06 0.7884E-07 0.5941E-06 0.2277E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000973 0.5879E-06 0.7807E-07 0.5904E-06 0.2262E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000974 0.5843E-06 0.7731E-07 0.5868E-06 0.2247E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000975 0.5807E-06 0.7655E-07 0.5833E-06 0.2232E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000976 0.5772E-06 0.7580E-07 0.5797E-06 0.2217E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000977 0.5737E-06 0.7506E-07 0.5762E-06 0.2202E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000978 0.5702E-06 0.7433E-07 0.5727E-06 0.2188E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000979 0.5667E-06 0.7360E-07 0.5692E-06 0.2173E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000980 0.5633E-06 0.7289E-07 0.5657E-06 0.2159E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000981 0.5599E-06 0.7217E-07 0.5623E-06 0.2145E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000982 0.5565E-06 0.7147E-07 0.5589E-06 0.2130E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000983 0.5531E-06 0.7077E-07 0.5555E-06 0.2116E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000984 0.5497E-06 0.7008E-07 0.5521E-06 0.2102E-08 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 315 0000985 0.5464E-06 0.6940E-07 0.5487E-06 0.2089E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000986 0.5431E-06 0.6872E-07 0.5454E-06 0.2075E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 315 0000987 0.5398E-06 0.6805E-07 0.5421E-06 0.2061E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000988 0.5365E-06 0.6738E-07 0.5388E-06 0.2048E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000989 0.5333E-06 0.6673E-07 0.5355E-06 0.2034E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000990 0.5301E-06 0.6607E-07 0.5323E-06 0.2021E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000991 0.5269E-06 0.6543E-07 0.5291E-06 0.2008E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000992 0.5237E-06 0.6479E-07 0.5259E-06 0.1995E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000993 0.5205E-06 0.6416E-07 0.5227E-06 0.1982E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000994 0.5174E-06 0.6353E-07 0.5195E-06 0.1969E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000995 0.5143E-06 0.6291E-07 0.5164E-06 0.1956E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000996 0.5112E-06 0.6230E-07 0.5133E-06 0.1943E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000997 0.5081E-06 0.6169E-07 0.5102E-06 0.1930E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000998 0.5050E-06 0.6109E-07 0.5071E-06 0.1918E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0000999 0.5020E-06 0.6049E-07 0.5041E-06 0.1905E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001000 0.4990E-06 0.5990E-07 0.5010E-06 0.1893E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001001 0.4960E-06 0.5932E-07 0.4980E-06 0.1881E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001002 0.4930E-06 0.5874E-07 0.4950E-06 0.1868E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001003 0.4900E-06 0.5817E-07 0.4920E-06 0.1856E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001004 0.4871E-06 0.5760E-07 0.4891E-06 0.1844E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001005 0.4842E-06 0.5704E-07 0.4861E-06 0.1832E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001006 0.4813E-06 0.5648E-07 0.4832E-06 0.1821E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001007 0.4784E-06 0.5593E-07 0.4803E-06 0.1809E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001008 0.4755E-06 0.5538E-07 0.4774E-06 0.1797E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001009 0.4727E-06 0.5484E-07 0.4746E-06 0.1786E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 316 0001010 0.4698E-06 0.5431E-07 0.4717E-06 0.1774E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001011 0.4670E-06 0.5378E-07 0.4689E-06 0.1763E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001012 0.4642E-06 0.5325E-07 0.4661E-06 0.1751E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001013 0.4614E-06 0.5273E-07 0.4633E-06 0.1740E-08 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 317 0001014 0.4587E-06 0.5222E-07 0.4605E-06 0.1729E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001015 0.4559E-06 0.5171E-07 0.4577E-06 0.1718E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001016 0.4532E-06 0.5121E-07 0.4550E-06 0.1707E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001017 0.4505E-06 0.5071E-07 0.4523E-06 0.1696E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001018 0.4478E-06 0.5021E-07 0.4496E-06 0.1685E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001019 0.4452E-06 0.4972E-07 0.4469E-06 0.1674E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 317 0001020 0.4425E-06 0.4924E-07 0.4442E-06 0.1664E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001021 0.4399E-06 0.4876E-07 0.4416E-06 0.1653E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001022 0.4372E-06 0.4828E-07 0.4389E-06 0.1642E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001023 0.4346E-06 0.4781E-07 0.4363E-06 0.1632E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001024 0.4321E-06 0.4735E-07 0.4337E-06 0.1622E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001025 0.4295E-06 0.4689E-07 0.4311E-06 0.1611E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001026 0.4269E-06 0.4643E-07 0.4286E-06 0.1601E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001027 0.4244E-06 0.4598E-07 0.4260E-06 0.1591E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 318 0001028 0.4219E-06 0.4553E-07 0.4235E-06 0.1581E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001029 0.4194E-06 0.4508E-07 0.4210E-06 0.1571E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001030 0.4169E-06 0.4465E-07 0.4184E-06 0.1561E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001031 0.4144E-06 0.4421E-07 0.4160E-06 0.1551E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001032 0.4119E-06 0.4378E-07 0.4135E-06 0.1541E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001033 0.4095E-06 0.4335E-07 0.4110E-06 0.1531E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 319 0001034 0.4071E-06 0.4293E-07 0.4086E-06 0.1522E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001035 0.4047E-06 0.4251E-07 0.4062E-06 0.1512E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001036 0.4023E-06 0.4210E-07 0.4037E-06 0.1502E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001037 0.3999E-06 0.4169E-07 0.4013E-06 0.1493E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001038 0.3975E-06 0.4128E-07 0.3990E-06 0.1484E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001039 0.3951E-06 0.4088E-07 0.3966E-06 0.1474E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001040 0.3928E-06 0.4048E-07 0.3942E-06 0.1465E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001041 0.3905E-06 0.4009E-07 0.3919E-06 0.1456E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001042 0.3882E-06 0.3970E-07 0.3896E-06 0.1447E-08 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 320 0001043 0.3859E-06 0.3931E-07 0.3873E-06 0.1437E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001044 0.3836E-06 0.3893E-07 0.3850E-06 0.1428E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001045 0.3813E-06 0.3855E-07 0.3827E-06 0.1420E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001046 0.3791E-06 0.3817E-07 0.3804E-06 0.1411E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001047 0.3768E-06 0.3780E-07 0.3782E-06 0.1402E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001048 0.3746E-06 0.3744E-07 0.3760E-06 0.1393E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 320 0001049 0.3724E-06 0.3707E-07 0.3737E-06 0.1384E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001050 0.3702E-06 0.3671E-07 0.3715E-06 0.1376E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001051 0.3680E-06 0.3635E-07 0.3693E-06 0.1367E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001052 0.3659E-06 0.3600E-07 0.3672E-06 0.1359E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001053 0.3637E-06 0.3565E-07 0.3650E-06 0.1350E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001054 0.3616E-06 0.3530E-07 0.3628E-06 0.1342E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001055 0.3594E-06 0.3496E-07 0.3607E-06 0.1333E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001056 0.3573E-06 0.3462E-07 0.3586E-06 0.1325E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001057 0.3552E-06 0.3428E-07 0.3565E-06 0.1317E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001058 0.3531E-06 0.3395E-07 0.3544E-06 0.1309E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001059 0.3511E-06 0.3362E-07 0.3523E-06 0.1300E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001060 0.3490E-06 0.3329E-07 0.3502E-06 0.1292E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001061 0.3469E-06 0.3297E-07 0.3481E-06 0.1284E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001062 0.3449E-06 0.3265E-07 0.3461E-06 0.1276E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001063 0.3429E-06 0.3233E-07 0.3440E-06 0.1268E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001064 0.3409E-06 0.3202E-07 0.3420E-06 0.1261E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 321 0001065 0.3389E-06 0.3170E-07 0.3400E-06 0.1253E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001066 0.3369E-06 0.3140E-07 0.3380E-06 0.1245E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001067 0.3349E-06 0.3109E-07 0.3360E-06 0.1237E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001068 0.3329E-06 0.3079E-07 0.3340E-06 0.1230E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001069 0.3310E-06 0.3049E-07 0.3321E-06 0.1222E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001070 0.3290E-06 0.3019E-07 0.3301E-06 0.1215E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001071 0.3271E-06 0.2990E-07 0.3282E-06 
0.1207E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001072 0.3252E-06 0.2961E-07 0.3263E-06 0.1200E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001073 0.3233E-06 0.2932E-07 0.3244E-06 0.1192E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001074 0.3214E-06 0.2904E-07 0.3225E-06 0.1185E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001075 0.3195E-06 0.2876E-07 0.3206E-06 0.1178E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001076 0.3176E-06 0.2848E-07 0.3187E-06 0.1170E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001077 0.3158E-06 0.2820E-07 0.3168E-06 0.1163E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001078 0.3139E-06 0.2793E-07 0.3150E-06 0.1156E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001079 0.3121E-06 0.2765E-07 0.3131E-06 0.1149E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001080 0.3103E-06 0.2739E-07 0.3113E-06 0.1142E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001081 0.3085E-06 0.2712E-07 0.3095E-06 0.1135E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001082 0.3067E-06 0.2686E-07 0.3076E-06 0.1128E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001083 0.3049E-06 0.2660E-07 0.3058E-06 0.1121E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001084 0.3031E-06 0.2634E-07 0.3041E-06 0.1114E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001085 0.3013E-06 0.2608E-07 0.3023E-06 0.1107E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001086 0.2996E-06 0.2583E-07 0.3005E-06 0.1101E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001087 0.2978E-06 0.2558E-07 0.2988E-06 0.1094E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001088 0.2961E-06 0.2533E-07 0.2970E-06 0.1087E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001089 0.2944E-06 0.2508E-07 0.2953E-06 0.1081E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001090 0.2926E-06 0.2484E-07 0.2936E-06 0.1074E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001091 0.2909E-06 0.2460E-07 0.2918E-06 0.1067E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001092 0.2892E-06 0.2436E-07 0.2901E-06 0.1061E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001093 0.2875E-06 0.2413E-07 0.2884E-06 0.1055E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001094 0.2859E-06 0.2389E-07 0.2868E-06 0.1048E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001095 0.2842E-06 0.2366E-07 0.2851E-06 0.1042E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 322 0001096 0.2826E-06 0.2343E-07 0.2834E-06 0.1035E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001097 0.2809E-06 0.2320E-07 0.2818E-06 0.1029E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001098 0.2793E-06 0.2298E-07 0.2801E-06 0.1023E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001099 0.2776E-06 0.2276E-07 0.2785E-06 0.1017E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001100 0.2760E-06 
0.2254E-07 0.2769E-06 0.1010E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001101 0.2744E-06 0.2232E-07 0.2753E-06 0.1004E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001102 0.2728E-06 0.2210E-07 0.2737E-06 0.9983E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001103 0.2712E-06 0.2189E-07 0.2721E-06 0.9922E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001104 0.2697E-06 0.2167E-07 0.2705E-06 0.9862E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001105 0.2681E-06 0.2146E-07 0.2689E-06 0.9802E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001106 0.2665E-06 0.2126E-07 0.2673E-06 0.9743E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001107 0.2650E-06 0.2105E-07 0.2658E-06 0.9684E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001108 0.2635E-06 0.2085E-07 0.2642E-06 0.9626E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001109 0.2619E-06 0.2065E-07 0.2627E-06 0.9568E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001110 0.2604E-06 0.2045E-07 0.2612E-06 0.9510E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001111 0.2589E-06 0.2025E-07 0.2597E-06 0.9452E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001112 0.2574E-06 0.2005E-07 0.2581E-06 0.9395E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001113 0.2559E-06 0.1986E-07 0.2566E-06 0.9339E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001114 0.2544E-06 0.1967E-07 0.2552E-06 0.9283E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001115 0.2529E-06 0.1948E-07 0.2537E-06 0.9227E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001116 0.2515E-06 0.1929E-07 0.2522E-06 0.9171E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001117 0.2500E-06 0.1910E-07 0.2507E-06 0.9116E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001118 0.2486E-06 0.1892E-07 0.2493E-06 0.9061E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001119 0.2471E-06 0.1873E-07 0.2478E-06 0.9007E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001120 0.2457E-06 0.1855E-07 0.2464E-06 0.8952E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001121 0.2443E-06 0.1837E-07 0.2450E-06 0.8899E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001122 0.2429E-06 0.1819E-07 0.2435E-06 0.8845E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001123 0.2415E-06 0.1802E-07 0.2421E-06 0.8792E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001124 0.2401E-06 0.1784E-07 0.2407E-06 0.8739E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001125 0.2387E-06 0.1767E-07 0.2393E-06 0.8687E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001126 0.2373E-06 0.1750E-07 0.2379E-06 0.8635E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001127 0.2359E-06 0.1733E-07 0.2366E-06 0.8583E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001128 0.2346E-06 0.1716E-07 0.2352E-06 0.8532E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 
0001129 0.2332E-06 0.1700E-07 0.2338E-06 0.8481E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001130 0.2318E-06 0.1683E-07 0.2325E-06 0.8430E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001131 0.2305E-06 0.1667E-07 0.2311E-06 0.8380E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001132 0.2292E-06 0.1651E-07 0.2298E-06 0.8330E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001133 0.2278E-06 0.1635E-07 0.2285E-06 0.8280E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001134 0.2265E-06 0.1619E-07 0.2271E-06 0.8230E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001135 0.2252E-06 0.1604E-07 0.2258E-06 0.8181E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001136 0.2239E-06 0.1588E-07 0.2245E-06 0.8132E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001137 0.2226E-06 0.1573E-07 0.2232E-06 0.8084E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001138 0.2213E-06 0.1558E-07 0.2219E-06 0.8036E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001139 0.2201E-06 0.1543E-07 0.2206E-06 0.7988E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001140 0.2188E-06 0.1528E-07 0.2194E-06 0.7940E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001141 0.2175E-06 0.1513E-07 0.2181E-06 0.7893E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001142 0.2163E-06 0.1499E-07 0.2168E-06 0.7846E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001143 0.2150E-06 0.1484E-07 0.2156E-06 0.7799E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001144 0.2138E-06 0.1470E-07 0.2143E-06 0.7753E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001145 0.2126E-06 0.1456E-07 0.2131E-06 0.7707E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001146 0.2113E-06 0.1442E-07 0.2119E-06 0.7661E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001147 0.2101E-06 0.1428E-07 0.2107E-06 0.7615E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001148 0.2089E-06 0.1414E-07 0.2094E-06 0.7570E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001149 0.2077E-06 0.1400E-07 0.2082E-06 0.7525E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001150 0.2065E-06 0.1387E-07 0.2070E-06 0.7481E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001151 0.2053E-06 0.1374E-07 0.2058E-06 0.7436E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001152 0.2041E-06 0.1360E-07 0.2046E-06 0.7392E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001153 0.2030E-06 0.1347E-07 0.2035E-06 0.7348E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 323 0001154 0.2018E-06 0.1334E-07 0.2023E-06 0.7305E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001155 0.2006E-06 0.1321E-07 0.2011E-06 0.7261E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001156 0.1995E-06 0.1309E-07 0.2000E-06 0.7218E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001157 0.1983E-06 0.1296E-07 0.1988E-06 0.7176E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 324 0001158 0.1972E-06 0.1284E-07 0.1977E-06 0.7133E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001159 0.1960E-06 0.1271E-07 0.1965E-06 0.7091E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001160 0.1949E-06 0.1259E-07 0.1954E-06 0.7049E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001161 0.1938E-06 0.1247E-07 0.1943E-06 0.7007E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001162 0.1927E-06 0.1235E-07 0.1931E-06 0.6966E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001163 0.1916E-06 0.1223E-07 0.1920E-06 0.6925E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001164 0.1905E-06 0.1211E-07 0.1909E-06 0.6884E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001165 0.1894E-06 0.1200E-07 0.1898E-06 0.6843E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001166 0.1883E-06 0.1188E-07 0.1887E-06 0.6803E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001167 0.1872E-06 0.1177E-07 0.1877E-06 0.6763E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001168 0.1861E-06 0.1166E-07 0.1866E-06 0.6723E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001169 0.1851E-06 0.1154E-07 0.1855E-06 0.6683E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001170 0.1840E-06 0.1143E-07 0.1844E-06 0.6644E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001171 0.1829E-06 0.1132E-07 0.1834E-06 0.6605E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001172 0.1819E-06 0.1121E-07 0.1823E-06 0.6566E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001173 0.1808E-06 0.1111E-07 0.1813E-06 0.6527E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001174 0.1798E-06 0.1100E-07 0.1802E-06 0.6489E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001175 0.1788E-06 0.1089E-07 0.1792E-06 0.6450E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001176 0.1777E-06 0.1079E-07 0.1782E-06 0.6412E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001177 0.1767E-06 0.1069E-07 0.1771E-06 0.6375E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001178 0.1757E-06 0.1058E-07 0.1761E-06 0.6337E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001179 0.1747E-06 0.1048E-07 0.1751E-06 0.6300E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001180 0.1737E-06 0.1038E-07 0.1741E-06 0.6263E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001181 0.1727E-06 0.1028E-07 0.1731E-06 0.6226E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001182 0.1717E-06 0.1018E-07 0.1721E-06 0.6190E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001183 0.1707E-06 0.1009E-07 0.1711E-06 0.6153E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001184 0.1697E-06 0.9991E-08 0.1701E-06 0.6117E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001185 0.1688E-06 0.9895E-08 0.1692E-06 0.6081E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001186 0.1678E-06 0.9801E-08 0.1682E-06 0.6046E-09 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 324 0001187 0.1668E-06 0.9707E-08 0.1672E-06 0.6010E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001188 0.1659E-06 0.9614E-08 0.1663E-06 0.5975E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001189 0.1649E-06 0.9522E-08 0.1653E-06 0.5940E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001190 0.1640E-06 0.9431E-08 0.1644E-06 0.5905E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001191 0.1630E-06 0.9341E-08 0.1634E-06 0.5870E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001192 0.1621E-06 0.9252E-08 0.1625E-06 0.5836E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001193 0.1612E-06 0.9163E-08 0.1615E-06 0.5802E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001194 0.1603E-06 0.9076E-08 0.1606E-06 0.5768E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001195 0.1593E-06 0.8989E-08 0.1597E-06 0.5734E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001196 0.1584E-06 0.8903E-08 0.1588E-06 0.5701E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001197 0.1575E-06 0.8818E-08 0.1579E-06 0.5667E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001198 0.1566E-06 0.8734E-08 0.1570E-06 0.5634E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001199 0.1557E-06 0.8651E-08 0.1561E-06 0.5601E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001200 0.1548E-06 0.8568E-08 0.1552E-06 0.5568E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001201 0.1539E-06 0.8487E-08 0.1543E-06 0.5536E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001202 0.1531E-06 0.8406E-08 0.1534E-06 0.5504E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001203 0.1522E-06 0.8326E-08 0.1525E-06 0.5471E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001204 0.1513E-06 0.8246E-08 0.1516E-06 0.5439E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001205 0.1505E-06 0.8168E-08 0.1508E-06 0.5408E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001206 0.1496E-06 0.8090E-08 0.1499E-06 0.5376E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001207 0.1487E-06 0.8013E-08 0.1490E-06 0.5345E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001208 0.1479E-06 0.7937E-08 0.1482E-06 0.5314E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001209 0.1470E-06 0.7861E-08 0.1473E-06 0.5283E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001210 0.1462E-06 0.7786E-08 0.1465E-06 0.5252E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001211 0.1454E-06 0.7712E-08 0.1457E-06 0.5221E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001212 0.1445E-06 0.7639E-08 0.1448E-06 0.5191E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001213 0.1437E-06 0.7566E-08 0.1440E-06 0.5161E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001214 0.1429E-06 0.7494E-08 0.1432E-06 0.5131E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 324 0001215 0.1421E-06 0.7423E-08 0.1424E-06 0.5101E-09 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 324
 0001216  0.1413E-06  0.7353E-08  0.1415E-06  0.5071E-09  0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 324
 0001217  0.1404E-06  0.7283E-08  0.1407E-06  0.5041E-09  0.0000E+00
[... outer iterations 0001218 through 0001681 omitted: the residual columns decrease smoothly from about 1.4E-07 to 1.0E-08; the reported linear solves converge with CONVERGED_RTOL after roughly 314-350 iterations, and from about outer iteration 0001519 onward occasional solves end with DIVERGED_INDEFINITE_PC (with iteration counts spiking to several hundred) or with DIVERGED_ITS at the 10000-iteration limit ...]
Linear solve converged due to CONVERGED_RTOL iterations 376
 0001682  0.9978E-08  0.1616E-09  0.9981E-08  0.3536E-10  0.0000E+00
      TIME FOR CALCULATION:  0.3764E+05
 L2-NORM ERROR U  VELOCITY    1.073682047939190E-005
 L2-NORM ERROR V  VELOCITY    1.133485976464090E-005
 L2-NORM ERROR W  VELOCITY    1.095441215075716E-005
 L2-NORM ERROR ABS. VELOCITY  1.412181697151783E-005
 L2-NORM ERROR PRESSURE       1.307959793303857E-003
  *** CALCULATION FINISHED - SEE RESULTS ***
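The converged/diverged messages above come from the -pressure_ksp_converged_reason option listed in the option table at the end of this log. For reference, the same information can also be queried programmatically after each solve; the following is a minimal C sketch using standard PETSc calls (SolveAndCheck is an illustrative helper name, not part of caffa3d.MB):

#include <petscksp.h>

/* Minimal sketch (not from caffa3d.MB): after a KSPSolve, query why the
   solve stopped, mirroring the "-pressure_ksp_converged_reason" lines above. */
PetscErrorCode SolveAndCheck(KSP ksp, Vec b, Vec x)
{
  KSPConvergedReason reason;
  PetscInt           its;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);
  if (reason == KSP_DIVERGED_INDEFINITE_PC) {
    /* the preconditioner was detected to be indefinite for this system */
    ierr = PetscPrintf(PETSC_COMM_WORLD, "indefinite PC after %D iterations\n", its);CHKERRQ(ierr);
  } else if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "solve diverged (reason %d) after %D iterations\n", (int)reason, its);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

With a check like this an application could, for example, rebuild the preconditioner or fall back to a more robust option when KSP_DIVERGED_INDEFINITE_PC is reported.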
 ************************************************************************************************************************
 ***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
 ************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./caffa3d.MB.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0093 with 1 processor, by gu08vomo Wed Feb 11 10:36:22 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015

                         Max       Max/Min        Avg      Total
Time (sec):           3.767e+04      1.00000   3.767e+04
Objects:              2.365e+04      1.00000   2.365e+04
Flops:                4.492e+13      1.00000   4.492e+13  4.492e+13
Flops/sec:            1.192e+09      1.00000   1.192e+09  1.192e+09
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 4.5374e+03  12.0%  6.5910e+06   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 1:        MOMENTUM: 1.6045e+03   4.3%  1.5283e+12   3.4%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 2:        PRESCORR: 3.1529e+04  83.7%  4.3392e+13  96.6%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

ThreadCommRunKer    8411 1.0 5.5954e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNorm                1 1.0 2.2788e-01 1.0 4.39e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 67  0  0  0    19
VecScale               1 1.0 1.8089e-03 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 33  0  0  0  1215
VecSet             67286 1.0 6.2042e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecScatterBegin    75701 1.0 2.0827e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize           1 1.0 1.8089e-03 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 33  0  0  0  1215
MatAssemblyBegin    3364 1.0 9.1219e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd      3364 1.0 6.2114e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0

--- Event Stage 1: MOMENTUM

VecDot             10092 1.0 1.8554e+01 1.0 4.43e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  3  0  0  0  2390
VecDotNorm2         5046 1.0 1.2589e+01 1.0 4.43e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  3  0  0  0  3522
VecNorm            20184 1.0 2.3761e+01 1.0 8.87e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  6  0  0  0  3733
VecCopy            15138 1.0 3.2298e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
VecSet             10092 1.0 9.2871e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecAXPY            10092 1.0 2.5340e+01 1.0 4.43e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  3  0  0  0  1750
VecAXPBYCZ         10092 1.0 3.6799e+01 1.0 8.87e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  6  0  0  0  2410
VecWAXPY           10092 1.0 3.6827e+01 1.0 4.43e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   2  3  0  0  0  1204
MatMult            20184 1.0 3.5849e+02 1.0 5.48e+11 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0  22 36  0  0  0  1530
MatSolve           20184 1.0 5.7618e+02 1.0 5.48e+11 1.0 0.0e+00 0.0e+00 0.0e+00  2  1  0  0  0  36 36  0  0  0   952
MatLUFactorNum      1682 1.0 1.7723e+02 1.0 7.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  11  5  0  0  0   434
MatILUFactorSym     1682 1.0 1.4229e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   9  0  0  0  0     0
MatGetRowIJ         1682 1.0 3.4428e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering      1682 1.0 1.5008e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
KSPSetUp            5046 1.0 3.8792e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve            5046 1.0 1.3773e+03 1.0 1.35e+12 1.0 0.0e+00 0.0e+00 0.0e+00  4  3  0  0  0  86 88  0  0  0   978
PCSetUp             1682 1.0 3.3669e+02 1.0 7.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0  21  5  0  0  0   228
PCApply            20184 1.0 5.7620e+02 1.0 5.48e+11 1.0 0.0e+00 0.0e+00 0.0e+00  2  1  0  0  0  36 36  0  0  0   952
1.9167e+03 1.0 4.24e+12 1.0 0.0e+00 0.0e+00 0.0e+00 5 9 0 0 0 6 10 0 0 0 2213 VecNorm 486062 1.0 6.4723e+02 1.0 2.14e+12 1.0 0.0e+00 0.0e+00 0.0e+00 2 5 0 0 0 2 5 0 0 0 3300 VecCopy 5046 1.0 1.0416e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 1682 1.0 1.5487e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 967078 1.0 2.5711e+03 1.0 4.25e+12 1.0 0.0e+00 0.0e+00 0.0e+00 7 9 0 0 0 8 10 0 0 0 1653 VecAYPX 481016 1.0 1.2653e+03 1.0 2.11e+12 1.0 0.0e+00 0.0e+00 0.0e+00 3 5 0 0 0 4 5 0 0 0 1670 VecMAXPY 484380 1.0 1.2692e+03 1.0 2.13e+12 1.0 0.0e+00 0.0e+00 0.0e+00 3 5 0 0 0 4 5 0 0 0 1677 MatMult 484380 1.0 8.6068e+03 1.0 1.32e+13 1.0 0.0e+00 0.0e+00 0.0e+00 23 29 0 0 0 27 30 0 0 0 1529 MatSolve 484380 1.0 1.3831e+04 1.0 1.32e+13 1.0 0.0e+00 0.0e+00 0.0e+00 37 29 0 0 0 44 30 0 0 0 951 MatLUFactorNum 1682 1.0 1.7705e+02 1.0 7.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 434 MatILUFactorSym 1682 1.0 1.4204e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 1682 1.0 3.3236e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1682 1.0 1.4995e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 2 1.0 1.4806e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSetUp 1682 1.0 1.9146e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1682 1.0 3.1490e+04 1.0 4.33e+13 1.0 0.0e+00 0.0e+00 0.0e+00 84 96 0 0 0 100100 0 0 0 1376 PCSetUp 1682 1.0 3.3613e+02 1.0 7.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 229 PCApply 484380 1.0 1.3832e+04 1.0 1.32e+13 1.0 0.0e+00 0.0e+00 0.0e+00 37 29 0 0 0 44 30 0 0 0 951 --- Event Stage 3: Unknown ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. 
--- Event Stage 0: Main Stage Vector 67 78 791040816 0 Vector Scatter 2 2 1304 0 Index Set 4 10 35159920 0 IS L to G Mapping 2 2 17577192 0 Matrix 1 3 634001996 0 Matrix Null Space 0 1 620 0 Krylov Solver 0 2 2408 0 Preconditioner 0 2 2032 0 --- Event Stage 1: MOMENTUM Vector 10108 10092 88704078048 0 Index Set 5046 5043 29549250056 0 Matrix 1682 1681 355252454000 0 Matrix Null Space 1 0 0 0 Krylov Solver 2 0 0 0 Preconditioner 2 0 0 0 --- Event Stage 2: PRESCORR Vector 6 0 0 0 Index Set 5046 5043 29549250056 0 Matrix 1682 1681 355252454000 0 Viewer 1 0 0 0 --- Event Stage 3: Unknown ======================================================================================================================== Average time to get PetscTime(): 0 #PETSc Option Table entries: -log_summary -options_left -pressure_ksp_converged_reason -ressure_pc_type icc #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml ----------------------------------------- Libraries compiled on Sun Feb 1 16:09:22 2015 on hla0003 Machine characteristics: Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3 Using PETSc arch: arch-openmpi-opt-intel-hlr-ext ----------------------------------------- Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc -fPIC -wd1572 -O3 -xHost ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 -fPIC -O3 -xHost ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include ----------------------------------------- Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 
-Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 
-L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl ----------------------------------------- #PETSc Option Table entries: -log_summary -options_left -pressure_ksp_converged_reason -ressure_pc_type icc #End of PETSc Option Table entries There is one unused database option. It is: Option left: name:-ressure_pc_type value: icc From gideon.simpson at gmail.com Tue Feb 17 09:41:43 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Tue, 17 Feb 2015 10:41:43 -0500 Subject: [petsc-users] parallel interpolation? In-Reply-To: References: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> <57071C37-3175-4A02-8A6C-AC8B9D240651@gmail.com> Message-ID: <4533A2BD-DE99-4789-9BE6-4865E1135CFC@gmail.com> This warrants a deeper description. I was thinking about trying to use petsc to solve a 1D PDE problem using the equidistribution principle for adaptive meshing. Briefly, the idea is that given a function u(x), where x is the physical coordinate, a monitor function, w[u(x)]>0, is introduced such that y(x) = \int_{0}^x w[u(s)] ds /\int_{0}^{xmax} w[u(s)] ds where C is a constant, and y is the computational coordinate in [0,1]. essentially, you want to pick the physical mesh points, x_i such that \int_{x_i}^{x_{i+1}} w[u(s)] ds = constant then the computational mesh points are uniformly spaced. Solving this can be posed as a nonlinear elliptic equation, in general dimension, though it simplifies further in 1D. In a paper by Gavish and Ditkowski, for the 1D problem, they came up with the idea that you could do the following: Given a starting guess on where the physical coordinates should be, let F_i = \int_0^{x_i} w[u(s)]ds/\int_0^{x_N} w[u(s)]ds Then 0= F_0 < F_1 On Feb 17, 2015, at 10:11 AM, Matthew Knepley wrote: > > On Tue, Feb 17, 2015 at 8:15 AM, Gideon Simpson > wrote: > I?m gathering from your suggestions that I would need, a priori, knowledge of how many ghost points I would need, is that right? > > We have to be more precise about a priori. You can certainly create a VecScatter on the fly every time > if your communication pattern is changing. However, how will you know what needs to be communicated. > > Matt > > -gideon > >> On Feb 17, 2015, at 9:10 AM, Matthew Knepley > wrote: >> >> On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson > wrote: >> Suppose I have data in Vec x and Vec y, and I want to interpolate this onto Vec xx, storing the values in Vec yy. All vectors have the same layout. The problem is that, for example, some of the values in xx on processor 0 may need the values of x and y on processor 1, and so on. Aside from just using sequential vectors, so that everything is local, is there a reasonable way to make this computation? 
>> >> At the most basic linear algebra level, you would construct a VecScatter which mapped the pieces you need from other processes into a local vector along with the local portion, and you would use that to calculate values, which you then put back into your owned portion of a global vector. Thus local vectors have halos and global vectors do not. >> >> If you halo regions (values you need from other processes) have a common topology, then we have simpler >> support that will make the VecScatter for you. For example, if your values lie on a Cartesian grid and you >> just need neighbors within distance k, you can use a DMDA to express this and automatically make the >> VecScatter. Likewise, if you values lie on an unstructured mesh and you need a distance k adjacency, >> DMPlex can create the scatter for you. >> >> If you are creating the VecScatter yourself, it might be easier to use the new PetscSF instead since it only needs one-sided information, and performs the same job. This is what DMPlex uses to do the communication. >> >> Thanks, >> >> Matt >> >> -gideon >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Tue Feb 17 20:12:56 2015 From: jychang48 at gmail.com (Justin Chang) Date: Tue, 17 Feb 2015 20:12:56 -0600 Subject: [petsc-users] PAPI availability Message-ID: Hi all, I want to document some profiling metrics like cache hits/misses using PAPI. Does PETSc automatically come with PAPI, or is it something I have to declare when configuring PETSc? Thanks, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 18 00:33:06 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 06:33:06 +0000 Subject: [petsc-users] Question concerning ilu and bcgs Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -------------- next part -------------- An HTML attachment was scrubbed... 
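A minimal sketch of the export step used for the MATLAB comparison above, assuming A and b are the assembled Mat and Vec; the function name DumpSystem and the file name "Ab.petsc" are arbitrary choices, and the MATLAB side can load the file with the PetscBinaryRead.m helper shipped with PETSc.

  #include <petscmat.h>
  #include <petscviewer.h>

  /* Write the assembled system to a PETSc binary file so the MATLAB
     ilu/bicgstab experiment runs on exactly the data PETSc sees. */
  PetscErrorCode DumpSystem(Mat A, Vec b)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscViewerBinaryOpen(PetscObjectComm((PetscObject)A),"Ab.petsc",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
    ierr = MatView(A,viewer);CHKERRQ(ierr);   /* matrix first ...             */
    ierr = VecView(b,viewer);CHKERRQ(ierr);   /* ... then the right-hand side */
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

In MATLAB, something like [A,b] = PetscBinaryRead('Ab.petsc') then reproduces the ilu/bicgstab test on the identical matrix and right-hand side.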
URL: From rupp at iue.tuwien.ac.at Wed Feb 18 02:30:48 2015 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Wed, 18 Feb 2015 09:30:48 +0100 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <54E44DB8.8020202@iue.tuwien.ac.at> Hi, > I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, > depending on the number of cores. > > I try to solve this problem by pc_type ilu and ksp_type bcgs, it does > not converge. The options I specify are: > > -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 > -pc_hypre_pilut_tol 1e-3 -ksp_type b\ > > cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short > -ksp_converged_reason > > > The first a few lines of the output are: > > 0 KSP Residual norm 1404.62 > > 1 KSP Residual norm 88.9068 > > 2 KSP Residual norm 64.73 > > 3 KSP Residual norm 71.0224 > > 4 KSP Residual norm 69.5044 > > 5 KSP Residual norm 455.458 > > 6 KSP Residual norm 174.876 > > 7 KSP Residual norm 183.031 > > 8 KSP Residual norm 650.675 > > 9 KSP Residual norm 79.2441 > > 10 KSP Residual norm 84.1985 > > > This clearly indicates non-convergence. please send the full output. Where does your matrix A come from (i.e. which problem are you trying to solve)? Is this a serial run? ILU may fail depending on the particular choice of tolerance, pivoting, etc., so we need more information to suggest better options. Does the default GMRES+ILU0 work? Best regards, Karli From knepley at gmail.com Wed Feb 18 05:30:02 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 05:30:02 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: > I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, > depending on the number of cores. > > I try to solve this problem by pc_type ilu and ksp_type bcgs, it does > not converge. The options I specify are: > > -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 > -pc_hypre_pilut_tol 1e-3 -ksp_type b\ > > cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short > -ksp_converged_reason > 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt > The first a few lines of the output are: > > 0 KSP Residual norm 1404.62 > > 1 KSP Residual norm 88.9068 > > 2 KSP Residual norm 64.73 > > 3 KSP Residual norm 71.0224 > > 4 KSP Residual norm 69.5044 > > 5 KSP Residual norm 455.458 > > 6 KSP Residual norm 174.876 > > 7 KSP Residual norm 183.031 > > 8 KSP Residual norm 650.675 > > 9 KSP Residual norm 79.2441 > > 10 KSP Residual norm 84.1985 > > > This clearly indicates non-convergence. However, I output the sparse > matrix A and vector b to MATLAB, and run the following command: > > [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); > > [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); > > > And it converges in MATLAB, with flag fl1=0, relative residue > rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out > what's wrong. 
> > Best, > > Hui > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 05:42:22 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 05:42:22 -0600 Subject: [petsc-users] PAPI availability In-Reply-To: References: Message-ID: On Tue, Feb 17, 2015 at 8:12 PM, Justin Chang wrote: > Hi all, > > I want to document some profiling metrics like cache hits/misses using > PAPI. Does PETSc automatically come with PAPI, or is it something I have to > declare when configuring PETSc? > You can turn on PAPI by configuring using --with-papi, just like other packages (might need the --with-papi-dir too). Right now, we only monitor flops. Feel free to add other events. Thanks, Matt > Thanks, > Justin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 09:16:17 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 09:16:17 -0600 Subject: [petsc-users] parallel interpolation? In-Reply-To: <4533A2BD-DE99-4789-9BE6-4865E1135CFC@gmail.com> References: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> <57071C37-3175-4A02-8A6C-AC8B9D240651@gmail.com> <4533A2BD-DE99-4789-9BE6-4865E1135CFC@gmail.com> Message-ID: On Tue, Feb 17, 2015 at 9:41 AM, Gideon Simpson wrote: > This warrants a deeper description. I was thinking about trying to use > petsc to solve a 1D PDE problem using the equidistribution principle for > adaptive meshing. Briefly, the idea is that given a function u(x), where x > is the physical coordinate, a monitor function, w[u(x)]>0, is introduced > such that > > y(x) = \int_{0}^x w[u(s)] ds /\int_{0}^{xmax} w[u(s)] ds > > where C is a constant, and y is the computational coordinate in [0,1]. > essentially, you want to pick the physical mesh points, x_i such that > > \int_{x_i}^{x_{i+1}} w[u(s)] ds = constant > > then the computational mesh points are uniformly spaced. Solving this can > be posed as a nonlinear elliptic equation, in general dimension, though it > simplifies further in 1D. In a paper by Gavish and Ditkowski, for the 1D > problem, they came up with the idea that you could do the following: > > Given a starting guess on where the physical coordinates should be, let > > F_i = \int_0^{x_i} w[u(s)]ds/\int_0^{x_N} w[u(s)]ds > > Then 0 = F_0 < F_1 < ... < F_N = 1. Rather than solving the nonlinear system for the x_i such that the successive differences of F_{i+1} - F_i > are uniformly spaced, they propose to do: > > interpolate (F_i, x_i) onto the uniform mesh for y, and the interpolated > x_i's should approximately satisfy the equidistribution principle. This > can be iterated, to improve the equidistribution. > > Doing the quadrature with distributed vectors is not too hard. The > question then becomes how to do the interpolation in parallel. As I was > suggesting in the earlier example, if y_i = i * h, is the uniform mesh, it > may be that y_i could be on proc 0, but the values of {F_j} that this lies > between are on proc 1. There is monotonicity at work here, but I'm not > sure I can really say, when defining ghost points, how things will spread > out. > I think I am slowly understanding the problem.
Tell me where I go wrong. You have a set of (x, y) points (you label them (F_i, x_i)), which you want to interpolate, and then evaluate on a regular grid. The interpolation is not specified, so to be concrete lets say that you choose cubic splines, which produces a tridiagonal system, so there is only communication between neighbors. It seems like the right thing to do is partition the domain in parallel such that y is evenly divided. Then the communication pattern for your actual PDE is the same as for the interpolation system, although the interpolation can be unbalanced. I don't think this will matter compared to the PDE system. If it does, then you can redistribute, interpolate, and distribute back. Does this make sense? Thanks, Matt > > On Feb 17, 2015, at 10:11 AM, Matthew Knepley wrote: > > On Tue, Feb 17, 2015 at 8:15 AM, Gideon Simpson > wrote: > >> I?m gathering from your suggestions that I would need, a priori, >> knowledge of how many ghost points I would need, is that right? >> > > We have to be more precise about a priori. You can certainly create a > VecScatter on the fly every time > if your communication pattern is changing. However, how will you know what > needs to be communicated. > > Matt > > >> -gideon >> >> On Feb 17, 2015, at 9:10 AM, Matthew Knepley wrote: >> >> On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson > > wrote: >> >>> Suppose I have data in Vec x and Vec y, and I want to interpolate this >>> onto Vec xx, storing the values in Vec yy. All vectors have the same >>> layout. The problem is that, for example, some of the values in xx on >>> processor 0 may need the values of x and y on processor 1, and so on. >>> Aside from just using sequential vectors, so that everything is local, is >>> there a reasonable way to make this computation? >>> >> >> At the most basic linear algebra level, you would construct a VecScatter >> which mapped the pieces you need from other processes into a local vector >> along with the local portion, and you would use that to calculate values, >> which you then put back into your owned portion of a global vector. Thus >> local vectors have halos and global vectors do not. >> >> If you halo regions (values you need from other processes) have a common >> topology, then we have simpler >> support that will make the VecScatter for you. For example, if your >> values lie on a Cartesian grid and you >> just need neighbors within distance k, you can use a DMDA to express this >> and automatically make the >> VecScatter. Likewise, if you values lie on an unstructured mesh and you >> need a distance k adjacency, >> DMPlex can create the scatter for you. >> >> If you are creating the VecScatter yourself, it might be easier to use >> the new PetscSF instead since it only needs one-sided information, and >> performs the same job. This is what DMPlex uses to do the communication. >> >> Thanks, >> >> Matt >> >> >>> -gideon >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Wed Feb 18 09:30:55 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Wed, 18 Feb 2015 10:30:55 -0500 Subject: [petsc-users] parallel interpolation? In-Reply-To: References: <260DC590-305E-4205-BEF4-9F2482A93E5F@gmail.com> <57071C37-3175-4A02-8A6C-AC8B9D240651@gmail.com> <4533A2BD-DE99-4789-9BE6-4865E1135CFC@gmail.com> Message-ID: <5B5D8F97-63B4-4793-95CF-D268E77C6537@gmail.com> I think I see what you?re saying, I?m just not used to thinking of interpolation methods in terms of linear systems (too much MATLAB for my own good). I had been preoccupied with trying to do this as a local + ghost points, matrix free computation. Thanks Matt. > On Feb 18, 2015, at 10:16 AM, Matthew Knepley wrote: > > On Tue, Feb 17, 2015 at 9:41 AM, Gideon Simpson > wrote: > This warrants a deeper description. I was thinking about trying to use petsc to solve a 1D PDE problem using the equidistribution principle for adaptive meshing. Briefly, the idea is that given a function u(x), where x is the physical coordinate, a monitor function, w[u(x)]>0, is introduced such that > > y(x) = \int_{0}^x w[u(s)] ds /\int_{0}^{xmax} w[u(s)] ds > > where C is a constant, and y is the computational coordinate in [0,1]. essentially, you want to pick the physical mesh points, x_i such that > > \int_{x_i}^{x_{i+1}} w[u(s)] ds = constant > > then the computational mesh points are uniformly spaced. Solving this can be posed as a nonlinear elliptic equation, in general dimension, though it simplifies further in 1D. In a paper by Gavish and Ditkowski, for the 1D problem, they came up with the idea that you could do the following: > > Given a starting guess on where the physical coordinates should be, let > > F_i = \int_0^{x_i} w[u(s)]ds/\int_0^{x_N} w[u(s)]ds > > Then 0= F_0 < F_1 > interpolate (F_i, x_i) onto the uniform mesh for y, and the interpolated x_i?s should approximately satisfy the equidistribution principle. This can be iterated, to improve the equidistribution. > > Doing the quadrature with distributed vectors is not too hard. The question then becomes how to do the interpolation in parallel. As I was suggesting in the earlier example, if y_i = i * h, is the uniform mesh, it may be that y_i could be on proc 0, but the values of {F_j} that this lies between are on proc 1. There is monotonicity at work here, but I?m not sure I can really say, when defining ghost points, how things will spread out. > > I think I am slowly understanding the problem. Tell me where I go wrong. > > You have a set of (x, y) points (you label them (F_i, x_i)), which you want to interpolate, and then > evaluate on a regular grid. The interpolation is not specified, so to be concrete lets say that you > choose cubic splines, which produces a tridiagonal system, so there is only communication between > neighbors. > > It seems like the right thing to do is partition the domain in parallel such that y is evenly divided. Then the > communication pattern for your actual PDE is the same as for the interpolation system, although the > interpolation can be unbalanced. I don't think this will matter compared to the PDE system. If it does, > then you can redistribute, interpolate, and distribute back. > > Does this make sense? 
> > Thanks, > > Matt > > >> On Feb 17, 2015, at 10:11 AM, Matthew Knepley > wrote: >> >> On Tue, Feb 17, 2015 at 8:15 AM, Gideon Simpson > wrote: >> I?m gathering from your suggestions that I would need, a priori, knowledge of how many ghost points I would need, is that right? >> >> We have to be more precise about a priori. You can certainly create a VecScatter on the fly every time >> if your communication pattern is changing. However, how will you know what needs to be communicated. >> >> Matt >> >> -gideon >> >>> On Feb 17, 2015, at 9:10 AM, Matthew Knepley > wrote: >>> >>> On Tue, Feb 17, 2015 at 7:46 AM, Gideon Simpson > wrote: >>> Suppose I have data in Vec x and Vec y, and I want to interpolate this onto Vec xx, storing the values in Vec yy. All vectors have the same layout. The problem is that, for example, some of the values in xx on processor 0 may need the values of x and y on processor 1, and so on. Aside from just using sequential vectors, so that everything is local, is there a reasonable way to make this computation? >>> >>> At the most basic linear algebra level, you would construct a VecScatter which mapped the pieces you need from other processes into a local vector along with the local portion, and you would use that to calculate values, which you then put back into your owned portion of a global vector. Thus local vectors have halos and global vectors do not. >>> >>> If you halo regions (values you need from other processes) have a common topology, then we have simpler >>> support that will make the VecScatter for you. For example, if your values lie on a Cartesian grid and you >>> just need neighbors within distance k, you can use a DMDA to express this and automatically make the >>> VecScatter. Likewise, if you values lie on an unstructured mesh and you need a distance k adjacency, >>> DMPlex can create the scatter for you. >>> >>> If you are creating the VecScatter yourself, it might be easier to use the new PetscSF instead since it only needs one-sided information, and performs the same job. This is what DMPlex uses to do the communication. >>> >>> Thanks, >>> >>> Matt >>> >>> -gideon >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
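A minimal sketch of the VecScatter construction described in the quoted exchange, assuming each process has already worked out the global indices of the distributed vector it needs for its local interpolation (its own entries plus a few owned by neighbors); the names GatherNeeded, needed, and Floc are illustrative only.

  #include <petscvec.h>

  /* Gather an arbitrary set of global entries of a distributed vector F
     into a sequential work vector on each process. */
  PetscErrorCode GatherNeeded(Vec F, PetscInt n, const PetscInt needed[], Vec *Floc)
  {
    IS             is;
    VecScatter     scatter;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = VecCreateSeq(PETSC_COMM_SELF,n,Floc);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_SELF,n,needed,PETSC_COPY_VALUES,&is);CHKERRQ(ierr);
    /* scatter from the global (distributed) vector into the local work vector */
    ierr = VecScatterCreate(F,is,*Floc,NULL,&scatter);CHKERRQ(ierr);
    ierr = VecScatterBegin(scatter,F,*Floc,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecScatterEnd(scatter,F,*Floc,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
    ierr = ISDestroy(&is);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

If the communication pattern changes every iteration the scatter can simply be rebuilt on the fly, as noted above; if it is reused, keep the VecScatter around instead of destroying it.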
URL: From hus003 at ucsd.edu Wed Feb 18 09:34:54 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 15:34:54 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
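One caveat when comparing this run with the MATLAB experiment: the -ksp_view output above shows the convergence test uses the PRECONDITIONED norm, so the monitored numbers are preconditioned residual norms, while MATLAB's bicgstab reports the true relative residual norm(b-A*x)/norm(b). Adding the true-residual monitor makes the two runs directly comparable, e.g. (same solver as above, only the monitoring options changed):

  -ksp_type bcgs -pc_type hypre -pc_hypre_type pilut -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_true_residual -ksp_converged_reason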
URL: From knepley at gmail.com Wed Feb 18 09:47:17 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 09:47:17 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui wrote: > With options: > > -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 > -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 > -ksp_monitor_short -ksp_converged_reason -ksp_view > Okay, it seems that the Hypre ILUT is a different algorithm than the Matlab ILUT. Thanks, Matt > Here is the full output: > > 0 KSP Residual norm 1404.62 > > 1 KSP Residual norm 88.9068 > > 2 KSP Residual norm 64.73 > > 3 KSP Residual norm 71.0224 > > 4 KSP Residual norm 69.5044 > > 5 KSP Residual norm 455.458 > > 6 KSP Residual norm 174.876 > > 7 KSP Residual norm 183.031 > > 8 KSP Residual norm 650.675 > > 9 KSP Residual norm 79.2441 > > 10 KSP Residual norm 84.1985 > > Linear solve did not converge due to DIVERGED_ITS iterations 10 > > KSP Object: 1 MPI processes > > type: bcgs > > maximum iterations=10, initial guess is zero > > tolerances: relative=1e-10, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: hypre > > HYPRE Pilut preconditioning > > HYPRE Pilut: maximum number of iterations 1000 > > HYPRE Pilut: drop tolerance 0.001 > > HYPRE Pilut: default factor row size > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=62500, cols=62500 > > total: nonzeros=473355, allocated nonzeros=7.8125e+06 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > Time cost: 0.756198, 0.662984, 0.105672 > > > > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com] > *Sent:* Wednesday, February 18, 2015 3:30 AM > *To:* Sun, Hui > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: > >> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >> depending on the number of cores. >> >> I try to solve this problem by pc_type ilu and ksp_type bcgs, it does >> not converge. The options I specify are: >> >> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >> >> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >> -ksp_converged_reason >> > > 1) Run with -ksp_view, so we can see exactly what was used > > 2) ILUT is unfortunately not a well-defined algorithm, and I believe the > parallel version makes different decisions > than the serial version. > > Thanks, > > Matt > > >> The first a few lines of the output are: >> >> 0 KSP Residual norm 1404.62 >> >> 1 KSP Residual norm 88.9068 >> >> 2 KSP Residual norm 64.73 >> >> 3 KSP Residual norm 71.0224 >> >> 4 KSP Residual norm 69.5044 >> >> 5 KSP Residual norm 455.458 >> >> 6 KSP Residual norm 174.876 >> >> 7 KSP Residual norm 183.031 >> >> 8 KSP Residual norm 650.675 >> >> 9 KSP Residual norm 79.2441 >> >> 10 KSP Residual norm 84.1985 >> >> >> This clearly indicates non-convergence. 
However, I output the sparse >> matrix A and vector b to MATLAB, and run the following command: >> >> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >> >> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >> >> >> And it converges in MATLAB, with flag fl1=0, relative residue >> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >> what's wrong. >> >> >> Best, >> >> Hui >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hong at aspiritech.org Wed Feb 18 09:49:52 2015 From: hong at aspiritech.org (hong at aspiritech.org) Date: Wed, 18 Feb 2015 09:49:52 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui wrote: > With options: > > -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 > -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 > -ksp_monitor_short -ksp_converged_reason -ksp_view > > Here is the full output: > > 0 KSP Residual norm 1404.62 > > 1 KSP Residual norm 88.9068 > > 2 KSP Residual norm 64.73 > > 3 KSP Residual norm 71.0224 > > 4 KSP Residual norm 69.5044 > > 5 KSP Residual norm 455.458 > > 6 KSP Residual norm 174.876 > > 7 KSP Residual norm 183.031 > > 8 KSP Residual norm 650.675 > > 9 KSP Residual norm 79.2441 > > 10 KSP Residual norm 84.1985 > > Linear solve did not converge due to DIVERGED_ITS iterations 10 > > KSP Object: 1 MPI processes > > type: bcgs > > maximum iterations=10, initial guess is zero > > tolerances: relative=1e-10, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: hypre > > HYPRE Pilut preconditioning > > HYPRE Pilut: maximum number of iterations 1000 > > HYPRE Pilut: drop tolerance 0.001 > > HYPRE Pilut: default factor row size > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=62500, cols=62500 > > total: nonzeros=473355, allocated nonzeros=7.8125e+06 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > Time cost: 0.756198, 0.662984, 0.105672 > > > > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com] > *Sent:* Wednesday, February 18, 2015 3:30 AM > *To:* Sun, Hui > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: > >> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >> depending on the number of cores. >> >> I try to solve this problem by pc_type ilu and ksp_type bcgs, it does >> not converge. 
The options I specify are: >> >> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >> >> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >> -ksp_converged_reason >> > > 1) Run with -ksp_view, so we can see exactly what was used > > 2) ILUT is unfortunately not a well-defined algorithm, and I believe the > parallel version makes different decisions > than the serial version. > > Thanks, > > Matt > > >> The first a few lines of the output are: >> >> 0 KSP Residual norm 1404.62 >> >> 1 KSP Residual norm 88.9068 >> >> 2 KSP Residual norm 64.73 >> >> 3 KSP Residual norm 71.0224 >> >> 4 KSP Residual norm 69.5044 >> >> 5 KSP Residual norm 455.458 >> >> 6 KSP Residual norm 174.876 >> >> 7 KSP Residual norm 183.031 >> >> 8 KSP Residual norm 650.675 >> >> 9 KSP Residual norm 79.2441 >> >> 10 KSP Residual norm 84.1985 >> >> >> This clearly indicates non-convergence. However, I output the sparse >> matrix A and vector b to MATLAB, and run the following command: >> >> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >> >> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >> >> >> And it converges in MATLAB, with flag fl1=0, relative residue >> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >> what's wrong. >> >> >> Best, >> >> Hui >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Wed Feb 18 10:01:28 2015 From: jychang48 at gmail.com (Justin Chang) Date: Wed, 18 Feb 2015 10:01:28 -0600 Subject: [petsc-users] PAPI availability In-Reply-To: References: Message-ID: Matt, thank you for the response. For flop monitoring, are they hardware counts or manually counted? That is, would the flops documented by PETSc be affected by overhead factors like memory bandwidth, latency, etc, thus potentially giving you "poor efficiency" with respect to the theoretical performance? Also, are these flops the flops noted in -log_summary? Thanks, Justin On Wed, Feb 18, 2015 at 5:42 AM, Matthew Knepley wrote: > On Tue, Feb 17, 2015 at 8:12 PM, Justin Chang wrote: > >> Hi all, >> >> I want to document some profiling metrics like cache hits/misses using >> PAPI. Does PETSc automatically come with PAPI, or is it something I have to >> declare when configuring PETSc? >> > > You can turn on PAPI by configuring using --with-papi, just like other > packages (might need the --with-papi-dir too). Right now, we only monitor > flops. > Feel free to add other events. > > Thanks, > > Matt > > >> Thanks, >> Justin >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 18 10:02:47 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 16:02:47 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> Yes I've tried other solvers, gmres/ilu does not work, neither does bcgs/ilu. 
Here are the options: -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 -pc_factor_reuse_ordering -ksp_ty\ pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view Here is the output: 0 KSP Residual norm 211292 1 KSP Residual norm 13990.2 2 KSP Residual norm 9870.08 3 KSP Residual norm 9173.9 4 KSP Residual norm 9121.94 5 KSP Residual norm 7386.1 6 KSP Residual norm 6222.55 7 KSP Residual norm 7192.94 8 KSP Residual norm 33964 9 KSP Residual norm 33960.4 10 KSP Residual norm 1068.54 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu ILU: out-of-place factorization ILU: Reusing reordering from past factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 package used to perform factorization: petsc total: nonzeros=473355, allocated nonzeros=473355 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.307149, 0.268402, 0.0990018 ________________________________ From: hong at aspiritech.org [hong at aspiritech.org] Sent: Wednesday, February 18, 2015 7:49 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. 
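For reference, the suggestions quoted above in concrete option form (all standard PETSc options; the tolerance is one already used in this thread). The direct-solve run in particular is a quick way to check whether the trouble is the linear system itself or only the incomplete factorization:

  -ksp_type gmres -pc_type ilu -ksp_rtol 1e-10 -ksp_converged_reason
  -ksp_type bcgs -pc_type ilu -ksp_rtol 1e-10 -ksp_converged_reason
  -ksp_type preonly -pc_type lu -ksp_converged_reason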
Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui > wrote: With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 10:08:24 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 10:08:24 -0600 Subject: [petsc-users] PAPI availability In-Reply-To: References: Message-ID: On Wed, Feb 18, 2015 at 10:01 AM, Justin Chang wrote: > Matt, thank you for the response. 
> > For flop monitoring, are they hardware counts or manually counted? That > is, would the flops documented by PETSc be affected by overhead factors > like memory bandwidth, latency, etc, thus potentially giving you "poor > efficiency" with respect to the theoretical performance? > Its always best to look at the code. Here is the event creation https://bitbucket.org/petsc/petsc/src/17ccb9bd50733b8f00ffcc6fc4ee6e9be3a3168d/src/sys/logging/plog.c?at=master#cl-211 This measures 'flops', not 'flops/s'. > Also, are these flops the flops noted in -log_summary? > Here is the log_summary code: https://bitbucket.org/petsc/petsc/src/17ccb9bd50733b8f00ffcc6fc4ee6e9be3a3168d/src/sys/logging/utils/eventlog.c?at=master#cl-661 so we are using the PAPI count instead of our manual count in log_summary. I am not sure what you mean about overheads. Memory bandwidth, cache usage, machine latency, etc. are not overheads, they are how the machine functions. They are reasons you might not obtain the floating point peak. Thanks, Matt > Thanks, > Justin > > On Wed, Feb 18, 2015 at 5:42 AM, Matthew Knepley > wrote: > >> On Tue, Feb 17, 2015 at 8:12 PM, Justin Chang >> wrote: >> >>> Hi all, >>> >>> I want to document some profiling metrics like cache hits/misses using >>> PAPI. Does PETSc automatically come with PAPI, or is it something I have to >>> declare when configuring PETSc? >>> >> >> You can turn on PAPI by configuring using --with-papi, just like other >> packages (might need the --with-papi-dir too). Right now, we only monitor >> flops. >> Feel free to add other events. >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Justin >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 10:09:40 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 10:09:40 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui wrote: > Yes I've tried other solvers, gmres/ilu does not work, neither does > bcgs/ilu. Here are the options: > > -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 > -pc_factor_reuse_ordering -ksp_ty\ > > pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view > Note here that ILU(0) is an unreliable and generally crappy preconditioner. Have you looked in the literature for the kinds of preconditioners that are effective for your problem? 
Thanks, Matt > Here is the output: > > 0 KSP Residual norm 211292 > > 1 KSP Residual norm 13990.2 > > 2 KSP Residual norm 9870.08 > > 3 KSP Residual norm 9173.9 > > 4 KSP Residual norm 9121.94 > > 5 KSP Residual norm 7386.1 > > 6 KSP Residual norm 6222.55 > > 7 KSP Residual norm 7192.94 > > 8 KSP Residual norm 33964 > > 9 KSP Residual norm 33960.4 > > 10 KSP Residual norm 1068.54 > > KSP Object: 1 MPI processes > > type: bcgs > > maximum iterations=10, initial guess is zero > > tolerances: relative=1e-06, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: ilu > > ILU: out-of-place factorization > > ILU: Reusing reordering from past factorization > > 0 levels of fill > > tolerance for zero pivot 2.22045e-14 > > using diagonal shift on blocks to prevent zero pivot [INBLOCKS] > > matrix ordering: natural > > factor fill ratio given 1, needed 1 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=62500, cols=62500 > > package used to perform factorization: petsc > > total: nonzeros=473355, allocated nonzeros=473355 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=62500, cols=62500 > > total: nonzeros=473355, allocated nonzeros=7.8125e+06 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > Time cost: 0.307149, 0.268402, 0.0990018 > > > > > ------------------------------ > *From:* hong at aspiritech.org [hong at aspiritech.org] > *Sent:* Wednesday, February 18, 2015 7:49 AM > *To:* Sun, Hui > *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu > etc. > The matrix is small. If it is ill-conditioned, then pc_type lu would work > the best. 
> > Hong > > On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui wrote: > >> With options: >> >> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 >> -ksp_monitor_short -ksp_converged_reason -ksp_view >> >> Here is the full output: >> >> 0 KSP Residual norm 1404.62 >> >> 1 KSP Residual norm 88.9068 >> >> 2 KSP Residual norm 64.73 >> >> 3 KSP Residual norm 71.0224 >> >> 4 KSP Residual norm 69.5044 >> >> 5 KSP Residual norm 455.458 >> >> 6 KSP Residual norm 174.876 >> >> 7 KSP Residual norm 183.031 >> >> 8 KSP Residual norm 650.675 >> >> 9 KSP Residual norm 79.2441 >> >> 10 KSP Residual norm 84.1985 >> >> Linear solve did not converge due to DIVERGED_ITS iterations 10 >> >> KSP Object: 1 MPI processes >> >> type: bcgs >> >> maximum iterations=10, initial guess is zero >> >> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >> >> left preconditioning >> >> using PRECONDITIONED norm type for convergence test >> >> PC Object: 1 MPI processes >> >> type: hypre >> >> HYPRE Pilut preconditioning >> >> HYPRE Pilut: maximum number of iterations 1000 >> >> HYPRE Pilut: drop tolerance 0.001 >> >> HYPRE Pilut: default factor row size >> >> linear system matrix = precond matrix: >> >> Mat Object: 1 MPI processes >> >> type: seqaij >> >> rows=62500, cols=62500 >> >> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >> >> total number of mallocs used during MatSetValues calls =0 >> >> not using I-node routines >> >> Time cost: 0.756198, 0.662984, 0.105672 >> >> >> >> >> ------------------------------ >> *From:* Matthew Knepley [knepley at gmail.com] >> *Sent:* Wednesday, February 18, 2015 3:30 AM >> *To:* Sun, Hui >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: >> >>> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >>> depending on the number of cores. >>> >>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it does >>> not converge. The options I specify are: >>> >>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>> >>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>> -ksp_converged_reason >>> >> >> 1) Run with -ksp_view, so we can see exactly what was used >> >> 2) ILUT is unfortunately not a well-defined algorithm, and I believe >> the parallel version makes different decisions >> than the serial version. >> >> Thanks, >> >> Matt >> >> >>> The first a few lines of the output are: >>> >>> 0 KSP Residual norm 1404.62 >>> >>> 1 KSP Residual norm 88.9068 >>> >>> 2 KSP Residual norm 64.73 >>> >>> 3 KSP Residual norm 71.0224 >>> >>> 4 KSP Residual norm 69.5044 >>> >>> 5 KSP Residual norm 455.458 >>> >>> 6 KSP Residual norm 174.876 >>> >>> 7 KSP Residual norm 183.031 >>> >>> 8 KSP Residual norm 650.675 >>> >>> 9 KSP Residual norm 79.2441 >>> >>> 10 KSP Residual norm 84.1985 >>> >>> >>> This clearly indicates non-convergence. However, I output the sparse >>> matrix A and vector b to MATLAB, and run the following command: >>> >>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>> >>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>> >>> >>> And it converges in MATLAB, with flag fl1=0, relative residue >>> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >>> what's wrong. 
>>> >>> >>> Best, >>> >>> Hui >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 18 10:31:03 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 16:31:03 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> So far I just try around, I haven't looked into literature yet. However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that some parameter or options need to be tuned in using PETSc's ilu or hypre's ilu? Besides, is there a way to view how good the performance of the pc is and output the matrices L and U, so that I can do some test in MATLAB? Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:09 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui > wrote: Yes I've tried other solvers, gmres/ilu does not work, neither does bcgs/ilu. Here are the options: -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 -pc_factor_reuse_ordering -ksp_ty\ pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view Note here that ILU(0) is an unreliable and generally crappy preconditioner. Have you looked in the literature for the kinds of preconditioners that are effective for your problem? 
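(As an illustration of the alternatives being suggested here: the fill level of the PETSc ILU can be raised, or a full factorization substituted, straight from the command line. The option strings below are only a sketch, ./myapp stands in for the actual executable, and higher fill levels cost correspondingly more memory.)

  ./myapp -ksp_type bcgs -pc_type ilu -pc_factor_levels 2 -pc_factor_nonzeros_along_diagonal -ksp_monitor_true_residual
  ./myapp -ksp_type bcgs -pc_type lu
  mpiexec -n 4 ./myapp -ksp_type bcgs -pc_type asm -sub_pc_type ilu -sub_pc_factor_levels 2

Adding -ksp_monitor_true_residual to any of these runs is worthwhile, since with left preconditioning the numbers printed by -ksp_monitor_short are preconditioned residuals and can be misleading when the preconditioner is poor.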
Thanks, Matt Here is the output: 0 KSP Residual norm 211292 1 KSP Residual norm 13990.2 2 KSP Residual norm 9870.08 3 KSP Residual norm 9173.9 4 KSP Residual norm 9121.94 5 KSP Residual norm 7386.1 6 KSP Residual norm 6222.55 7 KSP Residual norm 7192.94 8 KSP Residual norm 33964 9 KSP Residual norm 33960.4 10 KSP Residual norm 1068.54 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu ILU: out-of-place factorization ILU: Reusing reordering from past factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 package used to perform factorization: petsc total: nonzeros=473355, allocated nonzeros=473355 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.307149, 0.268402, 0.0990018 ________________________________ From: hong at aspiritech.org [hong at aspiritech.org] Sent: Wednesday, February 18, 2015 7:49 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui > wrote: With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. 
I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 10:33:44 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 10:33:44 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui wrote: > So far I just try around, I haven't looked into literature yet. > > However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that > some parameter or options need to be tuned in using PETSc's ilu or hypre's > ilu? Besides, is there a way to view how good the performance of the pc is > and output the matrices L and U, so that I can do some test in MATLAB? > 1) Its not clear exactly what Matlab is doing 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) 3) I don't know what Hypre's ILU can do I would really discourage from using ILU. I cannot imagine it is faster than sparse direct factorization for your system, such as from SuperLU or MUMPS. Thanks, Matt > Hui > > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com] > *Sent:* Wednesday, February 18, 2015 8:09 AM > *To:* Sun, Hui > *Cc:* hong at aspiritech.org; petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui wrote: > >> Yes I've tried other solvers, gmres/ilu does not work, neither does >> bcgs/ilu. 
Here are the options: >> >> -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 >> -pc_factor_reuse_ordering -ksp_ty\ >> >> pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view >> > > Note here that ILU(0) is an unreliable and generally crappy > preconditioner. Have you looked in the > literature for the kinds of preconditioners that are effective for your > problem? > > Thanks, > > Matt > > >> Here is the output: >> >> 0 KSP Residual norm 211292 >> >> 1 KSP Residual norm 13990.2 >> >> 2 KSP Residual norm 9870.08 >> >> 3 KSP Residual norm 9173.9 >> >> 4 KSP Residual norm 9121.94 >> >> 5 KSP Residual norm 7386.1 >> >> 6 KSP Residual norm 6222.55 >> >> 7 KSP Residual norm 7192.94 >> >> 8 KSP Residual norm 33964 >> >> 9 KSP Residual norm 33960.4 >> >> 10 KSP Residual norm 1068.54 >> >> KSP Object: 1 MPI processes >> >> type: bcgs >> >> maximum iterations=10, initial guess is zero >> >> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >> >> left preconditioning >> >> using PRECONDITIONED norm type for convergence test >> >> PC Object: 1 MPI processes >> >> type: ilu >> >> ILU: out-of-place factorization >> >> ILU: Reusing reordering from past factorization >> >> 0 levels of fill >> >> tolerance for zero pivot 2.22045e-14 >> >> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >> >> matrix ordering: natural >> >> factor fill ratio given 1, needed 1 >> >> Factored matrix follows: >> >> Mat Object: 1 MPI processes >> >> type: seqaij >> >> rows=62500, cols=62500 >> >> package used to perform factorization: petsc >> >> total: nonzeros=473355, allocated nonzeros=473355 >> >> total number of mallocs used during MatSetValues calls =0 >> >> not using I-node routines >> >> linear system matrix = precond matrix: >> >> Mat Object: 1 MPI processes >> >> type: seqaij >> >> rows=62500, cols=62500 >> >> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >> >> total number of mallocs used during MatSetValues calls =0 >> >> not using I-node routines >> >> Time cost: 0.307149, 0.268402, 0.0990018 >> >> >> >> >> ------------------------------ >> *From:* hong at aspiritech.org [hong at aspiritech.org] >> *Sent:* Wednesday, February 18, 2015 7:49 AM >> *To:* Sun, Hui >> *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu >> etc. >> The matrix is small. If it is ill-conditioned, then pc_type lu would work >> the best. 
>> >> Hong >> >> On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui wrote: >> >>> With options: >>> >>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 >>> -ksp_monitor_short -ksp_converged_reason -ksp_view >>> >>> Here is the full output: >>> >>> 0 KSP Residual norm 1404.62 >>> >>> 1 KSP Residual norm 88.9068 >>> >>> 2 KSP Residual norm 64.73 >>> >>> 3 KSP Residual norm 71.0224 >>> >>> 4 KSP Residual norm 69.5044 >>> >>> 5 KSP Residual norm 455.458 >>> >>> 6 KSP Residual norm 174.876 >>> >>> 7 KSP Residual norm 183.031 >>> >>> 8 KSP Residual norm 650.675 >>> >>> 9 KSP Residual norm 79.2441 >>> >>> 10 KSP Residual norm 84.1985 >>> >>> Linear solve did not converge due to DIVERGED_ITS iterations 10 >>> >>> KSP Object: 1 MPI processes >>> >>> type: bcgs >>> >>> maximum iterations=10, initial guess is zero >>> >>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>> >>> left preconditioning >>> >>> using PRECONDITIONED norm type for convergence test >>> >>> PC Object: 1 MPI processes >>> >>> type: hypre >>> >>> HYPRE Pilut preconditioning >>> >>> HYPRE Pilut: maximum number of iterations 1000 >>> >>> HYPRE Pilut: drop tolerance 0.001 >>> >>> HYPRE Pilut: default factor row size >>> >>> linear system matrix = precond matrix: >>> >>> Mat Object: 1 MPI processes >>> >>> type: seqaij >>> >>> rows=62500, cols=62500 >>> >>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>> >>> total number of mallocs used during MatSetValues calls =0 >>> >>> not using I-node routines >>> >>> Time cost: 0.756198, 0.662984, 0.105672 >>> >>> >>> >>> >>> ------------------------------ >>> *From:* Matthew Knepley [knepley at gmail.com] >>> *Sent:* Wednesday, February 18, 2015 3:30 AM >>> *To:* Sun, Hui >>> *Cc:* petsc-users at mcs.anl.gov >>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>> >>> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: >>> >>>> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >>>> depending on the number of cores. >>>> >>>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it does >>>> not converge. The options I specify are: >>>> >>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>>> >>>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>>> -ksp_converged_reason >>>> >>> >>> 1) Run with -ksp_view, so we can see exactly what was used >>> >>> 2) ILUT is unfortunately not a well-defined algorithm, and I believe >>> the parallel version makes different decisions >>> than the serial version. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> The first a few lines of the output are: >>>> >>>> 0 KSP Residual norm 1404.62 >>>> >>>> 1 KSP Residual norm 88.9068 >>>> >>>> 2 KSP Residual norm 64.73 >>>> >>>> 3 KSP Residual norm 71.0224 >>>> >>>> 4 KSP Residual norm 69.5044 >>>> >>>> 5 KSP Residual norm 455.458 >>>> >>>> 6 KSP Residual norm 174.876 >>>> >>>> 7 KSP Residual norm 183.031 >>>> >>>> 8 KSP Residual norm 650.675 >>>> >>>> 9 KSP Residual norm 79.2441 >>>> >>>> 10 KSP Residual norm 84.1985 >>>> >>>> >>>> This clearly indicates non-convergence. However, I output the sparse >>>> matrix A and vector b to MATLAB, and run the following command: >>>> >>>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>>> >>>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>>> >>>> >>>> And it converges in MATLAB, with flag fl1=0, relative residue >>>> rr1=8.2725e-11, and iteration it1=89.5. 
I'm wondering how can I figure out >>>> what's wrong. >>>> >>>> >>>> Best, >>>> >>>> Hui >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 18 10:47:53 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 16:47:53 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> The matrix is from a 3D fluid problem, with complicated irregular boundary conditions. I've tried using direct solvers such as UMFPACK, SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my linear system; UMFPACK solves the system but would run into memory issue even with small size matrices and it cannot parallelize; MUMPS does solve the system but it also fails when the size is big and it takes much time. That's why I'm seeking an iterative method. I guess the direct method is faster than an iterative method for a small A, but that may not be true for bigger A. ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:33 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui > wrote: So far I just try around, I haven't looked into literature yet. However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that some parameter or options need to be tuned in using PETSc's ilu or hypre's ilu? Besides, is there a way to view how good the performance of the pc is and output the matrices L and U, so that I can do some test in MATLAB? 1) Its not clear exactly what Matlab is doing 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) 3) I don't know what Hypre's ILU can do I would really discourage from using ILU. I cannot imagine it is faster than sparse direct factorization for your system, such as from SuperLU or MUMPS. Thanks, Matt Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:09 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui > wrote: Yes I've tried other solvers, gmres/ilu does not work, neither does bcgs/ilu. Here are the options: -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 -pc_factor_reuse_ordering -ksp_ty\ pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view Note here that ILU(0) is an unreliable and generally crappy preconditioner. 
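(For reference, the direct solvers mentioned in this message are normally selected through the LU preconditioner; the lines below are a sketch with ./myapp as a placeholder, and they assume PETSc was configured with the corresponding packages.)

  mpiexec -n 4 ./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps
  mpiexec -n 4 ./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist
  ./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package umfpack

If the MUMPS failures on larger problems are memory related, raising its working-space estimate with -mat_mumps_icntl_14 <percent> is one thing to try, though that is only a guess at the cause reported here.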
Have you looked in the literature for the kinds of preconditioners that are effective for your problem? Thanks, Matt Here is the output: 0 KSP Residual norm 211292 1 KSP Residual norm 13990.2 2 KSP Residual norm 9870.08 3 KSP Residual norm 9173.9 4 KSP Residual norm 9121.94 5 KSP Residual norm 7386.1 6 KSP Residual norm 6222.55 7 KSP Residual norm 7192.94 8 KSP Residual norm 33964 9 KSP Residual norm 33960.4 10 KSP Residual norm 1068.54 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu ILU: out-of-place factorization ILU: Reusing reordering from past factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 package used to perform factorization: petsc total: nonzeros=473355, allocated nonzeros=473355 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.307149, 0.268402, 0.0990018 ________________________________ From: hong at aspiritech.org [hong at aspiritech.org] Sent: Wednesday, February 18, 2015 7:49 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. 
Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui > wrote: With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 10:54:54 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 10:54:54 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 10:47 AM, Sun, Hui wrote: > The matrix is from a 3D fluid problem, with complicated irregular > boundary conditions. I've tried using direct solvers such as UMFPACK, > SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my > linear system; UMFPACK solves the system but would run into memory issue > even with small size matrices and it cannot parallelize; MUMPS does solve > the system but it also fails when the size is big and it takes much time. > That's why I'm seeking an iterative method. > > I guess the direct method is faster than an iterative method for a small > A, but that may not be true for bigger A. > If this is a Stokes flow, you should use PCFIELDSPLIT and multigrid. If it is advection dominated, I know of nothing better than sparse direct or perhaps Block-Jacobi with sparse direct blocks. Since MUMPS solved your system, I would consider using BJacobi/ASM and MUMPS or UMFPACK as the block solver. Thanks, Matt > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com] > *Sent:* Wednesday, February 18, 2015 8:33 AM > *To:* Sun, Hui > *Cc:* hong at aspiritech.org; petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui wrote: > >> So far I just try around, I haven't looked into literature yet. >> >> However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that >> some parameter or options need to be tuned in using PETSc's ilu or hypre's >> ilu? Besides, is there a way to view how good the performance of the pc is >> and output the matrices L and U, so that I can do some test in MATLAB? >> > > 1) Its not clear exactly what Matlab is doing > > 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) > > 3) I don't know what Hypre's ILU can do > > I would really discourage from using ILU. I cannot imagine it is faster > than sparse direct factorization > for your system, such as from SuperLU or MUMPS. > > Thanks, > > Matt > > >> Hui >> >> >> ------------------------------ >> *From:* Matthew Knepley [knepley at gmail.com] >> *Sent:* Wednesday, February 18, 2015 8:09 AM >> *To:* Sun, Hui >> *Cc:* hong at aspiritech.org; petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui wrote: >> >>> Yes I've tried other solvers, gmres/ilu does not work, neither does >>> bcgs/ilu. Here are the options: >>> >>> -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 >>> -pc_factor_reuse_ordering -ksp_ty\ >>> >>> pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view >>> >> >> Note here that ILU(0) is an unreliable and generally crappy >> preconditioner. 
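(Concrete sketches of the two suggestions above, with ./myapp as a placeholder executable and assuming the named packages were built in. Which one is appropriate depends on whether the system really is a saddle point and on how the fields are numbered, so these are starting points rather than recipes.)

  Block-Jacobi or ASM with a direct subdomain solve:
  mpiexec -n 4 ./myapp -ksp_type bcgs -pc_type bjacobi -sub_pc_type lu -sub_pc_factor_mat_solver_package umfpack
  mpiexec -n 4 ./myapp -ksp_type bcgs -pc_type asm -sub_pc_type lu -sub_pc_factor_mat_solver_package mumps

  Fieldsplit for a Stokes-like saddle point, with algebraic multigrid on the velocity block:
  mpiexec -n 4 ./myapp -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur \
      -pc_fieldsplit_detect_saddle_point -fieldsplit_0_pc_type gamg -fieldsplit_1_pc_type jacobi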
Have you looked in the >> literature for the kinds of preconditioners that are effective for your >> problem? >> >> Thanks, >> >> Matt >> >> >>> Here is the output: >>> >>> 0 KSP Residual norm 211292 >>> >>> 1 KSP Residual norm 13990.2 >>> >>> 2 KSP Residual norm 9870.08 >>> >>> 3 KSP Residual norm 9173.9 >>> >>> 4 KSP Residual norm 9121.94 >>> >>> 5 KSP Residual norm 7386.1 >>> >>> 6 KSP Residual norm 6222.55 >>> >>> 7 KSP Residual norm 7192.94 >>> >>> 8 KSP Residual norm 33964 >>> >>> 9 KSP Residual norm 33960.4 >>> >>> 10 KSP Residual norm 1068.54 >>> >>> KSP Object: 1 MPI processes >>> >>> type: bcgs >>> >>> maximum iterations=10, initial guess is zero >>> >>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>> >>> left preconditioning >>> >>> using PRECONDITIONED norm type for convergence test >>> >>> PC Object: 1 MPI processes >>> >>> type: ilu >>> >>> ILU: out-of-place factorization >>> >>> ILU: Reusing reordering from past factorization >>> >>> 0 levels of fill >>> >>> tolerance for zero pivot 2.22045e-14 >>> >>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >>> >>> matrix ordering: natural >>> >>> factor fill ratio given 1, needed 1 >>> >>> Factored matrix follows: >>> >>> Mat Object: 1 MPI processes >>> >>> type: seqaij >>> >>> rows=62500, cols=62500 >>> >>> package used to perform factorization: petsc >>> >>> total: nonzeros=473355, allocated nonzeros=473355 >>> >>> total number of mallocs used during MatSetValues calls =0 >>> >>> not using I-node routines >>> >>> linear system matrix = precond matrix: >>> >>> Mat Object: 1 MPI processes >>> >>> type: seqaij >>> >>> rows=62500, cols=62500 >>> >>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>> >>> total number of mallocs used during MatSetValues calls =0 >>> >>> not using I-node routines >>> >>> Time cost: 0.307149, 0.268402, 0.0990018 >>> >>> >>> >>> >>> ------------------------------ >>> *From:* hong at aspiritech.org [hong at aspiritech.org] >>> *Sent:* Wednesday, February 18, 2015 7:49 AM >>> *To:* Sun, Hui >>> *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov >>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>> >>> Have you tried other solvers, e.g., PETSc default gmres/ilu, >>> bcgs/ilu etc. >>> The matrix is small. If it is ill-conditioned, then pc_type lu would >>> work the best. 
>>> >>> Hong >>> >>> On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui wrote: >>> >>>> With options: >>>> >>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 >>>> -ksp_monitor_short -ksp_converged_reason -ksp_view >>>> >>>> Here is the full output: >>>> >>>> 0 KSP Residual norm 1404.62 >>>> >>>> 1 KSP Residual norm 88.9068 >>>> >>>> 2 KSP Residual norm 64.73 >>>> >>>> 3 KSP Residual norm 71.0224 >>>> >>>> 4 KSP Residual norm 69.5044 >>>> >>>> 5 KSP Residual norm 455.458 >>>> >>>> 6 KSP Residual norm 174.876 >>>> >>>> 7 KSP Residual norm 183.031 >>>> >>>> 8 KSP Residual norm 650.675 >>>> >>>> 9 KSP Residual norm 79.2441 >>>> >>>> 10 KSP Residual norm 84.1985 >>>> >>>> Linear solve did not converge due to DIVERGED_ITS iterations 10 >>>> >>>> KSP Object: 1 MPI processes >>>> >>>> type: bcgs >>>> >>>> maximum iterations=10, initial guess is zero >>>> >>>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>>> >>>> left preconditioning >>>> >>>> using PRECONDITIONED norm type for convergence test >>>> >>>> PC Object: 1 MPI processes >>>> >>>> type: hypre >>>> >>>> HYPRE Pilut preconditioning >>>> >>>> HYPRE Pilut: maximum number of iterations 1000 >>>> >>>> HYPRE Pilut: drop tolerance 0.001 >>>> >>>> HYPRE Pilut: default factor row size >>>> >>>> linear system matrix = precond matrix: >>>> >>>> Mat Object: 1 MPI processes >>>> >>>> type: seqaij >>>> >>>> rows=62500, cols=62500 >>>> >>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>> >>>> total number of mallocs used during MatSetValues calls =0 >>>> >>>> not using I-node routines >>>> >>>> Time cost: 0.756198, 0.662984, 0.105672 >>>> >>>> >>>> >>>> >>>> ------------------------------ >>>> *From:* Matthew Knepley [knepley at gmail.com] >>>> *Sent:* Wednesday, February 18, 2015 3:30 AM >>>> *To:* Sun, Hui >>>> *Cc:* petsc-users at mcs.anl.gov >>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>> >>>> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: >>>> >>>>> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >>>>> depending on the number of cores. >>>>> >>>>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it >>>>> does not converge. The options I specify are: >>>>> >>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>>>> >>>>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>>>> -ksp_converged_reason >>>>> >>>> >>>> 1) Run with -ksp_view, so we can see exactly what was used >>>> >>>> 2) ILUT is unfortunately not a well-defined algorithm, and I believe >>>> the parallel version makes different decisions >>>> than the serial version. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> The first a few lines of the output are: >>>>> >>>>> 0 KSP Residual norm 1404.62 >>>>> >>>>> 1 KSP Residual norm 88.9068 >>>>> >>>>> 2 KSP Residual norm 64.73 >>>>> >>>>> 3 KSP Residual norm 71.0224 >>>>> >>>>> 4 KSP Residual norm 69.5044 >>>>> >>>>> 5 KSP Residual norm 455.458 >>>>> >>>>> 6 KSP Residual norm 174.876 >>>>> >>>>> 7 KSP Residual norm 183.031 >>>>> >>>>> 8 KSP Residual norm 650.675 >>>>> >>>>> 9 KSP Residual norm 79.2441 >>>>> >>>>> 10 KSP Residual norm 84.1985 >>>>> >>>>> >>>>> This clearly indicates non-convergence. 
However, I output the sparse >>>>> matrix A and vector b to MATLAB, and run the following command: >>>>> >>>>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>>>> >>>>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>>>> >>>>> >>>>> And it converges in MATLAB, with flag fl1=0, relative residue >>>>> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >>>>> what's wrong. >>>>> >>>>> >>>>> Best, >>>>> >>>>> Hui >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 18 10:55:57 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 16:55:57 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU>, , <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E89B9@XMAIL-MBX-BH1.AD.UCSD.EDU> If I use PETSc's ilu(0), how can I output the matrix L and U? I mean, I can setup pc by PCSetType(pc,PCLU); And I can output a matrix by MatView. But how do I get the L and the U from the pc, so that I can output them? Best, Hui ________________________________ From: Sun, Hui Sent: Wednesday, February 18, 2015 8:47 AM To: Matthew Knepley Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: RE: [petsc-users] Question concerning ilu and bcgs The matrix is from a 3D fluid problem, with complicated irregular boundary conditions. I've tried using direct solvers such as UMFPACK, SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my linear system; UMFPACK solves the system but would run into memory issue even with small size matrices and it cannot parallelize; MUMPS does solve the system but it also fails when the size is big and it takes much time. That's why I'm seeking an iterative method. I guess the direct method is faster than an iterative method for a small A, but that may not be true for bigger A. ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:33 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui > wrote: So far I just try around, I haven't looked into literature yet. However, both MATLAB's ilu+gmres and ilu+bcgs work. 
Is it possible that some parameter or options need to be tuned in using PETSc's ilu or hypre's ilu? Besides, is there a way to view how good the performance of the pc is and output the matrices L and U, so that I can do some test in MATLAB? 1) Its not clear exactly what Matlab is doing 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) 3) I don't know what Hypre's ILU can do I would really discourage from using ILU. I cannot imagine it is faster than sparse direct factorization for your system, such as from SuperLU or MUMPS. Thanks, Matt Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:09 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui > wrote: Yes I've tried other solvers, gmres/ilu does not work, neither does bcgs/ilu. Here are the options: -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 -pc_factor_reuse_ordering -ksp_ty\ pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view Note here that ILU(0) is an unreliable and generally crappy preconditioner. Have you looked in the literature for the kinds of preconditioners that are effective for your problem? Thanks, Matt Here is the output: 0 KSP Residual norm 211292 1 KSP Residual norm 13990.2 2 KSP Residual norm 9870.08 3 KSP Residual norm 9173.9 4 KSP Residual norm 9121.94 5 KSP Residual norm 7386.1 6 KSP Residual norm 6222.55 7 KSP Residual norm 7192.94 8 KSP Residual norm 33964 9 KSP Residual norm 33960.4 10 KSP Residual norm 1068.54 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu ILU: out-of-place factorization ILU: Reusing reordering from past factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 package used to perform factorization: petsc total: nonzeros=473355, allocated nonzeros=473355 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.307149, 0.268402, 0.0990018 ________________________________ From: hong at aspiritech.org [hong at aspiritech.org] Sent: Wednesday, February 18, 2015 7:49 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. 
Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui > wrote: With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 10:57:27 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 10:57:27 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E89B9@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89B9@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 10:55 AM, Sun, Hui wrote: > If I use PETSc's ilu(0), how can I output the matrix L and U? > > I mean, I can setup pc by PCSetType > > (pc,PCLU > > ); And I can output a matrix by MatView. But how do I get the L and the U > from the pc, so that I can output them? > We do not have output routines for the factors. Thanks, Matt > Best, > Hui > > > ------------------------------ > *From:* Sun, Hui > *Sent:* Wednesday, February 18, 2015 8:47 AM > *To:* Matthew Knepley > *Cc:* hong at aspiritech.org; petsc-users at mcs.anl.gov > *Subject:* RE: [petsc-users] Question concerning ilu and bcgs > > The matrix is from a 3D fluid problem, with complicated irregular > boundary conditions. I've tried using direct solvers such as UMFPACK, > SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my > linear system; UMFPACK solves the system but would run into memory issue > even with small size matrices and it cannot parallelize; MUMPS does solve > the system but it also fails when the size is big and it takes much time. > That's why I'm seeking an iterative method. > > I guess the direct method is faster than an iterative method for a small > A, but that may not be true for bigger A. > > > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com] > *Sent:* Wednesday, February 18, 2015 8:33 AM > *To:* Sun, Hui > *Cc:* hong at aspiritech.org; petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui wrote: > >> So far I just try around, I haven't looked into literature yet. >> >> However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that >> some parameter or options need to be tuned in using PETSc's ilu or hypre's >> ilu? Besides, is there a way to view how good the performance of the pc is >> and output the matrices L and U, so that I can do some test in MATLAB? >> > > 1) Its not clear exactly what Matlab is doing > > 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) > > 3) I don't know what Hypre's ILU can do > > I would really discourage from using ILU. I cannot imagine it is faster > than sparse direct factorization > for your system, such as from SuperLU or MUMPS. > > Thanks, > > Matt > > >> Hui >> >> >> ------------------------------ >> *From:* Matthew Knepley [knepley at gmail.com] >> *Sent:* Wednesday, February 18, 2015 8:09 AM >> *To:* Sun, Hui >> *Cc:* hong at aspiritech.org; petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui wrote: >> >>> Yes I've tried other solvers, gmres/ilu does not work, neither does >>> bcgs/ilu. 
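(Although the factors themselves cannot be written out, the amount of fill the factorization produced can at least be queried once the solver is set up. The helper below is only a sketch under that assumption; the function name is made up, and it expects a KSP whose preconditioner is PCLU or PCILU and on which KSPSetUp has already been called.)

#include <petscksp.h>

/* Report how much fill the (I)LU preconditioner generated. */
static PetscErrorCode ReportFactorFill(KSP ksp)
{
  PC             pc;
  Mat            F;
  MatInfo        info;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);      /* handle to the combined L and U factor */
  ierr = MatGetInfo(F, MAT_LOCAL, &info);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "factor nonzeros %g, fill ratio needed %g\n",
                     (double)info.nz_used, (double)info.fill_ratio_needed);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The same numbers already appear in the -ksp_view output shown earlier in this thread, so a helper like this is mainly useful when the comparison has to be done programmatically.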
Here are the options: >>> >>> -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 >>> -pc_factor_reuse_ordering -ksp_ty\ >>> >>> pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view >>> >> >> Note here that ILU(0) is an unreliable and generally crappy >> preconditioner. Have you looked in the >> literature for the kinds of preconditioners that are effective for your >> problem? >> >> Thanks, >> >> Matt >> >> >>> Here is the output: >>> >>> 0 KSP Residual norm 211292 >>> >>> 1 KSP Residual norm 13990.2 >>> >>> 2 KSP Residual norm 9870.08 >>> >>> 3 KSP Residual norm 9173.9 >>> >>> 4 KSP Residual norm 9121.94 >>> >>> 5 KSP Residual norm 7386.1 >>> >>> 6 KSP Residual norm 6222.55 >>> >>> 7 KSP Residual norm 7192.94 >>> >>> 8 KSP Residual norm 33964 >>> >>> 9 KSP Residual norm 33960.4 >>> >>> 10 KSP Residual norm 1068.54 >>> >>> KSP Object: 1 MPI processes >>> >>> type: bcgs >>> >>> maximum iterations=10, initial guess is zero >>> >>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>> >>> left preconditioning >>> >>> using PRECONDITIONED norm type for convergence test >>> >>> PC Object: 1 MPI processes >>> >>> type: ilu >>> >>> ILU: out-of-place factorization >>> >>> ILU: Reusing reordering from past factorization >>> >>> 0 levels of fill >>> >>> tolerance for zero pivot 2.22045e-14 >>> >>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >>> >>> matrix ordering: natural >>> >>> factor fill ratio given 1, needed 1 >>> >>> Factored matrix follows: >>> >>> Mat Object: 1 MPI processes >>> >>> type: seqaij >>> >>> rows=62500, cols=62500 >>> >>> package used to perform factorization: petsc >>> >>> total: nonzeros=473355, allocated nonzeros=473355 >>> >>> total number of mallocs used during MatSetValues calls =0 >>> >>> not using I-node routines >>> >>> linear system matrix = precond matrix: >>> >>> Mat Object: 1 MPI processes >>> >>> type: seqaij >>> >>> rows=62500, cols=62500 >>> >>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>> >>> total number of mallocs used during MatSetValues calls =0 >>> >>> not using I-node routines >>> >>> Time cost: 0.307149, 0.268402, 0.0990018 >>> >>> >>> >>> >>> ------------------------------ >>> *From:* hong at aspiritech.org [hong at aspiritech.org] >>> *Sent:* Wednesday, February 18, 2015 7:49 AM >>> *To:* Sun, Hui >>> *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov >>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>> >>> Have you tried other solvers, e.g., PETSc default gmres/ilu, >>> bcgs/ilu etc. >>> The matrix is small. If it is ill-conditioned, then pc_type lu would >>> work the best. 
>>> >>> Hong >>> >>> On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui wrote: >>> >>>> With options: >>>> >>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 >>>> -ksp_monitor_short -ksp_converged_reason -ksp_view >>>> >>>> Here is the full output: >>>> >>>> 0 KSP Residual norm 1404.62 >>>> >>>> 1 KSP Residual norm 88.9068 >>>> >>>> 2 KSP Residual norm 64.73 >>>> >>>> 3 KSP Residual norm 71.0224 >>>> >>>> 4 KSP Residual norm 69.5044 >>>> >>>> 5 KSP Residual norm 455.458 >>>> >>>> 6 KSP Residual norm 174.876 >>>> >>>> 7 KSP Residual norm 183.031 >>>> >>>> 8 KSP Residual norm 650.675 >>>> >>>> 9 KSP Residual norm 79.2441 >>>> >>>> 10 KSP Residual norm 84.1985 >>>> >>>> Linear solve did not converge due to DIVERGED_ITS iterations 10 >>>> >>>> KSP Object: 1 MPI processes >>>> >>>> type: bcgs >>>> >>>> maximum iterations=10, initial guess is zero >>>> >>>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>>> >>>> left preconditioning >>>> >>>> using PRECONDITIONED norm type for convergence test >>>> >>>> PC Object: 1 MPI processes >>>> >>>> type: hypre >>>> >>>> HYPRE Pilut preconditioning >>>> >>>> HYPRE Pilut: maximum number of iterations 1000 >>>> >>>> HYPRE Pilut: drop tolerance 0.001 >>>> >>>> HYPRE Pilut: default factor row size >>>> >>>> linear system matrix = precond matrix: >>>> >>>> Mat Object: 1 MPI processes >>>> >>>> type: seqaij >>>> >>>> rows=62500, cols=62500 >>>> >>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>> >>>> total number of mallocs used during MatSetValues calls =0 >>>> >>>> not using I-node routines >>>> >>>> Time cost: 0.756198, 0.662984, 0.105672 >>>> >>>> >>>> >>>> >>>> ------------------------------ >>>> *From:* Matthew Knepley [knepley at gmail.com] >>>> *Sent:* Wednesday, February 18, 2015 3:30 AM >>>> *To:* Sun, Hui >>>> *Cc:* petsc-users at mcs.anl.gov >>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>> >>>> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui wrote: >>>> >>>>> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >>>>> depending on the number of cores. >>>>> >>>>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it >>>>> does not converge. The options I specify are: >>>>> >>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>>>> >>>>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>>>> -ksp_converged_reason >>>>> >>>> >>>> 1) Run with -ksp_view, so we can see exactly what was used >>>> >>>> 2) ILUT is unfortunately not a well-defined algorithm, and I believe >>>> the parallel version makes different decisions >>>> than the serial version. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> The first a few lines of the output are: >>>>> >>>>> 0 KSP Residual norm 1404.62 >>>>> >>>>> 1 KSP Residual norm 88.9068 >>>>> >>>>> 2 KSP Residual norm 64.73 >>>>> >>>>> 3 KSP Residual norm 71.0224 >>>>> >>>>> 4 KSP Residual norm 69.5044 >>>>> >>>>> 5 KSP Residual norm 455.458 >>>>> >>>>> 6 KSP Residual norm 174.876 >>>>> >>>>> 7 KSP Residual norm 183.031 >>>>> >>>>> 8 KSP Residual norm 650.675 >>>>> >>>>> 9 KSP Residual norm 79.2441 >>>>> >>>>> 10 KSP Residual norm 84.1985 >>>>> >>>>> >>>>> This clearly indicates non-convergence. 
However, I output the sparse >>>>> matrix A and vector b to MATLAB, and run the following command: >>>>> >>>>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>>>> >>>>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>>>> >>>>> >>>>> And it converges in MATLAB, with flag fl1=0, relative residue >>>>> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >>>>> what's wrong. >>>>> >>>>> >>>>> Best, >>>>> >>>>> Hui >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 18 11:10:20 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 17:10:20 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E89DA@XMAIL-MBX-BH1.AD.UCSD.EDU> I tried fieldsplitting several months ago, it didn't work due to the complicated coupled irregular bdry conditions. So I tried direct solver and now I modified the PDE system a little bit so that the ILU/bcgs works in MATLAB. But thank you for the suggestions, although I doubt it would work, maybe I will still try fieldsplitting with my new system. ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:54 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:47 AM, Sun, Hui > wrote: The matrix is from a 3D fluid problem, with complicated irregular boundary conditions. I've tried using direct solvers such as UMFPACK, SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my linear system; UMFPACK solves the system but would run into memory issue even with small size matrices and it cannot parallelize; MUMPS does solve the system but it also fails when the size is big and it takes much time. That's why I'm seeking an iterative method. I guess the direct method is faster than an iterative method for a small A, but that may not be true for bigger A. If this is a Stokes flow, you should use PCFIELDSPLIT and multigrid. If it is advection dominated, I know of nothing better than sparse direct or perhaps Block-Jacobi with sparse direct blocks. Since MUMPS solved your system, I would consider using BJacobi/ASM and MUMPS or UMFPACK as the block solver. 
Thanks, Matt ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:33 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui > wrote: So far I just try around, I haven't looked into literature yet. However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that some parameter or options need to be tuned in using PETSc's ilu or hypre's ilu? Besides, is there a way to view how good the performance of the pc is and output the matrices L and U, so that I can do some test in MATLAB? 1) Its not clear exactly what Matlab is doing 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) 3) I don't know what Hypre's ILU can do I would really discourage from using ILU. I cannot imagine it is faster than sparse direct factorization for your system, such as from SuperLU or MUMPS. Thanks, Matt Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:09 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui > wrote: Yes I've tried other solvers, gmres/ilu does not work, neither does bcgs/ilu. Here are the options: -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 -pc_factor_reuse_ordering -ksp_ty\ pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view Note here that ILU(0) is an unreliable and generally crappy preconditioner. Have you looked in the literature for the kinds of preconditioners that are effective for your problem? Thanks, Matt Here is the output: 0 KSP Residual norm 211292 1 KSP Residual norm 13990.2 2 KSP Residual norm 9870.08 3 KSP Residual norm 9173.9 4 KSP Residual norm 9121.94 5 KSP Residual norm 7386.1 6 KSP Residual norm 6222.55 7 KSP Residual norm 7192.94 8 KSP Residual norm 33964 9 KSP Residual norm 33960.4 10 KSP Residual norm 1068.54 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu ILU: out-of-place factorization ILU: Reusing reordering from past factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 package used to perform factorization: petsc total: nonzeros=473355, allocated nonzeros=473355 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.307149, 0.268402, 0.0990018 ________________________________ From: hong at aspiritech.org [hong at aspiritech.org] Sent: Wednesday, February 18, 2015 7:49 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. 
Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui > wrote: With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Wed Feb 18 12:00:50 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Wed, 18 Feb 2015 19:00:50 +0100 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E89DA@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89DA@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: Fieldsplit will not work if you just set pc_type fieldsplit and you have an operator with a block size if 1. In this case, you will need to define the splits using index sets. I cannot believe that defining all the v and p dofs is really hard. Certainly it is far easier than trying to understand the difference between the petsc, matlab and the hypre implementations of ilut. Even if you did happen to find one implemtation of ilu you were "happy" with, as soon as you refine the mesh a couple of times the iterations will increase. I second Matt's opinion - forget about ilu and focus time on trying to make fieldsplit work. Fieldsplit will generate spectrally equivalent operators of your flow problem, ilu won't Cheers Dave On Wednesday, 18 February 2015, Sun, Hui wrote: > I tried fieldsplitting several months ago, it didn't work due to the > complicated coupled irregular bdry conditions. So I tried direct solver and > now I modified the PDE system a little bit so that the ILU/bcgs works in > MATLAB. But thank you for the suggestions, although I doubt it would work, > maybe I will still try fieldsplitting with my new system. > > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com > ] > *Sent:* Wednesday, February 18, 2015 8:54 AM > *To:* Sun, Hui > *Cc:* hong at aspiritech.org > ; > petsc-users at mcs.anl.gov > > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > On Wed, Feb 18, 2015 at 10:47 AM, Sun, Hui > wrote: > >> The matrix is from a 3D fluid problem, with complicated irregular >> boundary conditions. I've tried using direct solvers such as UMFPACK, >> SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my >> linear system; UMFPACK solves the system but would run into memory issue >> even with small size matrices and it cannot parallelize; MUMPS does solve >> the system but it also fails when the size is big and it takes much time. >> That's why I'm seeking an iterative method. >> >> I guess the direct method is faster than an iterative method for a >> small A, but that may not be true for bigger A. >> > > If this is a Stokes flow, you should use PCFIELDSPLIT and multigrid. If > it is advection dominated, I know of nothing better > than sparse direct or perhaps Block-Jacobi with sparse direct blocks. > Since MUMPS solved your system, I would consider > using BJacobi/ASM and MUMPS or UMFPACK as the block solver. 
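Defining the velocity/pressure splits through index sets, as described above, looks roughly like the sketch below. It is a minimal sketch only: ksp is assumed to exist with its operator already set, and isu/isp are assumed to be IS objects holding the global row indices of the u and p unknowns, which the application has to build from its own numbering (for example with ISCreateStride() or ISCreateGeneral()).

/* minimal sketch: attach u/p splits to a PCFIELDSPLIT preconditioner */
PC pc;
KSPGetPC(ksp,&pc);
PCSetType(pc,PCFIELDSPLIT);
PCFieldSplitSetType(pc,PC_COMPOSITE_SCHUR);
PCFieldSplitSetIS(pc,"0",isu);  /* this sub-solver gets the fieldsplit_0_ prefix */
PCFieldSplitSetIS(pc,"1",isp);  /* this sub-solver gets the fieldsplit_1_ prefix */
KSPSetFromOptions(ksp);         /* remaining choices come from the command line */

With the splits named "0" and "1", the sub-solvers are then steered with the -fieldsplit_0_* and -fieldsplit_1_* options that appear elsewhere in this thread.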
> > Thanks, > > Matt > > >> >> ------------------------------ >> *From:* Matthew Knepley [knepley at gmail.com >> ] >> *Sent:* Wednesday, February 18, 2015 8:33 AM >> *To:* Sun, Hui >> *Cc:* hong at aspiritech.org >> ; >> petsc-users at mcs.anl.gov >> >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui > > wrote: >> >>> So far I just try around, I haven't looked into literature yet. >>> >>> However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible >>> that some parameter or options need to be tuned in using PETSc's ilu or >>> hypre's ilu? Besides, is there a way to view how good the performance of >>> the pc is and output the matrices L and U, so that I can do some test in >>> MATLAB? >>> >> >> 1) Its not clear exactly what Matlab is doing >> >> 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) >> >> 3) I don't know what Hypre's ILU can do >> >> I would really discourage from using ILU. I cannot imagine it is faster >> than sparse direct factorization >> for your system, such as from SuperLU or MUMPS. >> >> Thanks, >> >> Matt >> >> >>> Hui >>> >>> >>> ------------------------------ >>> *From:* Matthew Knepley [knepley at gmail.com >>> ] >>> *Sent:* Wednesday, February 18, 2015 8:09 AM >>> *To:* Sun, Hui >>> *Cc:* hong at aspiritech.org >>> ; >>> petsc-users at mcs.anl.gov >>> >>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>> >>> On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui >> > wrote: >>> >>>> Yes I've tried other solvers, gmres/ilu does not work, neither does >>>> bcgs/ilu. Here are the options: >>>> >>>> -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 >>>> -pc_factor_reuse_ordering -ksp_ty\ >>>> >>>> pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view >>>> >>> >>> Note here that ILU(0) is an unreliable and generally crappy >>> preconditioner. Have you looked in the >>> literature for the kinds of preconditioners that are effective for your >>> problem? 
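For completeness, if one still wants to experiment with PETSc's native ILU before abandoning it, the fill level can be raised from the default ILU(0) with options along these lines (a sketch only, and it does not change the general advice above):

-ksp_type bcgs -pc_type ilu -pc_factor_levels 2 -pc_factor_fill 5 -pc_factor_nonzeros_along_diagonal -ksp_monitor_true_residual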
>>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Here is the output: >>>> >>>> 0 KSP Residual norm 211292 >>>> >>>> 1 KSP Residual norm 13990.2 >>>> >>>> 2 KSP Residual norm 9870.08 >>>> >>>> 3 KSP Residual norm 9173.9 >>>> >>>> 4 KSP Residual norm 9121.94 >>>> >>>> 5 KSP Residual norm 7386.1 >>>> >>>> 6 KSP Residual norm 6222.55 >>>> >>>> 7 KSP Residual norm 7192.94 >>>> >>>> 8 KSP Residual norm 33964 >>>> >>>> 9 KSP Residual norm 33960.4 >>>> >>>> 10 KSP Residual norm 1068.54 >>>> >>>> KSP Object: 1 MPI processes >>>> >>>> type: bcgs >>>> >>>> maximum iterations=10, initial guess is zero >>>> >>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>> >>>> left preconditioning >>>> >>>> using PRECONDITIONED norm type for convergence test >>>> >>>> PC Object: 1 MPI processes >>>> >>>> type: ilu >>>> >>>> ILU: out-of-place factorization >>>> >>>> ILU: Reusing reordering from past factorization >>>> >>>> 0 levels of fill >>>> >>>> tolerance for zero pivot 2.22045e-14 >>>> >>>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >>>> >>>> matrix ordering: natural >>>> >>>> factor fill ratio given 1, needed 1 >>>> >>>> Factored matrix follows: >>>> >>>> Mat Object: 1 MPI processes >>>> >>>> type: seqaij >>>> >>>> rows=62500, cols=62500 >>>> >>>> package used to perform factorization: petsc >>>> >>>> total: nonzeros=473355, allocated nonzeros=473355 >>>> >>>> total number of mallocs used during MatSetValues calls =0 >>>> >>>> not using I-node routines >>>> >>>> linear system matrix = precond matrix: >>>> >>>> Mat Object: 1 MPI processes >>>> >>>> type: seqaij >>>> >>>> rows=62500, cols=62500 >>>> >>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>> >>>> total number of mallocs used during MatSetValues calls =0 >>>> >>>> not using I-node routines >>>> >>>> Time cost: 0.307149, 0.268402, 0.0990018 >>>> >>>> >>>> >>>> >>>> ------------------------------ >>>> *From:* hong at aspiritech.org >>>> [ >>>> hong at aspiritech.org >>>> ] >>>> *Sent:* Wednesday, February 18, 2015 7:49 AM >>>> *To:* Sun, Hui >>>> *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov >>>> >>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>> >>>> Have you tried other solvers, e.g., PETSc default gmres/ilu, >>>> bcgs/ilu etc. >>>> The matrix is small. If it is ill-conditioned, then pc_type lu would >>>> work the best. 
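Spelled out, that direct-solve suggestion is roughly the following (a sketch; the parallel case assumes PETSc was configured with MUMPS or SuperLU_DIST):

-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps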
>>>> >>>> Hong >>>> >>>> On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui >>> > wrote: >>>> >>>>> With options: >>>>> >>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it >>>>> 10 -ksp_monitor_short -ksp_converged_reason -ksp_view >>>>> >>>>> Here is the full output: >>>>> >>>>> 0 KSP Residual norm 1404.62 >>>>> >>>>> 1 KSP Residual norm 88.9068 >>>>> >>>>> 2 KSP Residual norm 64.73 >>>>> >>>>> 3 KSP Residual norm 71.0224 >>>>> >>>>> 4 KSP Residual norm 69.5044 >>>>> >>>>> 5 KSP Residual norm 455.458 >>>>> >>>>> 6 KSP Residual norm 174.876 >>>>> >>>>> 7 KSP Residual norm 183.031 >>>>> >>>>> 8 KSP Residual norm 650.675 >>>>> >>>>> 9 KSP Residual norm 79.2441 >>>>> >>>>> 10 KSP Residual norm 84.1985 >>>>> >>>>> Linear solve did not converge due to DIVERGED_ITS iterations 10 >>>>> >>>>> KSP Object: 1 MPI processes >>>>> >>>>> type: bcgs >>>>> >>>>> maximum iterations=10, initial guess is zero >>>>> >>>>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>>>> >>>>> left preconditioning >>>>> >>>>> using PRECONDITIONED norm type for convergence test >>>>> >>>>> PC Object: 1 MPI processes >>>>> >>>>> type: hypre >>>>> >>>>> HYPRE Pilut preconditioning >>>>> >>>>> HYPRE Pilut: maximum number of iterations 1000 >>>>> >>>>> HYPRE Pilut: drop tolerance 0.001 >>>>> >>>>> HYPRE Pilut: default factor row size >>>>> >>>>> linear system matrix = precond matrix: >>>>> >>>>> Mat Object: 1 MPI processes >>>>> >>>>> type: seqaij >>>>> >>>>> rows=62500, cols=62500 >>>>> >>>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>>> >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> >>>>> not using I-node routines >>>>> >>>>> Time cost: 0.756198, 0.662984, 0.105672 >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------ >>>>> *From:* Matthew Knepley [knepley at gmail.com >>>>> ] >>>>> *Sent:* Wednesday, February 18, 2015 3:30 AM >>>>> *To:* Sun, Hui >>>>> *Cc:* petsc-users at mcs.anl.gov >>>>> >>>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>>> >>>>> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui >>>> > wrote: >>>>> >>>>>> I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, >>>>>> depending on the number of cores. >>>>>> >>>>>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it >>>>>> does not converge. The options I specify are: >>>>>> >>>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>>>>> >>>>>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>>>>> -ksp_converged_reason >>>>>> >>>>> >>>>> 1) Run with -ksp_view, so we can see exactly what was used >>>>> >>>>> 2) ILUT is unfortunately not a well-defined algorithm, and I believe >>>>> the parallel version makes different decisions >>>>> than the serial version. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> The first a few lines of the output are: >>>>>> >>>>>> 0 KSP Residual norm 1404.62 >>>>>> >>>>>> 1 KSP Residual norm 88.9068 >>>>>> >>>>>> 2 KSP Residual norm 64.73 >>>>>> >>>>>> 3 KSP Residual norm 71.0224 >>>>>> >>>>>> 4 KSP Residual norm 69.5044 >>>>>> >>>>>> 5 KSP Residual norm 455.458 >>>>>> >>>>>> 6 KSP Residual norm 174.876 >>>>>> >>>>>> 7 KSP Residual norm 183.031 >>>>>> >>>>>> 8 KSP Residual norm 650.675 >>>>>> >>>>>> 9 KSP Residual norm 79.2441 >>>>>> >>>>>> 10 KSP Residual norm 84.1985 >>>>>> >>>>>> >>>>>> This clearly indicates non-convergence. 
However, I output the >>>>>> sparse matrix A and vector b to MATLAB, and run the following command: >>>>>> >>>>>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>>>>> >>>>>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>>>>> >>>>>> >>>>>> And it converges in MATLAB, with flag fl1=0, relative residue >>>>>> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >>>>>> what's wrong. >>>>>> >>>>>> >>>>>> Best, >>>>>> >>>>>> Hui >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 18 12:20:36 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 18 Feb 2015 12:20:36 -0600 Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions In-Reply-To: <1424186090.3298.2.camel@gmail.com> References: <1424186090.3298.2.camel@gmail.com> Message-ID: <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov> Fabian, CG requires that the preconditioner be symmetric positive definite. ICC even if given a symmetric positive definite matrix can generate an indefinite preconditioner. Similarly if an algebraic multigrid application is not "strong enough" it can also result in a preconditioner that is indefinite. You never want to use ICC for pressure type problems it cannot compete with multigrid for large problems so let's forget about ICC and focus on the GAMG. > -pressure_mg_coarse_sub_pc_type svd > -pressure_mg_levels_ksp_rtol 1e-4 > -pressure_mg_levels_ksp_type richardson > -pressure_mg_levels_pc_type sor > -pressure_pc_gamg_agg_nsmooths 1 > -pressure_pc_type gamg There are many many tuning parameters for MG. First, is your pressure problem changing dramatically at each new solver? That is, for example, is the mesh moving or are there very different numerical values in the matrix? Is the nonzero structure of the pressure matrix changing? Currently the entire GAMG process is done for each new solve, if you use the flag -pressure_pc_gamg_reuse_interpolation true it will create the interpolation needed for GAMG once and reuse it for all the solves. Please try that and see what happens. Then I will have many more suggestions. Barry > On Feb 17, 2015, at 9:14 AM, Fabian Gabel wrote: > > Dear PETSc team, > > I am trying to optimize the solver parameters for the linear system I > get, when I discretize the pressure correction equation Poisson equation > with Neumann boundary conditions) in a SIMPLE-type algorithm using a > finite volume method. > > The resulting system is symmetric and positive semi-definite. A basis to > the associated nullspace has been provided to the KSP object. 
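For a pressure Poisson problem with pure Neumann boundary conditions that nullspace is just the constant vector, and attaching it typically looks like the short sketch below. This is a sketch only: ksp and the matrix A are assumed to exist already, KSPSetNullSpace() is the interface in the PETSc 3.5 series, and later releases attach the nullspace to the matrix instead.

MatNullSpace nsp;
MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nsp); /* constant vector only */
KSPSetNullSpace(ksp,nsp);   /* or MatSetNullSpace(A,nsp) in newer releases */
MatNullSpaceDestroy(&nsp);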
> > Using a CG solver with ICC preconditioning the solver needs a lot of > inner iterations to converge (-ksp_monitor -ksp_view output attached for > a case with approx. 2e6 unknowns; the lines beginning with 000XXXX show > the relative residual regarding the initial residual in the outer > iteration no. 1 for the variables u,v,w,p). Furthermore I don't quite > understand, why the solver reports > > Linear solve did not converge due to DIVERGED_INDEFINITE_PC > > at the later stages of my Picard iteration process (iteration 0001519). > > I then tried out CG+GAMG preconditioning with success regarding the > number of inner iterations, but without advantages regarding wall time > (output attached). Also the DIVERGED_INDEFINITE_PC reason shows up > repeatedly after iteration 0001487. I used the following options > > -pressure_mg_coarse_sub_pc_type svd > -pressure_mg_levels_ksp_rtol 1e-4 > -pressure_mg_levels_ksp_type richardson > -pressure_mg_levels_pc_type sor > -pressure_pc_gamg_agg_nsmooths 1 > -pressure_pc_type gamg > > I would like to get an opinion on how the solver performance could be > increased further. -log_summary shows that my code spends 80% of the > time solving the linear systems for the pressure correction (STAGE 2: > PRESSCORR). Furthermore, do you know what could be causing the > DIVERGED_INDEFINITE_PC converged reason? > > Regards, > Fabian Gabel > From hus003 at ucsd.edu Wed Feb 18 12:51:58 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 18 Feb 2015 18:51:58 +0000 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89DA@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E89FC@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Dave. In fact I have tried fieldsplit several months ago, and today I go back to the previous code and ran it again. How can I tell it is doing what I want it to do? Here are the options: -pc_type fieldsplit -fieldsplit_0_pc_type jacobi -fieldsplit_1_pc_type jacobi -pc_fieldsplit_type SC\ HUR -ksp_monitor_short -ksp_converged_reason -ksp_rtol 1e-4 -fieldsplit_1_ksp_rtol 1e-2 -fieldsplit_0_ksp_rtol 1e-4 -fieldsplit_1_ksp_max_it 10 -fieldsplit_0_ksp_max_it 10 -ksp_type fgmres -ksp_max_it 10 -ksp_view And here is the output: Starting... 
0 KSP Residual norm 17.314 1 KSP Residual norm 10.8324 2 KSP Residual norm 10.8312 3 KSP Residual norm 10.7726 4 KSP Residual norm 10.7642 5 KSP Residual norm 10.7634 6 KSP Residual norm 10.7399 7 KSP Residual norm 10.7159 8 KSP Residual norm 10.6602 9 KSP Residual norm 10.5756 10 KSP Residual norm 10.5224 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: fgmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10, initial guess is zero tolerances: relative=0.0001, absolute=1e-50, divergence=10000 right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: fieldsplit FieldSplit with Schur preconditioner, factorization FULL Preconditioner for the Schur complement formed from A11 Split info: Split number 0 Defined by IS Split number 1 Defined by IS KSP solver for A00 block KSP Object: (fieldsplit_0_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10, initial guess is zero tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (fieldsplit_0_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: (fieldsplit_0_) 1 MPI processes type: mpiaij rows=20000, cols=20000 total: nonzeros=85580, allocated nonzeros=760000 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines KSP solver for S = A11 - A10 inv(A00) A01 KSP Object: (fieldsplit_1_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10, initial guess is zero tolerances: relative=0.01, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (fieldsplit_1_) 1 MPI processes type: jacobi linear system matrix followed by preconditioner matrix: Mat Object: (fieldsplit_1_) 1 MPI processes type: schurcomplement rows=10000, cols=10000 Schur complement A11 - A10 inv(A00) A01 A11 Mat Object: (fieldsplit_1_) 1 MPI processes type: mpiaij rows=10000, cols=10000 total: nonzeros=2110, allocated nonzeros=80000 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 3739 nodes, limit used is 5 A10 Mat Object: (a10_) 1 MPI processes type: mpiaij rows=10000, cols=20000 total: nonzeros=31560, allocated nonzeros=80000 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines KSP of A00 KSP Object: (fieldsplit_0_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10, initial guess is zero tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (fieldsplit_0_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Mat Object: (fieldsplit_0_) 1 MPI processes type: mpiaij rows=20000, cols=20000 total: nonzeros=85580, allocated nonzeros=760000 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 
0) routines A01 Mat Object: (a01_) 1 MPI processes type: mpiaij rows=20000, cols=10000 total: nonzeros=32732, allocated nonzeros=240000 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Mat Object: (fieldsplit_1_) 1 MPI processes type: mpiaij rows=10000, cols=10000 total: nonzeros=2110, allocated nonzeros=80000 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 3739 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: nest rows=30000, cols=30000 Matrix object: type=nest, rows=2, cols=2 MatNest structure: (0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=20000, cols=20000 (0,1) : prefix="a01_", type=mpiaij, rows=20000, cols=10000 (1,0) : prefix="a10_", type=mpiaij, rows=10000, cols=20000 (1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=10000, cols=10000 residual u = 10.3528 residual p = 1.88199 residual [u,p] = 10.5224 L^2 discretization error u = 0.698386 L^2 discretization error p = 1.0418 L^2 discretization error [u,p] = 1.25423 number of processors = 1 0 Time cost for creating solver context 0.100217 s, and for solving 3.78879 s, and for printing 0.0908558 s. ________________________________ From: Dave May [dave.mayhem23 at gmail.com] Sent: Wednesday, February 18, 2015 10:00 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov; hong at aspiritech.org Subject: Re: [petsc-users] Question concerning ilu and bcgs Fieldsplit will not work if you just set pc_type fieldsplit and you have an operator with a block size if 1. In this case, you will need to define the splits using index sets. I cannot believe that defining all the v and p dofs is really hard. Certainly it is far easier than trying to understand the difference between the petsc, matlab and the hypre implementations of ilut. Even if you did happen to find one implemtation of ilu you were "happy" with, as soon as you refine the mesh a couple of times the iterations will increase. I second Matt's opinion - forget about ilu and focus time on trying to make fieldsplit work. Fieldsplit will generate spectrally equivalent operators of your flow problem, ilu won't Cheers Dave On Wednesday, 18 February 2015, Sun, Hui > wrote: I tried fieldsplitting several months ago, it didn't work due to the complicated coupled irregular bdry conditions. So I tried direct solver and now I modified the PDE system a little bit so that the ILU/bcgs works in MATLAB. But thank you for the suggestions, although I doubt it would work, maybe I will still try fieldsplitting with my new system. ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:54 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:47 AM, Sun, Hui > wrote: The matrix is from a 3D fluid problem, with complicated irregular boundary conditions. I've tried using direct solvers such as UMFPACK, SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my linear system; UMFPACK solves the system but would run into memory issue even with small size matrices and it cannot parallelize; MUMPS does solve the system but it also fails when the size is big and it takes much time. That's why I'm seeking an iterative method. I guess the direct method is faster than an iterative method for a small A, but that may not be true for bigger A. 
If this is a Stokes flow, you should use PCFIELDSPLIT and multigrid. If it is advection dominated, I know of nothing better than sparse direct or perhaps Block-Jacobi with sparse direct blocks. Since MUMPS solved your system, I would consider using BJacobi/ASM and MUMPS or UMFPACK as the block solver. Thanks, Matt ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:33 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui > wrote: So far I just try around, I haven't looked into literature yet. However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible that some parameter or options need to be tuned in using PETSc's ilu or hypre's ilu? Besides, is there a way to view how good the performance of the pc is and output the matrices L and U, so that I can do some test in MATLAB? 1) Its not clear exactly what Matlab is doing 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) 3) I don't know what Hypre's ILU can do I would really discourage from using ILU. I cannot imagine it is faster than sparse direct factorization for your system, such as from SuperLU or MUMPS. Thanks, Matt Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 8:09 AM To: Sun, Hui Cc: hong at aspiritech.org; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui > wrote: Yes I've tried other solvers, gmres/ilu does not work, neither does bcgs/ilu. Here are the options: -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 -pc_factor_reuse_ordering -ksp_ty\ pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view Note here that ILU(0) is an unreliable and generally crappy preconditioner. Have you looked in the literature for the kinds of preconditioners that are effective for your problem? 
Thanks, Matt Here is the output: 0 KSP Residual norm 211292 1 KSP Residual norm 13990.2 2 KSP Residual norm 9870.08 3 KSP Residual norm 9173.9 4 KSP Residual norm 9121.94 5 KSP Residual norm 7386.1 6 KSP Residual norm 6222.55 7 KSP Residual norm 7192.94 8 KSP Residual norm 33964 9 KSP Residual norm 33960.4 10 KSP Residual norm 1068.54 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-06, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu ILU: out-of-place factorization ILU: Reusing reordering from past factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 package used to perform factorization: petsc total: nonzeros=473355, allocated nonzeros=473355 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.307149, 0.268402, 0.0990018 ________________________________ From: hong at aspiritech.org [hong at aspiritech.org] Sent: Wednesday, February 18, 2015 7:49 AM To: Sun, Hui Cc: Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs Have you tried other solvers, e.g., PETSc default gmres/ilu, bcgs/ilu etc. The matrix is small. If it is ill-conditioned, then pc_type lu would work the best. Hong On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui > wrote: With options: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it 10 -ksp_monitor_short -ksp_converged_reason -ksp_view Here is the full output: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 Linear solve did not converge due to DIVERGED_ITS iterations 10 KSP Object: 1 MPI processes type: bcgs maximum iterations=10, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1000 HYPRE Pilut: drop tolerance 0.001 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=62500, cols=62500 total: nonzeros=473355, allocated nonzeros=7.8125e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Time cost: 0.756198, 0.662984, 0.105672 ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 18, 2015 3:30 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question concerning ilu and bcgs On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui > wrote: I have a matrix system Ax = b, A is of type MatSeqAIJ or MatMPIAIJ, depending on the number of cores. 
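As an aside on the matrix types mentioned here: getting a MatSeqAIJ on one process and a MatMPIAIJ on several from the same code is exactly what MatCreateAIJ() does. A minimal sketch, where nlocal, N and the preallocation counts are placeholders rather than values from the actual application:

Mat A;
/* d_nz=7 and o_nz=3 are rough per-row preallocation guesses for the
   diagonal and off-diagonal blocks; tighten them for the real stencil */
MatCreateAIJ(PETSC_COMM_WORLD,nlocal,nlocal,N,N,7,NULL,3,NULL,&A);
/* ... MatSetValues() loop, then MatAssemblyBegin()/MatAssemblyEnd() ... */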
I try to solve this problem by pc_type ilu and ksp_type bcgs, it does not converge. The options I specify are: -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 -pc_hypre_pilut_tol 1e-3 -ksp_type b\ cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short -ksp_converged_reason 1) Run with -ksp_view, so we can see exactly what was used 2) ILUT is unfortunately not a well-defined algorithm, and I believe the parallel version makes different decisions than the serial version. Thanks, Matt The first a few lines of the output are: 0 KSP Residual norm 1404.62 1 KSP Residual norm 88.9068 2 KSP Residual norm 64.73 3 KSP Residual norm 71.0224 4 KSP Residual norm 69.5044 5 KSP Residual norm 455.458 6 KSP Residual norm 174.876 7 KSP Residual norm 183.031 8 KSP Residual norm 650.675 9 KSP Residual norm 79.2441 10 KSP Residual norm 84.1985 This clearly indicates non-convergence. However, I output the sparse matrix A and vector b to MATLAB, and run the following command: [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); And it converges in MATLAB, with flag fl1=0, relative residue rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out what's wrong. Best, Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 18 12:56:46 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Feb 2015 12:56:46 -0600 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E89FC@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89DA@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89FC@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 18, 2015 at 12:51 PM, Sun, Hui wrote: > Thank you Dave. In fact I have tried fieldsplit several months ago, and > today I go back to the previous code and ran it again. How can I tell it is > doing what I want it to do? Here are the options: > Solver performance is all about tradeoffs, so start from a place where you completely understand things, and then make small changes. PCFIELDSPLIT is an exact solver for a saddle point system using -pc_type fieldsplit -pc_fieldsplit_factorization_type full -fieldsplit_0_pc_type lu -fieldsplit_1_pc_type jacobi -fieldsplit_1_ksp_rtol 1e-9 That should converge in one iteration. Then you can look at the time and see whether the A block or the S block is expensive. 
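Written out with the Schur-specific option names, that experiment looks roughly like this (a sketch: option names have drifted a little between PETSc versions, so -help on your build is the authority, and the direct solve on the 0-block assumes the velocity block is small enough to factor):

-ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type full -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu -fieldsplit_1_ksp_rtol 1e-9 -fieldsplit_1_pc_type jacobi -ksp_monitor_true_residual -ksp_converged_reason

Once that baseline converges in a couple of outer iterations, the relaxations listed below (AMG instead of LU on the 0-block, a better Schur preconditioner than Jacobi, an upper factorization) can be introduced one at a time while watching the iteration counts.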
You can substitute AMG for LU for A. You can find a better preconditioner than Jacobi for the S block. You can use upper factorization instead of full. etc. Thanks, Matt > -pc_type fieldsplit -fieldsplit_0_pc_type jacobi -fieldsplit_1_pc_type > jacobi -pc_fieldsplit_type SC\ > > HUR -ksp_monitor_short -ksp_converged_reason -ksp_rtol 1e-4 > -fieldsplit_1_ksp_rtol 1e-2 -fieldsplit_0_ksp_rtol 1e-4 > -fieldsplit_1_ksp_max_it 10 -fieldsplit_0_ksp_max_it 10 -ksp_type fgmres > -ksp_max_it 10 -ksp_view > > And here is the output: > > Starting... > > 0 KSP Residual norm 17.314 > > 1 KSP Residual norm 10.8324 > > 2 KSP Residual norm 10.8312 > > 3 KSP Residual norm 10.7726 > > 4 KSP Residual norm 10.7642 > > 5 KSP Residual norm 10.7634 > > 6 KSP Residual norm 10.7399 > > 7 KSP Residual norm 10.7159 > > 8 KSP Residual norm 10.6602 > > 9 KSP Residual norm 10.5756 > > 10 KSP Residual norm 10.5224 > > Linear solve did not converge due to DIVERGED_ITS iterations 10 > > KSP Object: 1 MPI processes > > type: fgmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.0001, absolute=1e-50, divergence=10000 > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: fieldsplit > > FieldSplit with Schur preconditioner, factorization FULL > > Preconditioner for the Schur complement formed from A11 > > Split info: > > Split number 0 Defined by IS > > Split number 1 Defined by IS > > KSP solver for A00 block > > KSP Object: (fieldsplit_0_) 1 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.0001, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_) 1 MPI processes > > type: jacobi > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_) 1 MPI processes > > type: mpiaij > > rows=20000, cols=20000 > > total: nonzeros=85580, allocated nonzeros=760000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > KSP solver for S = A11 - A10 inv(A00) A01 > > KSP Object: (fieldsplit_1_) 1 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.01, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_1_) 1 MPI processes > > type: jacobi > > linear system matrix followed by preconditioner matrix: > > Mat Object: (fieldsplit_1_) 1 MPI processes > > type: schurcomplement > > rows=10000, cols=10000 > > Schur complement A11 - A10 inv(A00) A01 > > A11 > > Mat Object: (fieldsplit_1_) 1 MPI > processes > > type: mpiaij > > rows=10000, cols=10000 > > total: nonzeros=2110, allocated nonzeros=80000 > > total number of mallocs used during MatSetValues calls =0 > > using I-node (on process 0) routines: found 3739 nodes, > limit used is 5 > > A10 > > Mat Object: (a10_) 1 MPI processes > > type: mpiaij > > rows=10000, cols=20000 > > total: 
nonzeros=31560, allocated nonzeros=80000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > KSP of A00 > > KSP Object: (fieldsplit_0_) 1 MPI > processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) > Gram-Schmidt Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.0001, absolute=1e-50, > divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_) 1 MPI > processes > > type: jacobi > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_) > 1 MPI processes > > type: mpiaij > > rows=20000, cols=20000 > > total: nonzeros=85580, allocated nonzeros=760000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > A01 > > Mat Object: (a01_) 1 MPI processes > > type: mpiaij > > rows=20000, cols=10000 > > total: nonzeros=32732, allocated nonzeros=240000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > Mat Object: (fieldsplit_1_) 1 MPI processes > > type: mpiaij > > rows=10000, cols=10000 > > total: nonzeros=2110, allocated nonzeros=80000 > > total number of mallocs used during MatSetValues calls =0 > > using I-node (on process 0) routines: found 3739 nodes, limit > used is 5 > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: nest > > rows=30000, cols=30000 > > Matrix object: > > type=nest, rows=2, cols=2 > > MatNest structure: > > (0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=20000, > cols=20000 > > (0,1) : prefix="a01_", type=mpiaij, rows=20000, cols=10000 > > (1,0) : prefix="a10_", type=mpiaij, rows=10000, cols=20000 > > (1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=10000, > cols=10000 > > residual u = 10.3528 > > residual p = 1.88199 > > residual [u,p] = 10.5224 > > L^2 discretization error u = 0.698386 > > L^2 discretization error p = 1.0418 > > L^2 discretization error [u,p] = 1.25423 > > number of processors = 1 0 > > Time cost for creating solver context 0.100217 s, and for solving 3.78879 > s, and for printing 0.0908558 s. > > > ------------------------------ > *From:* Dave May [dave.mayhem23 at gmail.com] > *Sent:* Wednesday, February 18, 2015 10:00 AM > *To:* Sun, Hui > *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov; hong at aspiritech.org > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > > Fieldsplit will not work if you just set pc_type fieldsplit and you have > an operator with a block size if 1. In this case, you will need to define > the splits using index sets. > > I cannot believe that defining all the v and p dofs is really hard. > Certainly it is far easier than trying to understand the difference between > the petsc, matlab and the hypre implementations of ilut. Even if you did > happen to find one implemtation of ilu you were "happy" with, as soon as > you refine the mesh a couple of times the iterations will increase. > > I second Matt's opinion - forget about ilu and focus time on trying > to make fieldsplit work. Fieldsplit will generate spectrally equivalent > operators of your flow problem, ilu won't > > Cheers > Dave > > > On Wednesday, 18 February 2015, Sun, Hui wrote: > >> I tried fieldsplitting several months ago, it didn't work due to the >> complicated coupled irregular bdry conditions. 
So I tried direct solver and >> now I modified the PDE system a little bit so that the ILU/bcgs works in >> MATLAB. But thank you for the suggestions, although I doubt it would work, >> maybe I will still try fieldsplitting with my new system. >> >> >> ------------------------------ >> *From:* Matthew Knepley [knepley at gmail.com ] >> *Sent:* Wednesday, February 18, 2015 8:54 AM >> *To:* Sun, Hui >> *Cc:* hong at aspiritech.org ; >> petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> On Wed, Feb 18, 2015 at 10:47 AM, Sun, Hui > > wrote: >> >>> The matrix is from a 3D fluid problem, with complicated irregular >>> boundary conditions. I've tried using direct solvers such as UMFPACK, >>> SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my >>> linear system; UMFPACK solves the system but would run into memory issue >>> even with small size matrices and it cannot parallelize; MUMPS does solve >>> the system but it also fails when the size is big and it takes much time. >>> That's why I'm seeking an iterative method. >>> >>> I guess the direct method is faster than an iterative method for a >>> small A, but that may not be true for bigger A. >>> >> >> If this is a Stokes flow, you should use PCFIELDSPLIT and multigrid. If >> it is advection dominated, I know of nothing better >> than sparse direct or perhaps Block-Jacobi with sparse direct blocks. >> Since MUMPS solved your system, I would consider >> using BJacobi/ASM and MUMPS or UMFPACK as the block solver. >> >> Thanks, >> >> Matt >> >> >>> >>> ------------------------------ >>> *From:* Matthew Knepley [knepley at gmail.com >>> ] >>> *Sent:* Wednesday, February 18, 2015 8:33 AM >>> *To:* Sun, Hui >>> *Cc:* hong at aspiritech.org ; >>> petsc-users at mcs.anl.gov >>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>> >>> On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui >> > wrote: >>> >>>> So far I just try around, I haven't looked into literature yet. >>>> >>>> However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible >>>> that some parameter or options need to be tuned in using PETSc's ilu or >>>> hypre's ilu? Besides, is there a way to view how good the performance of >>>> the pc is and output the matrices L and U, so that I can do some test in >>>> MATLAB? >>>> >>> >>> 1) Its not clear exactly what Matlab is doing >>> >>> 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) >>> >>> 3) I don't know what Hypre's ILU can do >>> >>> I would really discourage from using ILU. I cannot imagine it is >>> faster than sparse direct factorization >>> for your system, such as from SuperLU or MUMPS. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Hui >>>> >>>> >>>> ------------------------------ >>>> *From:* Matthew Knepley [knepley at gmail.com >>>> ] >>>> *Sent:* Wednesday, February 18, 2015 8:09 AM >>>> *To:* Sun, Hui >>>> *Cc:* hong at aspiritech.org ; >>>> petsc-users at mcs.anl.gov >>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>> >>>> On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui >>> > wrote: >>>> >>>>> Yes I've tried other solvers, gmres/ilu does not work, neither does >>>>> bcgs/ilu. Here are the options: >>>>> >>>>> -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 >>>>> -pc_factor_reuse_ordering -ksp_ty\ >>>>> >>>>> pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view >>>>> >>>> >>>> Note here that ILU(0) is an unreliable and generally crappy >>>> preconditioner. 
Have you looked in the >>>> literature for the kinds of preconditioners that are effective for your >>>> problem? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Here is the output: >>>>> >>>>> 0 KSP Residual norm 211292 >>>>> >>>>> 1 KSP Residual norm 13990.2 >>>>> >>>>> 2 KSP Residual norm 9870.08 >>>>> >>>>> 3 KSP Residual norm 9173.9 >>>>> >>>>> 4 KSP Residual norm 9121.94 >>>>> >>>>> 5 KSP Residual norm 7386.1 >>>>> >>>>> 6 KSP Residual norm 6222.55 >>>>> >>>>> 7 KSP Residual norm 7192.94 >>>>> >>>>> 8 KSP Residual norm 33964 >>>>> >>>>> 9 KSP Residual norm 33960.4 >>>>> >>>>> 10 KSP Residual norm 1068.54 >>>>> >>>>> KSP Object: 1 MPI processes >>>>> >>>>> type: bcgs >>>>> >>>>> maximum iterations=10, initial guess is zero >>>>> >>>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>>> >>>>> left preconditioning >>>>> >>>>> using PRECONDITIONED norm type for convergence test >>>>> >>>>> PC Object: 1 MPI processes >>>>> >>>>> type: ilu >>>>> >>>>> ILU: out-of-place factorization >>>>> >>>>> ILU: Reusing reordering from past factorization >>>>> >>>>> 0 levels of fill >>>>> >>>>> tolerance for zero pivot 2.22045e-14 >>>>> >>>>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >>>>> >>>>> matrix ordering: natural >>>>> >>>>> factor fill ratio given 1, needed 1 >>>>> >>>>> Factored matrix follows: >>>>> >>>>> Mat Object: 1 MPI processes >>>>> >>>>> type: seqaij >>>>> >>>>> rows=62500, cols=62500 >>>>> >>>>> package used to perform factorization: petsc >>>>> >>>>> total: nonzeros=473355, allocated nonzeros=473355 >>>>> >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> >>>>> not using I-node routines >>>>> >>>>> linear system matrix = precond matrix: >>>>> >>>>> Mat Object: 1 MPI processes >>>>> >>>>> type: seqaij >>>>> >>>>> rows=62500, cols=62500 >>>>> >>>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>>> >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> >>>>> not using I-node routines >>>>> >>>>> Time cost: 0.307149, 0.268402, 0.0990018 >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------ >>>>> *From:* hong at aspiritech.org [ >>>>> hong at aspiritech.org ] >>>>> *Sent:* Wednesday, February 18, 2015 7:49 AM >>>>> *To:* Sun, Hui >>>>> *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov >>>>> >>>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>>> >>>>> Have you tried other solvers, e.g., PETSc default gmres/ilu, >>>>> bcgs/ilu etc. >>>>> The matrix is small. If it is ill-conditioned, then pc_type lu would >>>>> work the best. 
>>>>> >>>>> Hong >>>>> >>>>> On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui >>>> > wrote: >>>>> >>>>>> With options: >>>>>> >>>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it >>>>>> 10 -ksp_monitor_short -ksp_converged_reason -ksp_view >>>>>> >>>>>> Here is the full output: >>>>>> >>>>>> 0 KSP Residual norm 1404.62 >>>>>> >>>>>> 1 KSP Residual norm 88.9068 >>>>>> >>>>>> 2 KSP Residual norm 64.73 >>>>>> >>>>>> 3 KSP Residual norm 71.0224 >>>>>> >>>>>> 4 KSP Residual norm 69.5044 >>>>>> >>>>>> 5 KSP Residual norm 455.458 >>>>>> >>>>>> 6 KSP Residual norm 174.876 >>>>>> >>>>>> 7 KSP Residual norm 183.031 >>>>>> >>>>>> 8 KSP Residual norm 650.675 >>>>>> >>>>>> 9 KSP Residual norm 79.2441 >>>>>> >>>>>> 10 KSP Residual norm 84.1985 >>>>>> >>>>>> Linear solve did not converge due to DIVERGED_ITS iterations 10 >>>>>> >>>>>> KSP Object: 1 MPI processes >>>>>> >>>>>> type: bcgs >>>>>> >>>>>> maximum iterations=10, initial guess is zero >>>>>> >>>>>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>>>>> >>>>>> left preconditioning >>>>>> >>>>>> using PRECONDITIONED norm type for convergence test >>>>>> >>>>>> PC Object: 1 MPI processes >>>>>> >>>>>> type: hypre >>>>>> >>>>>> HYPRE Pilut preconditioning >>>>>> >>>>>> HYPRE Pilut: maximum number of iterations 1000 >>>>>> >>>>>> HYPRE Pilut: drop tolerance 0.001 >>>>>> >>>>>> HYPRE Pilut: default factor row size >>>>>> >>>>>> linear system matrix = precond matrix: >>>>>> >>>>>> Mat Object: 1 MPI processes >>>>>> >>>>>> type: seqaij >>>>>> >>>>>> rows=62500, cols=62500 >>>>>> >>>>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>>>> >>>>>> total number of mallocs used during MatSetValues calls =0 >>>>>> >>>>>> not using I-node routines >>>>>> >>>>>> Time cost: 0.756198, 0.662984, 0.105672 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------ >>>>>> *From:* Matthew Knepley [knepley at gmail.com >>>>>> ] >>>>>> *Sent:* Wednesday, February 18, 2015 3:30 AM >>>>>> *To:* Sun, Hui >>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>>>> >>>>>> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui >>>>> > wrote: >>>>>> >>>>>>> I have a matrix system Ax = b, A is of type MatSeqAIJ or >>>>>>> MatMPIAIJ, depending on the number of cores. >>>>>>> >>>>>>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it >>>>>>> does not converge. The options I specify are: >>>>>>> >>>>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>>>>>> >>>>>>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>>>>>> -ksp_converged_reason >>>>>>> >>>>>> >>>>>> 1) Run with -ksp_view, so we can see exactly what was used >>>>>> >>>>>> 2) ILUT is unfortunately not a well-defined algorithm, and I >>>>>> believe the parallel version makes different decisions >>>>>> than the serial version. 
>>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> The first a few lines of the output are: >>>>>>> >>>>>>> 0 KSP Residual norm 1404.62 >>>>>>> >>>>>>> 1 KSP Residual norm 88.9068 >>>>>>> >>>>>>> 2 KSP Residual norm 64.73 >>>>>>> >>>>>>> 3 KSP Residual norm 71.0224 >>>>>>> >>>>>>> 4 KSP Residual norm 69.5044 >>>>>>> >>>>>>> 5 KSP Residual norm 455.458 >>>>>>> >>>>>>> 6 KSP Residual norm 174.876 >>>>>>> >>>>>>> 7 KSP Residual norm 183.031 >>>>>>> >>>>>>> 8 KSP Residual norm 650.675 >>>>>>> >>>>>>> 9 KSP Residual norm 79.2441 >>>>>>> >>>>>>> 10 KSP Residual norm 84.1985 >>>>>>> >>>>>>> >>>>>>> This clearly indicates non-convergence. However, I output the >>>>>>> sparse matrix A and vector b to MATLAB, and run the following command: >>>>>>> >>>>>>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>>>>>> >>>>>>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>>>>>> >>>>>>> >>>>>>> And it converges in MATLAB, with flag fl1=0, relative residue >>>>>>> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >>>>>>> what's wrong. >>>>>>> >>>>>>> >>>>>>> Best, >>>>>>> >>>>>>> Hui >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Wed Feb 18 13:04:12 2015 From: dave.mayhem23 at gmail.com (Dave May) Date: Wed, 18 Feb 2015 20:04:12 +0100 Subject: [petsc-users] Question concerning ilu and bcgs In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E89FC@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E891E@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8948@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E8976@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E898D@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89A2@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89DA@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E89FC@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: (Thanks to Matt for taking the words out of my mouth :D). If using LU on the splits isn't possible as the problem is too large, you will have to use an iterative method. However, in your initial tests you need to remove all the junk which causes early termination of the Krylov solves (e.g. -fieldsplit_0_ksp_max_it 10 -fieldsplit_1_ksp_max_it 10). I would also monitor the residual history of the outer method AND the splits. 
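Concretely, monitoring the outer solve and both splits, including the true (unpreconditioned) residuals, can be done entirely from the command line with options of this kind (a sketch; each split picks them up through its fieldsplit_*_ prefix):

-ksp_monitor_true_residual -ksp_converged_reason -fieldsplit_0_ksp_monitor_true_residual -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_monitor_true_residual -fieldsplit_1_ksp_converged_reason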
If your system is badly conditioned, I recommend examining the true residual in your experiments until you understand what FS is really doing. Cheers Dave On 18 February 2015 at 19:51, Sun, Hui wrote: > Thank you Dave. In fact I have tried fieldsplit several months ago, and > today I go back to the previous code and ran it again. How can I tell it is > doing what I want it to do? Here are the options: > > -pc_type fieldsplit -fieldsplit_0_pc_type jacobi -fieldsplit_1_pc_type > jacobi -pc_fieldsplit_type SC\ > > HUR -ksp_monitor_short -ksp_converged_reason -ksp_rtol 1e-4 > -fieldsplit_1_ksp_rtol 1e-2 -fieldsplit_0_ksp_rtol 1e-4 > -fieldsplit_1_ksp_max_it 10 -fieldsplit_0_ksp_max_it 10 -ksp_type fgmres > -ksp_max_it 10 -ksp_view > > And here is the output: > > Starting... > > 0 KSP Residual norm 17.314 > > 1 KSP Residual norm 10.8324 > > 2 KSP Residual norm 10.8312 > > 3 KSP Residual norm 10.7726 > > 4 KSP Residual norm 10.7642 > > 5 KSP Residual norm 10.7634 > > 6 KSP Residual norm 10.7399 > > 7 KSP Residual norm 10.7159 > > 8 KSP Residual norm 10.6602 > > 9 KSP Residual norm 10.5756 > > 10 KSP Residual norm 10.5224 > > Linear solve did not converge due to DIVERGED_ITS iterations 10 > > KSP Object: 1 MPI processes > > type: fgmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.0001, absolute=1e-50, divergence=10000 > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: fieldsplit > > FieldSplit with Schur preconditioner, factorization FULL > > Preconditioner for the Schur complement formed from A11 > > Split info: > > Split number 0 Defined by IS > > Split number 1 Defined by IS > > KSP solver for A00 block > > KSP Object: (fieldsplit_0_) 1 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.0001, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_) 1 MPI processes > > type: jacobi > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_) 1 MPI processes > > type: mpiaij > > rows=20000, cols=20000 > > total: nonzeros=85580, allocated nonzeros=760000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > KSP solver for S = A11 - A10 inv(A00) A01 > > KSP Object: (fieldsplit_1_) 1 MPI processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.01, absolute=1e-50, divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_1_) 1 MPI processes > > type: jacobi > > linear system matrix followed by preconditioner matrix: > > Mat Object: (fieldsplit_1_) 1 MPI processes > > type: schurcomplement > > rows=10000, cols=10000 > > Schur complement A11 - A10 inv(A00) A01 > > A11 > > Mat Object: (fieldsplit_1_) 1 MPI > processes > > type: mpiaij > > rows=10000, cols=10000 > > total: nonzeros=2110, allocated nonzeros=80000 > > 
total number of mallocs used during MatSetValues calls =0 > > using I-node (on process 0) routines: found 3739 nodes, > limit used is 5 > > A10 > > Mat Object: (a10_) 1 MPI processes > > type: mpiaij > > rows=10000, cols=20000 > > total: nonzeros=31560, allocated nonzeros=80000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > KSP of A00 > > KSP Object: (fieldsplit_0_) 1 MPI > processes > > type: gmres > > GMRES: restart=30, using Classical (unmodified) > Gram-Schmidt Orthogonalization with no iterative refinement > > GMRES: happy breakdown tolerance 1e-30 > > maximum iterations=10, initial guess is zero > > tolerances: relative=0.0001, absolute=1e-50, > divergence=10000 > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_) 1 MPI > processes > > type: jacobi > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_) > 1 MPI processes > > type: mpiaij > > rows=20000, cols=20000 > > total: nonzeros=85580, allocated nonzeros=760000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > A01 > > Mat Object: (a01_) 1 MPI processes > > type: mpiaij > > rows=20000, cols=10000 > > total: nonzeros=32732, allocated nonzeros=240000 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node (on process 0) routines > > Mat Object: (fieldsplit_1_) 1 MPI processes > > type: mpiaij > > rows=10000, cols=10000 > > total: nonzeros=2110, allocated nonzeros=80000 > > total number of mallocs used during MatSetValues calls =0 > > using I-node (on process 0) routines: found 3739 nodes, limit > used is 5 > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: nest > > rows=30000, cols=30000 > > Matrix object: > > type=nest, rows=2, cols=2 > > MatNest structure: > > (0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=20000, > cols=20000 > > (0,1) : prefix="a01_", type=mpiaij, rows=20000, cols=10000 > > (1,0) : prefix="a10_", type=mpiaij, rows=10000, cols=20000 > > (1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=10000, > cols=10000 > > residual u = 10.3528 > > residual p = 1.88199 > > residual [u,p] = 10.5224 > > L^2 discretization error u = 0.698386 > > L^2 discretization error p = 1.0418 > > L^2 discretization error [u,p] = 1.25423 > > number of processors = 1 0 > > Time cost for creating solver context 0.100217 s, and for solving 3.78879 > s, and for printing 0.0908558 s. > > > ------------------------------ > *From:* Dave May [dave.mayhem23 at gmail.com] > *Sent:* Wednesday, February 18, 2015 10:00 AM > *To:* Sun, Hui > *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov; hong at aspiritech.org > > *Subject:* Re: [petsc-users] Question concerning ilu and bcgs > > > Fieldsplit will not work if you just set pc_type fieldsplit and you have > an operator with a block size if 1. In this case, you will need to define > the splits using index sets. > > I cannot believe that defining all the v and p dofs is really hard. > Certainly it is far easier than trying to understand the difference between > the petsc, matlab and the hypre implementations of ilut. Even if you did > happen to find one implemtation of ilu you were "happy" with, as soon as > you refine the mesh a couple of times the iterations will increase. > > I second Matt's opinion - forget about ilu and focus time on trying > to make fieldsplit work. 
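A minimal sketch of what "define the splits using index sets" looks like in code, assuming the arrays iu/ip (with lengths nu/np) holding the locally owned global indices of the velocity and pressure unknowns already exist:

  IS is_u,is_p;
  PC pc;
  ierr = ISCreateGeneral(PETSC_COMM_WORLD,nu,iu,PETSC_COPY_VALUES,&is_u);CHKERRQ(ierr);  /* velocity dofs */
  ierr = ISCreateGeneral(PETSC_COMM_WORLD,np,ip,PETSC_COPY_VALUES,&is_p);CHKERRQ(ierr);  /* pressure dofs */
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc,"0",is_u);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc,"1",is_p);CHKERRQ(ierr);
  ierr = ISDestroy(&is_u);CHKERRQ(ierr);
  ierr = ISDestroy(&is_p);CHKERRQ(ierr);

The split names "0" and "1" are what produce the fieldsplit_0_/fieldsplit_1_ option prefixes seen in the -ksp_view output above.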
Fieldsplit will generate spectrally equivalent > operators of your flow problem, ilu won't > > Cheers > Dave > > > On Wednesday, 18 February 2015, Sun, Hui wrote: > >> I tried fieldsplitting several months ago, it didn't work due to the >> complicated coupled irregular bdry conditions. So I tried direct solver and >> now I modified the PDE system a little bit so that the ILU/bcgs works in >> MATLAB. But thank you for the suggestions, although I doubt it would work, >> maybe I will still try fieldsplitting with my new system. >> >> >> ------------------------------ >> *From:* Matthew Knepley [knepley at gmail.com ] >> *Sent:* Wednesday, February 18, 2015 8:54 AM >> *To:* Sun, Hui >> *Cc:* hong at aspiritech.org ; >> petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >> >> On Wed, Feb 18, 2015 at 10:47 AM, Sun, Hui > > wrote: >> >>> The matrix is from a 3D fluid problem, with complicated irregular >>> boundary conditions. I've tried using direct solvers such as UMFPACK, >>> SuperLU_dist and MUMPS. It seems that SuperLU_dist does not solve for my >>> linear system; UMFPACK solves the system but would run into memory issue >>> even with small size matrices and it cannot parallelize; MUMPS does solve >>> the system but it also fails when the size is big and it takes much time. >>> That's why I'm seeking an iterative method. >>> >>> I guess the direct method is faster than an iterative method for a >>> small A, but that may not be true for bigger A. >>> >> >> If this is a Stokes flow, you should use PCFIELDSPLIT and multigrid. If >> it is advection dominated, I know of nothing better >> than sparse direct or perhaps Block-Jacobi with sparse direct blocks. >> Since MUMPS solved your system, I would consider >> using BJacobi/ASM and MUMPS or UMFPACK as the block solver. >> >> Thanks, >> >> Matt >> >> >>> >>> ------------------------------ >>> *From:* Matthew Knepley [knepley at gmail.com >>> ] >>> *Sent:* Wednesday, February 18, 2015 8:33 AM >>> *To:* Sun, Hui >>> *Cc:* hong at aspiritech.org ; >>> petsc-users at mcs.anl.gov >>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>> >>> On Wed, Feb 18, 2015 at 10:31 AM, Sun, Hui >> > wrote: >>> >>>> So far I just try around, I haven't looked into literature yet. >>>> >>>> However, both MATLAB's ilu+gmres and ilu+bcgs work. Is it possible >>>> that some parameter or options need to be tuned in using PETSc's ilu or >>>> hypre's ilu? Besides, is there a way to view how good the performance of >>>> the pc is and output the matrices L and U, so that I can do some test in >>>> MATLAB? >>>> >>> >>> 1) Its not clear exactly what Matlab is doing >>> >>> 2) PETSc uses ILU(0) by default (you can set it to use ILU(k)) >>> >>> 3) I don't know what Hypre's ILU can do >>> >>> I would really discourage from using ILU. I cannot imagine it is >>> faster than sparse direct factorization >>> for your system, such as from SuperLU or MUMPS. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Hui >>>> >>>> >>>> ------------------------------ >>>> *From:* Matthew Knepley [knepley at gmail.com >>>> ] >>>> *Sent:* Wednesday, February 18, 2015 8:09 AM >>>> *To:* Sun, Hui >>>> *Cc:* hong at aspiritech.org ; >>>> petsc-users at mcs.anl.gov >>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>> >>>> On Wed, Feb 18, 2015 at 10:02 AM, Sun, Hui >>> > wrote: >>>> >>>>> Yes I've tried other solvers, gmres/ilu does not work, neither does >>>>> bcgs/ilu. 
Here are the options: >>>>> >>>>> -pc_type ilu -pc_factor_nonzeros_along_diagonal -pc_factor_levels 0 >>>>> -pc_factor_reuse_ordering -ksp_ty\ >>>>> >>>>> pe bcgs -ksp_rtol 1e-6 -ksp_max_it 10 -ksp_monitor_short -ksp_view >>>>> >>>> >>>> Note here that ILU(0) is an unreliable and generally crappy >>>> preconditioner. Have you looked in the >>>> literature for the kinds of preconditioners that are effective for your >>>> problem? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Here is the output: >>>>> >>>>> 0 KSP Residual norm 211292 >>>>> >>>>> 1 KSP Residual norm 13990.2 >>>>> >>>>> 2 KSP Residual norm 9870.08 >>>>> >>>>> 3 KSP Residual norm 9173.9 >>>>> >>>>> 4 KSP Residual norm 9121.94 >>>>> >>>>> 5 KSP Residual norm 7386.1 >>>>> >>>>> 6 KSP Residual norm 6222.55 >>>>> >>>>> 7 KSP Residual norm 7192.94 >>>>> >>>>> 8 KSP Residual norm 33964 >>>>> >>>>> 9 KSP Residual norm 33960.4 >>>>> >>>>> 10 KSP Residual norm 1068.54 >>>>> >>>>> KSP Object: 1 MPI processes >>>>> >>>>> type: bcgs >>>>> >>>>> maximum iterations=10, initial guess is zero >>>>> >>>>> tolerances: relative=1e-06, absolute=1e-50, divergence=10000 >>>>> >>>>> left preconditioning >>>>> >>>>> using PRECONDITIONED norm type for convergence test >>>>> >>>>> PC Object: 1 MPI processes >>>>> >>>>> type: ilu >>>>> >>>>> ILU: out-of-place factorization >>>>> >>>>> ILU: Reusing reordering from past factorization >>>>> >>>>> 0 levels of fill >>>>> >>>>> tolerance for zero pivot 2.22045e-14 >>>>> >>>>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >>>>> >>>>> matrix ordering: natural >>>>> >>>>> factor fill ratio given 1, needed 1 >>>>> >>>>> Factored matrix follows: >>>>> >>>>> Mat Object: 1 MPI processes >>>>> >>>>> type: seqaij >>>>> >>>>> rows=62500, cols=62500 >>>>> >>>>> package used to perform factorization: petsc >>>>> >>>>> total: nonzeros=473355, allocated nonzeros=473355 >>>>> >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> >>>>> not using I-node routines >>>>> >>>>> linear system matrix = precond matrix: >>>>> >>>>> Mat Object: 1 MPI processes >>>>> >>>>> type: seqaij >>>>> >>>>> rows=62500, cols=62500 >>>>> >>>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>>> >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> >>>>> not using I-node routines >>>>> >>>>> Time cost: 0.307149, 0.268402, 0.0990018 >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------ >>>>> *From:* hong at aspiritech.org [ >>>>> hong at aspiritech.org ] >>>>> *Sent:* Wednesday, February 18, 2015 7:49 AM >>>>> *To:* Sun, Hui >>>>> *Cc:* Matthew Knepley; petsc-users at mcs.anl.gov >>>>> >>>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>>> >>>>> Have you tried other solvers, e.g., PETSc default gmres/ilu, >>>>> bcgs/ilu etc. >>>>> The matrix is small. If it is ill-conditioned, then pc_type lu would >>>>> work the best. 
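(As a quick sanity check, a full direct solve, e.g. something like

  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps

assuming your PETSc was configured with MUMPS, would at least tell you whether the assembled system itself can be solved to your tolerance, independent of any preconditioner tuning.)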
>>>>> >>>>> Hong >>>>> >>>>> On Wed, Feb 18, 2015 at 9:34 AM, Sun, Hui >>>> > wrote: >>>>> >>>>>> With options: >>>>>> >>>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type bcgs -ksp_rtol 1e-10 -ksp_max_it >>>>>> 10 -ksp_monitor_short -ksp_converged_reason -ksp_view >>>>>> >>>>>> Here is the full output: >>>>>> >>>>>> 0 KSP Residual norm 1404.62 >>>>>> >>>>>> 1 KSP Residual norm 88.9068 >>>>>> >>>>>> 2 KSP Residual norm 64.73 >>>>>> >>>>>> 3 KSP Residual norm 71.0224 >>>>>> >>>>>> 4 KSP Residual norm 69.5044 >>>>>> >>>>>> 5 KSP Residual norm 455.458 >>>>>> >>>>>> 6 KSP Residual norm 174.876 >>>>>> >>>>>> 7 KSP Residual norm 183.031 >>>>>> >>>>>> 8 KSP Residual norm 650.675 >>>>>> >>>>>> 9 KSP Residual norm 79.2441 >>>>>> >>>>>> 10 KSP Residual norm 84.1985 >>>>>> >>>>>> Linear solve did not converge due to DIVERGED_ITS iterations 10 >>>>>> >>>>>> KSP Object: 1 MPI processes >>>>>> >>>>>> type: bcgs >>>>>> >>>>>> maximum iterations=10, initial guess is zero >>>>>> >>>>>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>>>>> >>>>>> left preconditioning >>>>>> >>>>>> using PRECONDITIONED norm type for convergence test >>>>>> >>>>>> PC Object: 1 MPI processes >>>>>> >>>>>> type: hypre >>>>>> >>>>>> HYPRE Pilut preconditioning >>>>>> >>>>>> HYPRE Pilut: maximum number of iterations 1000 >>>>>> >>>>>> HYPRE Pilut: drop tolerance 0.001 >>>>>> >>>>>> HYPRE Pilut: default factor row size >>>>>> >>>>>> linear system matrix = precond matrix: >>>>>> >>>>>> Mat Object: 1 MPI processes >>>>>> >>>>>> type: seqaij >>>>>> >>>>>> rows=62500, cols=62500 >>>>>> >>>>>> total: nonzeros=473355, allocated nonzeros=7.8125e+06 >>>>>> >>>>>> total number of mallocs used during MatSetValues calls =0 >>>>>> >>>>>> not using I-node routines >>>>>> >>>>>> Time cost: 0.756198, 0.662984, 0.105672 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------ >>>>>> *From:* Matthew Knepley [knepley at gmail.com >>>>>> ] >>>>>> *Sent:* Wednesday, February 18, 2015 3:30 AM >>>>>> *To:* Sun, Hui >>>>>> *Cc:* petsc-users at mcs.anl.gov >>>>>> *Subject:* Re: [petsc-users] Question concerning ilu and bcgs >>>>>> >>>>>> On Wed, Feb 18, 2015 at 12:33 AM, Sun, Hui >>>>> > wrote: >>>>>> >>>>>>> I have a matrix system Ax = b, A is of type MatSeqAIJ or >>>>>>> MatMPIAIJ, depending on the number of cores. >>>>>>> >>>>>>> I try to solve this problem by pc_type ilu and ksp_type bcgs, it >>>>>>> does not converge. The options I specify are: >>>>>>> >>>>>>> -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_maxiter 1000 >>>>>>> -pc_hypre_pilut_tol 1e-3 -ksp_type b\ >>>>>>> >>>>>>> cgs -ksp_rtol 1e-10 -ksp_max_it 1000 -ksp_monitor_short >>>>>>> -ksp_converged_reason >>>>>>> >>>>>> >>>>>> 1) Run with -ksp_view, so we can see exactly what was used >>>>>> >>>>>> 2) ILUT is unfortunately not a well-defined algorithm, and I >>>>>> believe the parallel version makes different decisions >>>>>> than the serial version. 
>>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> The first a few lines of the output are: >>>>>>> >>>>>>> 0 KSP Residual norm 1404.62 >>>>>>> >>>>>>> 1 KSP Residual norm 88.9068 >>>>>>> >>>>>>> 2 KSP Residual norm 64.73 >>>>>>> >>>>>>> 3 KSP Residual norm 71.0224 >>>>>>> >>>>>>> 4 KSP Residual norm 69.5044 >>>>>>> >>>>>>> 5 KSP Residual norm 455.458 >>>>>>> >>>>>>> 6 KSP Residual norm 174.876 >>>>>>> >>>>>>> 7 KSP Residual norm 183.031 >>>>>>> >>>>>>> 8 KSP Residual norm 650.675 >>>>>>> >>>>>>> 9 KSP Residual norm 79.2441 >>>>>>> >>>>>>> 10 KSP Residual norm 84.1985 >>>>>>> >>>>>>> >>>>>>> This clearly indicates non-convergence. However, I output the >>>>>>> sparse matrix A and vector b to MATLAB, and run the following command: >>>>>>> >>>>>>> [L,U] = ilu(A,struct('type','ilutp','droptol',1e-3)); >>>>>>> >>>>>>> [ux1,fl1,rr1,it1,rv1] = bicgstab(A,b,1e-10,1000,L,U); >>>>>>> >>>>>>> >>>>>>> And it converges in MATLAB, with flag fl1=0, relative residue >>>>>>> rr1=8.2725e-11, and iteration it1=89.5. I'm wondering how can I figure out >>>>>>> what's wrong. >>>>>>> >>>>>>> >>>>>>> Best, >>>>>>> >>>>>>> Hui >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 18 18:28:40 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 18 Feb 2015 18:28:40 -0600 Subject: [petsc-users] PCMG and DMMG In-Reply-To: References: <1215E602-63B5-4856-8261-ED3B7A0C0D45@mcs.anl.gov> Message-ID: > On Feb 18, 2015, at 2:38 PM, DU Yongle wrote: > > Dear Smith: > > Sorry to bother. A few questions about PCMG, because I am confused while reading the manual and source code. > > Suppose I have 29 points along a one-dimensional domain, which can be reduced twice, and gives 3 grid levels. > > (1) when I set PCMG, should the levels be 2 or 3? or in other words, the PCMG includes the finest level on which we are solving the equations or not? Yes, that counts as a level > > (2) What's the difference between PCMGGetsmoother and PCMGGetCoaseSolve? One the mid level #1, should I use the former or the latter? PCMGGetsmoother() gives back the solver used as the smoother for ANY level. PCMGGetCoaseSolve() is just syntactic sugar for the coarsest level > > > (3) What are PCMGGetsmootherup and PCMGGetSmootherDown supposed to do? By default the pre and post smooth (what I call the down and up smoother) are the same. If you want them to be different you can call the above routines and set different solver options on each (plus supply the operator). 
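A rough sketch for level 1, assuming pc is your PCMG preconditioner with the levels already set; the matrix A1 and the particular KSP types are only for illustration:

  KSP kspdown,kspup;
  ierr = PCMGGetSmootherDown(pc,1,&kspdown);CHKERRQ(ierr);
  ierr = PCMGGetSmootherUp(pc,1,&kspup);CHKERRQ(ierr);
  ierr = KSPSetOperators(kspdown,A1,A1);CHKERRQ(ierr);
  ierr = KSPSetOperators(kspup,A1,A1);CHKERRQ(ierr);
  ierr = KSPSetType(kspdown,KSPRICHARDSON);CHKERRQ(ierr);   /* pre-smoother  */
  ierr = KSPSetType(kspup,KSPCHEBYSHEV);CHKERRQ(ierr);      /* post-smoother */

Anything you can configure on a KSP/PC can be set separately on either of these two objects.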
Normally it is fine for them to be the same unless you know something about your problem that would make you want them to be different. > > (4) It appears simpler to solve the equation Ax=b using PCMG. However, if I have several consecutive equations in the similar form, for example multi-level implicit RK method, where we have to solve A1 x1 = b1, A2 x2 = b2 ... successively in a step,, and An are defined as matrix-free, what's the best approach to implement? I would make a different KSP for each system, for example ksp1, ksp2, ksp3. Multigrid (and PCMG) can be used with matrix-free operators but you are limited with what smoothers you can use (essentially Chebychev or one of the other Krylov methods with a PCType of none). You need to use MatCreateShell() to create the matrix (one matrix for each level) and provide a matrix-vector routine to do the matrix-vector product with your matrix-free operator). Otherwise you put together the PCMG the same way whether it is matrix free or not. In fact if you can provide a matrix-free matrix-vector product for the interpolation and restriction then you can also make those operators with MatCreateShell(). Barry > > > Thanks a lot for your help. > > > > On Mon, Feb 9, 2015 at 12:09 PM, Barry Smith wrote: > > Looks like you are looking at a very old PETSc. You should be using version 3.5.3 and nothing earlier. DMMG has been gone from PETSc for a long time. > > Here is the easiest way to provide the information: Loop over the levels yourself and provide the matrices, function pointers etc. See for example src/ksp/ksp/examples/tests/ex19.c This example only sets up MG for two levels but you can see the pattern from the code. It creates the matrix operator for each level and vectors, and sets it for the level, > > ierr = FormJacobian_Grid(&user,&user.coarse,&user.coarse.J);CHKERRQ(ierr); > ierr = FormJacobian_Grid(&user,&user.fine,&user.fine.J);CHKERRQ(ierr); > > /* Create coarse level */ > ierr = PCMGGetCoarseSolve(pc,&user.ksp_coarse);CHKERRQ(ierr); > ierr = KSPSetOptionsPrefix(user.ksp_coarse,"coarse_");CHKERRQ(ierr); > ierr = KSPSetFromOptions(user.ksp_coarse);CHKERRQ(ierr); > ierr = KSPSetOperators(user.ksp_coarse,user.coarse.J,user.coarse.J);CHKERRQ(ierr); > ierr = PCMGSetX(pc,COARSE_LEVEL,user.coarse.x);CHKERRQ(ierr); > ierr = PCMGSetRhs(pc,COARSE_LEVEL,user.coarse.b);CHKERRQ(ierr); > > /* Create fine level */ > ierr = PCMGGetSmoother(pc,FINE_LEVEL,&ksp_fine);CHKERRQ(ierr); > ierr = KSPSetOptionsPrefix(ksp_fine,"fine_");CHKERRQ(ierr); > ierr = KSPSetFromOptions(ksp_fine);CHKERRQ(ierr); > ierr = KSPSetOperators(ksp_fine,user.fine.J,user.fine.J);CHKERRQ(ierr); > ierr = PCMGSetR(pc,FINE_LEVEL,user.fine.r);CHKERRQ(ierr); > > and it creates the interpolation and sets it > /* Create interpolation between the levels */ > ierr = DMCreateInterpolation(user.coarse.da,user.fine.da,&user.Ii,NULL);CHKERRQ(ierr); > ierr = PCMGSetInterpolation(pc,FINE_LEVEL,user.Ii);CHKERRQ(ierr); > ierr = PCMGSetRestriction(pc,FINE_LEVEL,user.Ii);CHKERRQ(ierr); > > Note that PETSc by default uses the transpose of the interpolation for the restriction so even though it looks strange to set the same operator for both PETSc automatically uses the transpose when needed. > > Barry > > > > On Feb 9, 2015, at 10:09 AM, DU Yongle wrote: > > > > Good morning, everyone: > > > > I have an existing general CFD solver with multigrid implemented. All functions (initialization, restriction, prolong/interpolation, coarse/fine grids solver......) are working correctly. 
Now I am trying to rewrite it with PETSc. I found that the manual provides very little information about this and is difficult to follow. I found another lecture notes by Barry Smith on web, which is: > > http://www.mcs.anl.gov/petsc/documentation/tutorials/Columbia04/DDandMultigrid.pdf > > > > However, it is still not clear the difference and connection between PCMG and DMMG. Some questions are: > > > > 1. Should DMMG be used to initialize the coarse grids (and boundary conditions) before PCMG could be used? If not, how does PCMG know all information on coarse grids? > > > > 2. Due to the customized boundary conditions, indices of the grids, boundary conditions, grid dimensions on each coarse grid levels are required for particular computations. How to extract these information in either DMMG or PCMG? I have not found a function for this purpose. I have set up all information myself, should I pass these information to the coarse grid levels to the coarse levels? How and again how to extract these information? > > > > 3. I have the restriction, interpolation, coarse grid solver ... implemented. How could these be integrated with PETSc functions? It appears that some functions like PCMGGetSmoother/UP/Down, PCMGSetInterpolation .... should be used, but how? The online manual simply repeat the name, and provides no other information. > > > > Thanks a lot. > > > > > > > > > > From bsmith at mcs.anl.gov Wed Feb 18 21:48:18 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 18 Feb 2015 21:48:18 -0600 Subject: [petsc-users] GAMG having very large coarse problem? Message-ID: Mark, When I run ksp/ksp/examples/tutorials/ex45 I get a VERY large coarse problem. It seems to ignore the -pc_gamg_coarse_eq_limit 200 argument. Any idea what is going on? Thanks Barry $ ./ex45 -da_refine 3 -pc_type gamg -ksp_monitor -ksp_view -log_summary -pc_gamg_coarse_eq_limit 200 0 KSP Residual norm 2.790769524030e+02 1 KSP Residual norm 4.484052193577e+01 2 KSP Residual norm 2.409368790441e+00 3 KSP Residual norm 1.553421589919e-01 4 KSP Residual norm 9.821441923699e-03 5 KSP Residual norm 5.610434857134e-04 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization 
tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5, needed 36.4391 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=16587, cols=16587 package used to perform factorization: petsc total: nonzeros=1.8231e+07, allocated nonzeros=1.8231e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=16587, cols=16587 total: nonzeros=500315, allocated nonzeros=500315 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=16587, cols=16587 total: nonzeros=500315, allocated nonzeros=500315 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0976343, max = 2.05032 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=117649, cols=117649 total: nonzeros=809137, allocated nonzeros=809137 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=117649, cols=117649 total: nonzeros=809137, allocated nonzeros=809137 total number of mallocs used during MatSetValues calls =0 not using I-node routines Residual norm 3.81135e-05 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex45 on a arch-opt named Barrys-MacBook-Pro.local with 1 processor, by barrysmith Wed Feb 18 21:38:03 2015 Using Petsc Development GIT revision: v3.5.3-1998-geddef31 GIT Date: 2015-02-18 11:05:09 -0600 Max Max/Min Avg Total Time (sec): 1.103e+01 1.00000 1.103e+01 Objects: 9.200e+01 1.00000 9.200e+01 Flops: 1.756e+10 1.00000 1.756e+10 1.756e+10 Flops/sec: 1.592e+09 1.00000 1.592e+09 1.592e+09 MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Reductions: 0.000e+00 0.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.1030e+01 100.0% 1.7556e+10 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage KSPGMRESOrthog 21 1.0 8.8868e-03 1.0 3.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3752 KSPSetUp 5 1.0 4.3986e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 1.0995e+01 1.0 1.76e+10 1.0 0.0e+00 0.0e+00 0.0e+00100100 0 0 0 100100 0 0 0 1596 VecMDot 21 1.0 4.7335e-03 1.0 1.67e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3522 VecNorm 30 1.0 9.4804e-04 1.0 4.63e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4887 VecScale 29 1.0 7.8293e-04 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2809 VecCopy 14 1.0 7.7058e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 102 1.0 1.4530e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 9 1.0 3.8154e-04 1.0 9.05e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2372 VecAYPX 48 1.0 5.6449e-03 1.0 7.06e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1251 VecAXPBYCZ 24 1.0 4.0700e-03 1.0 1.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3469 VecMAXPY 29 1.0 5.1512e-03 1.0 2.04e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3960 VecAssemblyBegin 1 1.0 6.7055e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 1 1.0 8.1025e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 11 1.0 1.8083e-03 1.0 1.29e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 716 VecSetRandom 1 1.0 1.7628e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 29 1.0 1.7100e-03 1.0 6.60e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3858 MatMult 58 1.0 5.0949e-02 1.0 8.39e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1647 MatMultAdd 6 1.0 5.2584e-03 1.0 5.01e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 952 MatMultTranspose 6 1.0 6.1330e-03 1.0 5.01e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 816 MatSolve 12 1.0 2.0657e-01 1.0 4.37e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 2117 MatSOR 36 1.0 7.1355e-02 1.0 5.84e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 818 MatLUFactorSym 1 1.0 3.4310e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 MatLUFactorNum 1 1.0 9.8038e+00 1.0 1.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 89 96 0 0 0 89 96 0 0 0 1721 MatConvert 1 1.0 5.6955e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatScale 3 1.0 2.7223e-03 1.0 2.45e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 901 MatResidual 6 1.0 6.2142e-03 1.0 9.71e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1562 MatAssemblyBegin 12 1.0 2.7413e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 12 1.0 2.4857e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRow 470596 1.0 2.4337e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 1 1.0 2.3254e-03 1.0 
0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1 1.0 1.7668e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCoarsen 1 1.0 8.5790e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 5 1.0 2.2273e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAXPY 1 1.0 1.8864e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMult 1 1.0 2.4513e-02 1.0 2.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 83 MatMatMultSym 1 1.0 1.7885e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMultNum 1 1.0 6.6144e-03 1.0 2.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 307 MatPtAP 1 1.0 1.1460e-01 1.0 1.30e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 114 MatPtAPSymbolic 1 1.0 4.6803e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatPtAPNumeric 1 1.0 6.7781e-02 1.0 1.30e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 192 MatTrnMatMult 1 1.0 9.1702e-02 1.0 1.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 111 MatTrnMatMultSym 1 1.0 6.0173e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatTrnMatMultNum 1 1.0 3.1526e-02 1.0 1.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 324 MatGetSymTrans 2 1.0 4.2753e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCGAMGgraph_AGG 1 1.0 6.9175e-02 1.0 1.62e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 23 PCGAMGcoarse_AGG 1 1.0 1.1130e-01 1.0 1.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 92 PCGAMGProl_AGG 1 1.0 2.9380e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCGAMGPOpt_AGG 1 1.0 9.1377e-02 1.0 5.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 564 PCSetUp 2 1.0 1.0587e+01 1.0 1.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 96 97 0 0 0 96 97 0 0 0 1601 PCSetUpOnBlocks 6 1.0 1.0165e+01 1.0 1.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 92 96 0 0 0 92 96 0 0 0 1660 PCApply 6 1.0 1.0503e+01 1.0 1.75e+10 1.0 0.0e+00 0.0e+00 0.0e+00 95 99 0 0 0 95 99 0 0 0 1662 ------------------------------------------------------------------------------------------------------------------------ From karpeev at mcs.anl.gov Thu Feb 19 09:33:46 2015 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Thu, 19 Feb 2015 08:33:46 -0700 Subject: [petsc-users] SNESSetFunctionDomainError In-Reply-To: <87iol8asht.fsf@jedbrown.org> References: <87oav0at7l.fsf@jedbrown.org> <87iol8asht.fsf@jedbrown.org> Message-ID: I wanted to revive this thread and move it to petsc-dev. This problem seems to be harder than I realized. Suppose MatMult inside KSPSolve() inside SNESSolve() cannot compute a valid output vector. For example, it's a MatMFFD and as part of its function evaluation it has to evaluate an implicitly-defined constitutive model (e.g., solve an equation of state) and this inner solve diverges (e.g., the time step is too big). I want to be able to abort the linear solve and the nonlinear solve, return a suitable "converged" reason and let the user retry, maybe with a different timestep size. This is for a hand-rolled time stepper, but TS would face similar issues. Based on the previous thread here http://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022597.html I tried marking the result of MatMult as "invalid" and let it propagate up to KSPSolve() where it can be handled. This quickly gets out of control, since the invalid Vec isn't returned to the KSP immediately. 
It could be a work vector, which is fed into PCApply() along various code paths, depending on the side of the preconditioner, whether it's a transpose solve, etc. Each of these transformations (e.g., PCApply()) would then have to check the validity of the input argument, clear its error condition and set it on the output argument, etc. Very error-prone and fragile. Not to mention the large amount of code to sift through. This is a general problem of exception handling -- we want to "unwind" the stack to the point where the problem should be handled, but there doesn't seem to a good way to do it. We also want to be able to clear all of the error conditions on the way up (e.g., mark vectors as valid again, but not too early), otherwise we leave the solver in an invalid state. Instead of passing an exception condition up the stack I could try storing that condition in one of the more globally-visible objects (e.g., the Mat), but if the error occurs inside the evaluation of the residual that's being differenced, it doesn't really have access to the Mat. This probably raises various thread safety issues as well. Using SNESSetFunctionDomainError() doesn't seem to be a solution: a MatMFFD created with MatCreateSNESMF() has a pointer to SNES, but the function evaluation code actually has no clue about that. More generally, I don't know whether we want to wait for the linear solve to complete before handling this exception: it is unnecessary, it might be an expensive linear solve and the result of such a KSPSolve() is probably undefined and might blow up in unexpected ways. I suppose if there is a way to get a hold of SNES, each subsequent MatMult_MFFD has to check whether the domain error is set and return early in that case? We would still have to wait for the linear solve to grind through the rest of its iterations. I don't know, however, if there is a good way to guarantee that linear solver will get through this quickly and without unintended consequences. Should MatMFFD also get a hold of the KSP and set a flag there to abort? I still don't know what the intervening code (e.g., the various PCApply()) will do before the KSP has a chance to deal with this. I'm now thinking that setting some vector entries to NaN might be a good solution: I hope this NaN will propagate all the way up through the subsequent arithmetic operations (does the IEEE floating-point arithmetic guarantees?), this "error condition" gets automatically cleared the next time the vector is recomputed, since its values are reset. Finally, I want this exception to be detected globally but without incurring an extra reduction every time the residual is evaluated, and NaN will be show up in the norm that (most) KSPs would compute anyway. That way KSP could diverge with a KSP_DIVERGED_NAN or a similar reason and the user would have an option to retry. The problem with this approach is that VecValidEntries() in MatMult() and PCApply() will throw an error before this can work, so I'm trying to think about good ways of turning it off. Any ideas about how to do this? Incidentally, I realized that I don't understand how SNESFunctionDomainError can be handled gracefully in the current set up: it's not set or checked collectively, so there isn't a good way to abort and retry across the whole comm, is there? Dmitry. 
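For concreteness, the NaN-poisoning variant described above might look roughly like the sketch below. SolveEquationOfState() is a stand-in for the user's inner solver, and this presumes the NaN checks in MatMult()/PCApply() do not abort first, which is exactly the open question.

  #include <petscsnes.h>
  #include <math.h>                      /* for NAN */

  extern PetscErrorCode SolveEquationOfState(Vec,Vec,PetscBool*,void*);  /* hypothetical inner solve */

  /* Residual evaluation as differenced by MatMFFD.  On failure of the inner
     solve, poison the output vector so the next norm the outer KSP computes
     is NaN, and record the domain error on the SNES for the retry logic. */
  PetscErrorCode FormFunction(SNES snes,Vec x,Vec f,void *ctx)
  {
    PetscErrorCode ierr;
    PetscBool      failed;

    PetscFunctionBeginUser;
    ierr = SolveEquationOfState(x,f,&failed,ctx);CHKERRQ(ierr);
    if (failed) {
      ierr = VecSet(f,(PetscScalar)NAN);CHKERRQ(ierr);
      ierr = SNESSetFunctionDomainError(snes);CHKERRQ(ierr);
    }
    PetscFunctionReturn(0);
  }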
On Sun Aug 31 2014 at 10:12:53 PM Jed Brown wrote: > Dmitry Karpeyev writes: > > > Handling this at the KSP level (I actually think the Mat level is more > > appropriate, since the operator, not the solver, knows its domain), > > We are dynamically discovering the domain, but I don't think it's > appropriate for Mat to refuse to evaluate any more matrix actions until > some action is taken at the MatMFFD/SNESMF level. Marking the Vec > invalid is fine, but some action needs to be taken and if Derek wants > the SNES to skip further evaluations, we need to propagate the > information up the stack somehow. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Feb 19 11:45:15 2015 From: hzhang at mcs.anl.gov (Hong) Date: Thu, 19 Feb 2015 11:45:15 -0600 Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions In-Reply-To: <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov> References: <1424186090.3298.2.camel@gmail.com> <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov> Message-ID: Fabian, Too much time was spent on the matrix operations during setup phase, which has plenty room for optimization. Can you provide us a stand-alone code used in your experiment so we can investigate how to make our gamg more efficient? Hong On Wed, Feb 18, 2015 at 12:20 PM, Barry Smith wrote: > > Fabian, > > CG requires that the preconditioner be symmetric positive definite. ICC > even if given a symmetric positive definite matrix can generate an > indefinite preconditioner. > > Similarly if an algebraic multigrid application is not "strong enough" > it can also result in a preconditioner that is indefinite. > > You never want to use ICC for pressure type problems it cannot compete > with multigrid for large problems so let's forget about ICC and focus on > the GAMG. > > > -pressure_mg_coarse_sub_pc_type svd > > -pressure_mg_levels_ksp_rtol 1e-4 > > -pressure_mg_levels_ksp_type richardson > > -pressure_mg_levels_pc_type sor > > -pressure_pc_gamg_agg_nsmooths 1 > > -pressure_pc_type gamg > > There are many many tuning parameters for MG. > > First, is your pressure problem changing dramatically at each new > solver? That is, for example, is the mesh moving or are there very > different numerical values in the matrix? Is the nonzero structure of the > pressure matrix changing? Currently the entire GAMG process is done for > each new solve, if you use the flag > > -pressure_pc_gamg_reuse_interpolation true > > it will create the interpolation needed for GAMG once and reuse it for all > the solves. Please try that and see what happens. > > Then I will have many more suggestions. > > > Barry > > > > > On Feb 17, 2015, at 9:14 AM, Fabian Gabel > wrote: > > > > Dear PETSc team, > > > > I am trying to optimize the solver parameters for the linear system I > > get, when I discretize the pressure correction equation Poisson equation > > with Neumann boundary conditions) in a SIMPLE-type algorithm using a > > finite volume method. > > > > The resulting system is symmetric and positive semi-definite. A basis to > > the associated nullspace has been provided to the KSP object. > > > > Using a CG solver with ICC preconditioning the solver needs a lot of > > inner iterations to converge (-ksp_monitor -ksp_view output attached for > > a case with approx. 2e6 unknowns; the lines beginning with 000XXXX show > > the relative residual regarding the initial residual in the outer > > iteration no. 1 for the variables u,v,w,p). 
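(For reference, "a basis to the associated nullspace has been provided to the KSP object" above means something like the following sketch, assuming the constant vector spans the null space of the singular pressure matrix and using the 3.5-era API; pressure_ksp stands for the solver carrying the pressure_ prefix:

  MatNullSpace nsp;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nsp);CHKERRQ(ierr);  /* constant null space */
  ierr = KSPSetNullSpace(pressure_ksp,nsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);

This is consistent with the "has attached null space" line in the -ksp_view output further down.)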
Furthermore I don't quite > > understand, why the solver reports > > > > Linear solve did not converge due to DIVERGED_INDEFINITE_PC > > > > at the later stages of my Picard iteration process (iteration 0001519). > > > > I then tried out CG+GAMG preconditioning with success regarding the > > number of inner iterations, but without advantages regarding wall time > > (output attached). Also the DIVERGED_INDEFINITE_PC reason shows up > > repeatedly after iteration 0001487. I used the following options > > > > -pressure_mg_coarse_sub_pc_type svd > > -pressure_mg_levels_ksp_rtol 1e-4 > > -pressure_mg_levels_ksp_type richardson > > -pressure_mg_levels_pc_type sor > > -pressure_pc_gamg_agg_nsmooths 1 > > -pressure_pc_type gamg > > > > I would like to get an opinion on how the solver performance could be > > increased further. -log_summary shows that my code spends 80% of the > > time solving the linear systems for the pressure correction (STAGE 2: > > PRESSCORR). Furthermore, do you know what could be causing the > > DIVERGED_INDEFINITE_PC converged reason? > > > > Regards, > > Fabian Gabel > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amesga1 at tigers.lsu.edu Thu Feb 19 13:07:09 2015 From: amesga1 at tigers.lsu.edu (Ataollah Mesgarnejad) Date: Thu, 19 Feb 2015 13:07:09 -0600 Subject: [petsc-users] Parallel HDF5 write. Message-ID: Dear all, When I try to write a distributed DMPlex vector using HDF5 I get the following error: libpetsc.so.3.5: undefined symbol: H5Pset_fapl_mpio As far as I understand this is an error due to HDF5 if it is not compiled in parallel! I tried both: PETSc compiled with its own HDF5, and also with a HDF5 compiled on system (stampede). I'm attaching PETSc configure.log as well as the config.log from external packages HDF5 here (which as far as I can tell was configured and compiled with --enable-parallel flag). Many thanks in advance, Ata -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: config.log Type: application/octet-stream Size: 247191 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 4664120 bytes Desc: not available URL: From bsmith at mcs.anl.gov Thu Feb 19 13:25:17 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 19 Feb 2015 13:25:17 -0600 Subject: [petsc-users] Parallel HDF5 write. In-Reply-To: References: Message-ID: <21D7844D-CE63-4D96-B8F8-0DC88B5A4D3D@mcs.anl.gov> Check for that symbol in all the libraries -L/work/01624/amesga/Software/petsc/sandybridge-cxx-dbg/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 with nm -o (and grep for the symbol) Check the dependencies of the PETSc library with ldd /work/01624/amesga/Software/petsc/sandybridge-cxx-dbg/lib/libpetsc.so Send all the output Barry > On Feb 19, 2015, at 1:07 PM, Ataollah Mesgarnejad wrote: > > Dear all, > > When I try to write a distributed DMPlex vector using HDF5 I get the following error: > > libpetsc.so.3.5: undefined symbol: H5Pset_fapl_mpio > > As far as I understand this is an error due to HDF5 if it is not compiled in parallel! > > I tried both: PETSc compiled with its own HDF5, and also with a HDF5 compiled on system (stampede). I'm attaching PETSc configure.log as well as the config.log from external packages HDF5 here (which as far as I can tell was configured and compiled with --enable-parallel flag). 
> > > Many thanks in advance, > Ata > From ronalcelayavzla at gmail.com Thu Feb 19 13:29:37 2015 From: ronalcelayavzla at gmail.com (Ronal Celaya) Date: Thu, 19 Feb 2015 14:59:37 -0430 Subject: [petsc-users] PETSc publications Message-ID: Are there publications and/or documentation that could help me gain an understanding of the algorithms and architecture of: 1. PETSc's sparse matrix-vector multiplication 2. PETSc's CG algorithm I need to gain a deep and thorough understanding of these, but would prefer not to start with studying the code first. Any recommendations as to how to best approach my study I'd appreciate. I know how to use PETSc, and have a working knowledge of numerical linear algebra parallel algorithms. Thanks in advance! -- Ronal Celaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Feb 19 13:40:03 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Feb 2015 13:40:03 -0600 Subject: [petsc-users] PETSc publications In-Reply-To: References: Message-ID: On Thu, Feb 19, 2015 at 1:29 PM, Ronal Celaya wrote: > Are there publications and/or documentation that could help me gain an > understanding of the algorithms and architecture of: > > 1. PETSc's sparse matrix-vector multiplication > There is nice stuff in: http://www.mcs.anl.gov/~kaushik/Papers/pcfd99_gkks.pdf and several discussions in the slides on the Tutorials page. > 2. PETSc's CG algorithm > > I need to gain a deep and thorough understanding of these, but would > prefer not to start with studying the code first. Any recommendations as to > how to best approach my study I'd appreciate. I know how to use PETSc, and > have a working knowledge of numerical linear algebra parallel algorithms. > There is nothing special about -pc_type cg. It follows Saad's book ( http://www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf) or http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf Thanks, Matt > Thanks in advance! > > -- > Ronal Celaya > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabel.fabian at gmail.com Fri Feb 20 03:35:09 2015 From: gabel.fabian at gmail.com (Fabian Gabel) Date: Fri, 20 Feb 2015 10:35:09 +0100 Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions In-Reply-To: <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov> References: <1424186090.3298.2.camel@gmail.com> <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov> Message-ID: <1424424909.3272.1.camel@gmail.com> Barry, > > First, is your pressure problem changing dramatically at each new solver? That is, for example, is the mesh moving or are there very different numerical values in the matrix? Is the nonzero structure of the pressure matrix changing? No moving grids, the non-zero structure is maintained throughout the entire solution process. I am not sure about the "very different numerical values". I determined the minimal matrix coefficient to be approx -5e-7 and the maximal matrix coefficient to be 3e-6 (but once I use block structured grids with locally refined blocks the range will become wider), but there are some lines containing only a 1 on the diagonal. This comes from the variable indexing I use, which includes boundary values. 
If this should present a problem, I think I could scale the corresponding rows with a factor depending on the maximal/minimal element of the matrix. > Currently the entire GAMG process is done for each new solve, if you use the flag > > -pressure_pc_gamg_reuse_interpolation true > > it will create the interpolation needed for GAMG once and reuse it for all the solves. Please try that and see what happens. I attached the output for the additional solver option (-reuse_interpolation). Since there appear to be some inconsistencies with the previous output file for the GAMG solve I provided, I'll attach the results for the solution process without the flag for reusing the interpolation once again. So far wall clock time has been reduced by almost 50%. Fabian > > Then I will have many more suggestions. > > > Barry > > > > > On Feb 17, 2015, at 9:14 AM, Fabian Gabel wrote: > > > > Dear PETSc team, > > > > I am trying to optimize the solver parameters for the linear system I > > get, when I discretize the pressure correction equation Poisson equation > > with Neumann boundary conditions) in a SIMPLE-type algorithm using a > > finite volume method. > > > > The resulting system is symmetric and positive semi-definite. A basis to > > the associated nullspace has been provided to the KSP object. > > > > Using a CG solver with ICC preconditioning the solver needs a lot of > > inner iterations to converge (-ksp_monitor -ksp_view output attached for > > a case with approx. 2e6 unknowns; the lines beginning with 000XXXX show > > the relative residual regarding the initial residual in the outer > > iteration no. 1 for the variables u,v,w,p). Furthermore I don't quite > > understand, why the solver reports > > > > Linear solve did not converge due to DIVERGED_INDEFINITE_PC > > > > at the later stages of my Picard iteration process (iteration 0001519). > > > > I then tried out CG+GAMG preconditioning with success regarding the > > number of inner iterations, but without advantages regarding wall time > > (output attached). Also the DIVERGED_INDEFINITE_PC reason shows up > > repeatedly after iteration 0001487. I used the following options > > > > -pressure_mg_coarse_sub_pc_type svd > > -pressure_mg_levels_ksp_rtol 1e-4 > > -pressure_mg_levels_ksp_type richardson > > -pressure_mg_levels_pc_type sor > > -pressure_pc_gamg_agg_nsmooths 1 > > -pressure_pc_type gamg > > > > I would like to get an opinion on how the solver performance could be > > increased further. -log_summary shows that my code spends 80% of the > > time solving the linear systems for the pressure correction (STAGE 2: > > PRESSCORR). Furthermore, do you know what could be causing the > > DIVERGED_INDEFINITE_PC converged reason? > > > > Regards, > > Fabian Gabel > > > -------------- next part -------------- Sender: LSF System Subject: Job 544044: in cluster Done Job was submitted from host by user in cluster . Job was executed on host(s) , in queue , as user in cluster . was used as the home directory. was used as the working directory. Started at Thu Feb 19 20:41:19 2015 Results reported at Fri Feb 20 03:10:18 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input #! 
/bin/sh #BSUB -J mg_test #BSUB -o /home/gu08vomo/thesis/mgtest/selfcontained.gamg.128.out.%J #BSUB -n 1 #BSUB -W 14:00 #BSUB -x #BSUB -q test_mpi2 #BSUB -a openmpi module load openmpi/intel/1.8.2 #export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg_selfcontained/ export OUTPUTDIR=/home/gu08vomo/thesis/coupling export PETSC_OPS="-options_file ops.gamg" cat ops.gamg echo "PETSC_DIR="$PETSC_DIR echo "MYWORKDIR="$MYWORKDIR cd $MYWORKDIR mpirun -n 1 ./caffa3d.MB.lnx ${PETSC_OPS} ------------------------------------------------------------ Successfully completed. Resource usage summary: CPU time : 23345.93 sec. Max Memory : 2114 MB Average Memory : 2091.60 MB Total Requested Memory : - Delta Memory : - (Delta: the difference between total requested memory and actual max usage.) Max Swap : 2872 MB Max Processes : 6 Max Threads : 11 The output (if any) follows: Modules: loading openmpi/intel/1.8.2 -momentum_ksp_type gmres -pressure_pc_type gamg -pressure_mg_coarse_sub_pc_type svd -pressure_pc_gamg_agg_nsmooths 1 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_mg_levels_ksp_rtol 1e-4 -pressure_pc_gamg_reuse_interpolation true -log_summary -options_left -pressure_ksp_converged_reason PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg_selfcontained/ ENTER PROBLEM NAME (SIX CHARACTERS): *************************************************** NAME OF PROBLEM SOLVED control *************************************************** *************************************************** CONTROL SETTINGS *************************************************** LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD F F F F F F F F IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD 8 9 8 1 0 2 2 3 1 1 1 SORMAX, SLARGE, ALFA 0.1000E-07 0.1000E+31 0.9200E+00 (URF(I),I=1,5) 0.9000E+00 0.9000E+00 0.9000E+00 0.1000E+00 0.1000E+01 (SOR(I),I=1,5) 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 (GDS(I),I=1,5) - BLENDING (CDS-UDS) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 LSG 100000 *************************************************** START SIMPLE RELAXATIONS *************************************************** Linear solve converged due to CONVERGED_RTOL iterations 2 KSP Object:(pressure_) 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=0.1, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(pressure_) 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=4 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (pressure_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (pressure_mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (pressure_mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left 
preconditioning using NONE norm type for convergence test PC Object: (pressure_mg_coarse_sub_) 1 MPI processes type: svd linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=26, cols=26 total: nonzeros=536, allocated nonzeros=536 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=26, cols=26 total: nonzeros=536, allocated nonzeros=536 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (pressure_mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (pressure_mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2781, cols=2781 total: nonzeros=156609, allocated nonzeros=156609 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (pressure_mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (pressure_mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=188698, cols=188698 total: nonzeros=6.12809e+06, allocated nonzeros=6.12809e+06 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (pressure_mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=0.0001, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (pressure_mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2197000, cols=2197000 total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07 total number of mallocs used during MatSetValues calls =0 not using I-node routines 0000001 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 2 0000002 0.7654E+00 0.7194E+00 0.7661E+00 0.7330E+00 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 3 0000003 0.4442E+00 0.3597E+00 0.4375E+00 0.2886E+00 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 5 0000004 
0.1641E+00 0.1296E+00 0.1648E+00 0.4463E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 5 0000005 0.5112E-01 0.3598E-01 0.5166E-01 0.3104E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 0000006 0.1957E-01 0.7820E-02 0.1983E-01 0.1447E-01 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 0000007 0.1497E-01 0.7200E-02 0.1495E-01 0.8590E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 6 0000008 0.1272E-01 0.5662E-02 0.1269E-01 0.8206E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000009 0.1135E-01 0.4639E-02 0.1135E-01 0.5068E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000010 0.1033E-01 0.4072E-02 0.1035E-01 0.5708E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000011 0.9421E-02 0.3706E-02 0.9432E-02 0.3233E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000012 0.8734E-02 0.3483E-02 0.8746E-02 0.3921E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000013 0.8122E-02 0.3221E-02 0.8130E-02 0.2095E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000014 0.7652E-02 0.3058E-02 0.7662E-02 0.3045E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000015 0.7237E-02 0.2890E-02 0.7245E-02 0.1618E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000016 0.6956E-02 0.2818E-02 0.6965E-02 0.1471E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000017 0.6339E-02 0.2521E-02 0.6346E-02 0.1578E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000018 0.6017E-02 0.2392E-02 0.6023E-02 0.1044E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000019 0.5734E-02 0.2283E-02 0.5740E-02 0.1212E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000020 0.5476E-02 0.2180E-02 0.5481E-02 0.7940E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000021 0.5248E-02 0.2095E-02 0.5254E-02 0.1054E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000022 0.5044E-02 0.2020E-02 0.5050E-02 0.8217E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000023 0.4875E-02 0.1970E-02 0.4881E-02 0.1209E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 7 0000024 0.4745E-02 0.1949E-02 0.4751E-02 0.1315E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000025 0.4687E-02 0.1991E-02 0.4694E-02 0.4871E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000026 0.4312E-02 0.1718E-02 0.4317E-02 0.4958E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000027 0.4170E-02 0.1661E-02 0.4174E-02 0.4818E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000028 0.4040E-02 0.1617E-02 0.4045E-02 0.5838E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000029 0.3930E-02 0.1591E-02 0.3935E-02 0.7477E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000030 0.3852E-02 0.1602E-02 0.3858E-02 0.1019E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000031 0.3836E-02 0.1682E-02 0.3842E-02 0.1235E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000032 0.3904E-02 0.1425E-02 0.3914E-02 0.4310E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000033 0.3462E-02 0.1459E-02 0.3465E-02 0.4804E-03 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000034 0.3375E-02 0.1481E-02 0.3378E-02 0.4951E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000035 0.3307E-02 0.1300E-02 0.3311E-02 0.5204E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000036 0.3259E-02 0.1284E-02 0.3263E-02 0.7728E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000037 0.3254E-02 0.1329E-02 0.3259E-02 0.1037E-02 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000038 0.3346E-02 0.1486E-02 0.3353E-02 0.3282E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000039 0.2956E-02 0.1177E-02 0.2960E-02 0.2359E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000040 0.2892E-02 0.1160E-02 0.2896E-02 0.4576E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000041 0.2838E-02 0.1160E-02 0.2841E-02 0.5030E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000042 0.2811E-02 0.1200E-02 0.2815E-02 0.7012E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000043 0.2820E-02 0.1068E-02 0.2824E-02 0.8205E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000044 0.2892E-02 0.1123E-02 0.2897E-02 0.4962E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000045 0.2573E-02 0.1220E-02 0.2576E-02 0.2104E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000046 0.2534E-02 0.9942E-03 0.2537E-02 0.4036E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000047 0.2498E-02 0.9794E-03 0.2501E-02 0.4000E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000048 0.2489E-02 0.9990E-03 0.2492E-02 0.7027E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000049 0.2529E-02 0.1090E-02 0.2532E-02 0.8338E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000050 0.2644E-02 0.9371E-03 0.2648E-02 0.3577E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000051 0.2274E-02 0.1012E-02 0.2276E-02 0.1423E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000052 0.2239E-02 0.8736E-03 0.2242E-02 0.3321E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000053 0.2206E-02 0.8618E-03 0.2209E-02 0.3065E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000054 0.2193E-02 0.8675E-03 0.2196E-02 0.5479E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000055 0.2211E-02 0.9236E-03 0.2213E-02 0.6273E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000056 0.2286E-02 0.8231E-03 0.2289E-02 0.2699E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000057 0.2029E-02 0.8714E-03 0.2032E-02 0.1083E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000058 0.1999E-02 0.7774E-03 0.2002E-02 0.2617E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000059 0.1970E-02 0.7665E-03 0.1973E-02 0.2341E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000060 0.1954E-02 0.7659E-03 0.1957E-02 0.4193E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 8 0000061 0.1959E-02 0.7992E-03 0.1961E-02 0.5482E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000062 0.2019E-02 0.8872E-03 0.2021E-02 0.1874E-03 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 10 0000063 0.1828E-02 0.7171E-03 0.1830E-02 0.1414E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000064 0.1802E-02 0.7103E-03 0.1804E-02 0.2711E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000065 0.1780E-02 0.7150E-03 0.1782E-02 0.2977E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000066 0.1775E-02 0.7407E-03 0.1777E-02 0.4188E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000067 0.1793E-02 0.6706E-03 0.1794E-02 0.4859E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000068 0.1853E-02 0.7134E-03 0.1854E-02 0.1724E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000069 0.1658E-02 0.6460E-03 0.1660E-02 0.1030E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000070 0.1634E-02 0.6452E-03 0.1636E-02 0.2447E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000071 0.1615E-02 0.6498E-03 0.1617E-02 0.2467E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000072 0.1610E-02 0.6750E-03 0.1612E-02 0.3604E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000073 0.1623E-02 0.6071E-03 0.1624E-02 0.4072E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000074 0.1670E-02 0.6422E-03 0.1672E-02 0.2927E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000075 0.1511E-02 0.6956E-03 0.1513E-02 0.1032E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000076 0.1498E-02 0.5768E-03 0.1500E-02 0.2234E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000077 0.1486E-02 0.5720E-03 0.1488E-02 0.2207E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000078 0.1490E-02 0.5875E-03 0.1492E-02 0.4043E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000079 0.1524E-02 0.6456E-03 0.1525E-02 0.4684E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000080 0.1606E-02 0.5600E-03 0.1607E-02 0.2275E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000081 0.1386E-02 0.6101E-03 0.1387E-02 0.8311E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000082 0.1372E-02 0.5256E-03 0.1374E-02 0.1982E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000083 0.1360E-02 0.5216E-03 0.1362E-02 0.1917E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000084 0.1360E-02 0.5293E-03 0.1362E-02 0.3408E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000085 0.1382E-02 0.5710E-03 0.1383E-02 0.3904E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000086 0.1443E-02 0.5078E-03 0.1443E-02 0.1817E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000087 0.1275E-02 0.5449E-03 0.1276E-02 0.6985E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000088 0.1262E-02 0.4818E-03 0.1264E-02 0.1668E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000089 0.1250E-02 0.4775E-03 0.1252E-02 0.1587E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000090 0.1247E-02 0.4809E-03 0.1248E-02 0.2788E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000091 0.1259E-02 0.5097E-03 0.1261E-02 0.3156E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000092 
0.1303E-02 0.4637E-03 0.1303E-02 0.1476E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000093 0.1176E-02 0.4901E-03 0.1178E-02 0.1786E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000094 0.1165E-02 0.5038E-03 0.1166E-02 0.1799E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000095 0.1161E-02 0.4410E-03 0.1162E-02 0.1606E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000096 0.1165E-02 0.4456E-03 0.1166E-02 0.2954E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000097 0.1190E-02 0.4843E-03 0.1191E-02 0.3493E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000098 0.1251E-02 0.4346E-03 0.1251E-02 0.1745E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000099 0.1089E-02 0.4733E-03 0.1091E-02 0.6287E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000100 0.1081E-02 0.4100E-03 0.1082E-02 0.1508E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000101 0.1072E-02 0.4073E-03 0.1074E-02 0.1477E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000102 0.1073E-02 0.4132E-03 0.1075E-02 0.2604E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000103 0.1091E-02 0.4455E-03 0.1092E-02 0.2983E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000104 0.1139E-02 0.3984E-03 0.1139E-02 0.1423E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000105 0.1011E-02 0.4275E-03 0.1012E-02 0.5486E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000106 0.1002E-02 0.3791E-03 0.1003E-02 0.1289E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000107 0.9939E-03 0.3763E-03 0.9951E-03 0.1251E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000108 0.9925E-03 0.3793E-03 0.9936E-03 0.2172E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000109 0.1003E-02 0.4023E-03 0.1004E-02 0.2469E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000110 0.1039E-02 0.3670E-03 0.1039E-02 0.1175E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000111 0.9400E-03 0.3884E-03 0.9411E-03 0.4702E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000112 0.9314E-03 0.3515E-03 0.9326E-03 0.1080E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000113 0.9232E-03 0.3486E-03 0.9243E-03 0.1045E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000114 0.9200E-03 0.3496E-03 0.9211E-03 0.1799E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000115 0.9259E-03 0.3656E-03 0.9268E-03 0.2472E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000116 0.9561E-03 0.4049E-03 0.9566E-03 0.3358E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000117 0.1017E-02 0.3465E-03 0.1017E-02 0.1589E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000118 0.8667E-03 0.3811E-03 0.8678E-03 0.7975E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000119 0.8612E-03 0.3240E-03 0.8623E-03 0.1111E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000120 0.8578E-03 0.3235E-03 0.8588E-03 0.1533E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000121 0.8630E-03 0.3341E-03 
0.8638E-03 0.2287E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000122 0.8873E-03 0.3706E-03 0.8878E-03 0.2975E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000123 0.9398E-03 0.3204E-03 0.9399E-03 0.1344E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000124 0.8088E-03 0.3522E-03 0.8098E-03 0.5957E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000125 0.8036E-03 0.3015E-03 0.8046E-03 0.1102E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000126 0.7996E-03 0.3006E-03 0.8005E-03 0.1291E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000127 0.8031E-03 0.3073E-03 0.8039E-03 0.2047E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000128 0.8216E-03 0.3367E-03 0.8222E-03 0.2506E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000129 0.8655E-03 0.2962E-03 0.8657E-03 0.1150E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000130 0.7558E-03 0.3227E-03 0.7567E-03 0.4880E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000131 0.7506E-03 0.2811E-03 0.7514E-03 0.1009E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000132 0.7460E-03 0.2798E-03 0.7468E-03 0.1086E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000133 0.7475E-03 0.2838E-03 0.7482E-03 0.1776E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000134 0.7604E-03 0.3062E-03 0.7610E-03 0.2101E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000135 0.7945E-03 0.2747E-03 0.7948E-03 0.9760E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000136 0.7071E-03 0.2953E-03 0.7078E-03 0.4072E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000137 0.7017E-03 0.2624E-03 0.7025E-03 0.8785E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000138 0.6968E-03 0.2608E-03 0.6976E-03 0.9057E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000139 0.6966E-03 0.2631E-03 0.6973E-03 0.1507E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000140 0.7050E-03 0.2795E-03 0.7056E-03 0.1748E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000141 0.7305E-03 0.2554E-03 0.7309E-03 0.8246E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000142 0.6621E-03 0.2708E-03 0.6628E-03 0.3419E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000143 0.6568E-03 0.2453E-03 0.6575E-03 0.7491E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000144 0.6517E-03 0.2436E-03 0.6524E-03 0.7552E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000145 0.6502E-03 0.2447E-03 0.6508E-03 0.1269E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000146 0.6552E-03 0.2565E-03 0.6558E-03 0.1776E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000147 0.6775E-03 0.2847E-03 0.6779E-03 0.2377E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000148 0.7225E-03 0.2438E-03 0.7225E-03 0.1154E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000149 0.6148E-03 0.2689E-03 0.6154E-03 0.5615E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000150 0.6115E-03 0.2280E-03 0.6122E-03 0.7955E-04 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000151 0.6098E-03 0.2280E-03 0.6104E-03 0.1097E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000152 0.6143E-03 0.2361E-03 0.6148E-03 0.1648E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000153 0.6327E-03 0.2630E-03 0.6330E-03 0.2129E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000154 0.6718E-03 0.2269E-03 0.6718E-03 0.9835E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000155 0.5767E-03 0.2504E-03 0.5773E-03 0.4277E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000156 0.5736E-03 0.2135E-03 0.5741E-03 0.7934E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000157 0.5713E-03 0.2131E-03 0.5718E-03 0.9365E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000158 0.5745E-03 0.2184E-03 0.5750E-03 0.1485E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 9 0000159 0.5889E-03 0.2405E-03 0.5892E-03 0.1814E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000160 0.6221E-03 0.2109E-03 0.6221E-03 0.8473E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000161 0.5414E-03 0.2309E-03 0.5418E-03 0.3557E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000162 0.5381E-03 0.2001E-03 0.5386E-03 0.7318E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000163 0.5353E-03 0.1994E-03 0.5358E-03 0.7973E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000164 0.5370E-03 0.2028E-03 0.5375E-03 0.1300E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000165 0.5474E-03 0.2199E-03 0.5477E-03 0.1538E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000166 0.5737E-03 0.1966E-03 0.5738E-03 0.7252E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000167 0.5084E-03 0.2123E-03 0.5089E-03 0.3004E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000168 0.5050E-03 0.1878E-03 0.5054E-03 0.6436E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000169 0.5019E-03 0.1868E-03 0.5023E-03 0.6726E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000170 0.5024E-03 0.1889E-03 0.5027E-03 0.1115E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000171 0.5095E-03 0.2018E-03 0.5098E-03 0.1295E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000172 0.5296E-03 0.1836E-03 0.5298E-03 0.6188E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000173 0.4777E-03 0.1957E-03 0.4781E-03 0.2550E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000174 0.4742E-03 0.1763E-03 0.4746E-03 0.5548E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000175 0.4710E-03 0.1752E-03 0.4713E-03 0.5675E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000176 0.4704E-03 0.1764E-03 0.4708E-03 0.9496E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000177 0.4750E-03 0.1859E-03 0.4753E-03 0.1337E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000178 0.4930E-03 0.2079E-03 0.4932E-03 0.4391E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000179 0.4493E-03 0.1693E-03 0.4496E-03 0.3741E-04 0.0000E+00 Linear 
solve converged due to CONVERGED_RTOL iterations 10 0000180 0.4458E-03 0.1689E-03 0.4462E-03 0.6681E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000181 0.4435E-03 0.1715E-03 0.4438E-03 0.7805E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000182 0.4455E-03 0.1794E-03 0.4458E-03 0.1020E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000183 0.4540E-03 0.1630E-03 0.4543E-03 0.1249E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000184 0.4746E-03 0.1776E-03 0.4748E-03 0.4554E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000185 0.4227E-03 0.1588E-03 0.4230E-03 0.2914E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000186 0.4191E-03 0.1599E-03 0.4194E-03 0.6642E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000187 0.4168E-03 0.1626E-03 0.4171E-03 0.7010E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000188 0.4185E-03 0.1712E-03 0.4188E-03 0.9635E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000189 0.4261E-03 0.1531E-03 0.4263E-03 0.1137E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000190 0.4445E-03 0.1665E-03 0.4446E-03 0.4378E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000191 0.3977E-03 0.1492E-03 0.3979E-03 0.2408E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000192 0.3943E-03 0.1503E-03 0.3945E-03 0.6230E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000193 0.3920E-03 0.1525E-03 0.3923E-03 0.6226E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000194 0.3933E-03 0.1603E-03 0.3935E-03 0.8894E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000195 0.3998E-03 0.1438E-03 0.3999E-03 0.1031E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000196 0.4158E-03 0.1557E-03 0.4159E-03 0.4061E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000197 0.3742E-03 0.1403E-03 0.3744E-03 0.2071E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000198 0.3710E-03 0.1412E-03 0.3712E-03 0.5712E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000199 0.3688E-03 0.1431E-03 0.3690E-03 0.5552E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000200 0.3696E-03 0.1502E-03 0.3698E-03 0.8150E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000201 0.3751E-03 0.1352E-03 0.3752E-03 0.9356E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000202 0.3890E-03 0.1456E-03 0.3891E-03 0.3714E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000203 0.3522E-03 0.1320E-03 0.3524E-03 0.1826E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000204 0.3492E-03 0.1327E-03 0.3493E-03 0.5202E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000205 0.3470E-03 0.1344E-03 0.3471E-03 0.4991E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000206 0.3475E-03 0.1406E-03 0.3477E-03 0.7452E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000207 0.3521E-03 0.1272E-03 0.3522E-03 0.8514E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000208 0.3642E-03 0.1363E-03 0.3642E-03 0.3380E-04 0.0000E+00 Linear solve converged 
due to CONVERGED_RTOL iterations 11 0000209 0.3316E-03 0.1242E-03 0.3317E-03 0.1633E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000210 0.3287E-03 0.1248E-03 0.3288E-03 0.4733E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000211 0.3265E-03 0.1262E-03 0.3267E-03 0.4520E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000212 0.3269E-03 0.1318E-03 0.3270E-03 0.6826E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000213 0.3307E-03 0.1197E-03 0.3308E-03 0.7786E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000214 0.3413E-03 0.1278E-03 0.3413E-03 0.1209E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000215 0.3624E-03 0.1207E-03 0.3624E-03 0.4753E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000216 0.3094E-03 0.1352E-03 0.3095E-03 0.2928E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000217 0.3079E-03 0.1144E-03 0.3081E-03 0.4019E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000218 0.3072E-03 0.1143E-03 0.3073E-03 0.5720E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000219 0.3095E-03 0.1178E-03 0.3096E-03 0.8150E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000220 0.3186E-03 0.1312E-03 0.3187E-03 0.1088E-03 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000221 0.3382E-03 0.1136E-03 0.3381E-03 0.4429E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000222 0.2914E-03 0.1261E-03 0.2915E-03 0.2233E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000223 0.2899E-03 0.1077E-03 0.2900E-03 0.4024E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000224 0.2889E-03 0.1076E-03 0.2890E-03 0.4806E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000225 0.2907E-03 0.1102E-03 0.2907E-03 0.7508E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000226 0.2981E-03 0.1214E-03 0.2981E-03 0.9334E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000227 0.3151E-03 0.1066E-03 0.3151E-03 0.4049E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000228 0.2744E-03 0.1172E-03 0.2745E-03 0.1845E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000229 0.2729E-03 0.1015E-03 0.2730E-03 0.3757E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000230 0.2717E-03 0.1012E-03 0.2718E-03 0.4122E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000231 0.2729E-03 0.1031E-03 0.2729E-03 0.6738E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000232 0.2786E-03 0.1123E-03 0.2786E-03 0.8076E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000233 0.2928E-03 0.1000E-03 0.2927E-03 0.3621E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000234 0.2585E-03 0.1088E-03 0.2586E-03 0.1567E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000235 0.2570E-03 0.9566E-04 0.2570E-03 0.3375E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000236 0.2556E-03 0.9529E-04 0.2557E-03 0.3549E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000237 0.2562E-03 0.9664E-04 0.2563E-03 0.5946E-04 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 10 0000238 0.2606E-03 0.1040E-03 0.2606E-03 0.6992E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000239 0.2722E-03 0.9395E-04 0.2722E-03 0.3214E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000240 0.2436E-03 0.1011E-03 0.2436E-03 0.1355E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000241 0.2420E-03 0.9017E-04 0.2420E-03 0.2990E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000242 0.2405E-03 0.8973E-04 0.2406E-03 0.3082E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000243 0.2407E-03 0.9069E-04 0.2408E-03 0.5231E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000244 0.2440E-03 0.9658E-04 0.2440E-03 0.6096E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000245 0.2534E-03 0.8830E-04 0.2534E-03 0.2845E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000246 0.2295E-03 0.9407E-04 0.2295E-03 0.1181E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000247 0.2279E-03 0.8500E-04 0.2279E-03 0.2636E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000248 0.2264E-03 0.8453E-04 0.2265E-03 0.2693E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000249 0.2263E-03 0.8519E-04 0.2263E-03 0.4602E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000250 0.2287E-03 0.8995E-04 0.2287E-03 0.6504E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000251 0.2379E-03 0.1012E-03 0.2379E-03 0.2121E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000252 0.2164E-03 0.8187E-04 0.2164E-03 0.1794E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000253 0.2148E-03 0.8184E-04 0.2149E-03 0.3297E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000254 0.2139E-03 0.8340E-04 0.2139E-03 0.3899E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000255 0.2152E-03 0.8784E-04 0.2153E-03 0.5173E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000256 0.2201E-03 0.7910E-04 0.2202E-03 0.6376E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000257 0.2314E-03 0.8696E-04 0.2314E-03 0.2269E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000258 0.2040E-03 0.7713E-04 0.2040E-03 0.1473E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000259 0.2024E-03 0.7794E-04 0.2025E-03 0.3393E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000260 0.2016E-03 0.7966E-04 0.2016E-03 0.3651E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000261 0.2030E-03 0.8472E-04 0.2030E-03 0.5052E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000262 0.2078E-03 0.7458E-04 0.2078E-03 0.6018E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000263 0.2186E-03 0.8236E-04 0.2186E-03 0.2258E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000264 0.1924E-03 0.7272E-04 0.1924E-03 0.1272E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000265 0.1909E-03 0.7356E-04 0.1909E-03 0.3295E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000266 0.1900E-03 0.7515E-04 0.1901E-03 0.3372E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 10 0000267 0.1914E-03 0.8008E-04 0.1914E-03 0.4828E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000268 0.1960E-03 0.7030E-04 0.1959E-03 0.5657E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000269 0.2061E-03 0.7768E-04 0.2061E-03 0.2168E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000270 0.1813E-03 0.6858E-04 0.1813E-03 0.1138E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000271 0.1799E-03 0.6940E-04 0.1799E-03 0.3127E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000272 0.1792E-03 0.7089E-04 0.1792E-03 0.3119E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000273 0.1804E-03 0.7561E-04 0.1804E-03 0.4570E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000274 0.1847E-03 0.6630E-04 0.1847E-03 0.5304E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000275 0.1942E-03 0.7325E-04 0.1942E-03 0.2051E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000276 0.1710E-03 0.6469E-04 0.1710E-03 0.1040E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000277 0.1697E-03 0.6547E-04 0.1697E-03 0.2944E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000278 0.1689E-03 0.6687E-04 0.1689E-03 0.2900E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000279 0.1701E-03 0.7134E-04 0.1701E-03 0.4312E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000280 0.1741E-03 0.6254E-04 0.1741E-03 0.4978E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000281 0.1830E-03 0.6904E-04 0.1829E-03 0.1928E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000282 0.1612E-03 0.6103E-04 0.1612E-03 0.9613E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000283 0.1600E-03 0.6176E-04 0.1600E-03 0.2765E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000284 0.1593E-03 0.6308E-04 0.1593E-03 0.2709E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000285 0.1604E-03 0.6731E-04 0.1604E-03 0.4065E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000286 0.1641E-03 0.5899E-04 0.1641E-03 0.4681E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000287 0.1724E-03 0.6510E-04 0.1724E-03 0.1810E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000288 0.1520E-03 0.5759E-04 0.1520E-03 0.8957E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000289 0.1509E-03 0.5827E-04 0.1508E-03 0.2597E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000290 0.1502E-03 0.5952E-04 0.1502E-03 0.2541E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 10 0000291 0.1512E-03 0.6352E-04 0.1512E-03 0.3837E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000292 0.1547E-03 0.5566E-04 0.1547E-03 0.4413E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000293 0.1626E-03 0.6140E-04 0.1625E-03 0.1701E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000294 0.1434E-03 0.5434E-04 0.1434E-03 0.8393E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000295 0.1423E-03 0.5498E-04 0.1423E-03 0.2445E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 
0000296 0.1416E-03 0.5618E-04 0.1416E-03 0.2393E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000297 0.1426E-03 0.5998E-04 0.1426E-03 0.3628E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000298 0.1459E-03 0.5252E-04 0.1459E-03 0.4171E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000299 0.1534E-03 0.5795E-04 0.1533E-03 0.1601E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000300 0.1352E-03 0.5128E-04 0.1352E-03 0.7901E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000301 0.1342E-03 0.5189E-04 0.1342E-03 0.2307E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000302 0.1336E-03 0.5304E-04 0.1336E-03 0.2260E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000303 0.1346E-03 0.5666E-04 0.1345E-03 0.3436E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000304 0.1377E-03 0.4956E-04 0.1377E-03 0.3952E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000305 0.1447E-03 0.5472E-04 0.1447E-03 0.1511E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000306 0.1275E-03 0.4840E-04 0.1275E-03 0.7463E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000307 0.1266E-03 0.4898E-04 0.1265E-03 0.2181E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000308 0.1260E-03 0.5008E-04 0.1260E-03 0.2141E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000309 0.1270E-03 0.5355E-04 0.1269E-03 0.3261E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000310 0.1300E-03 0.4678E-04 0.1299E-03 0.3751E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000311 0.1367E-03 0.5169E-04 0.1367E-03 0.1429E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000312 0.1203E-03 0.4568E-04 0.1203E-03 0.7068E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000313 0.1194E-03 0.4624E-04 0.1194E-03 0.2067E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000314 0.1189E-03 0.4731E-04 0.1189E-03 0.2032E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000315 0.1198E-03 0.5063E-04 0.1198E-03 0.3100E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000316 0.1227E-03 0.4415E-04 0.1227E-03 0.3566E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000317 0.1291E-03 0.4885E-04 0.1291E-03 0.1354E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000318 0.1135E-03 0.4311E-04 0.1135E-03 0.6707E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000319 0.1126E-03 0.4366E-04 0.1126E-03 0.1962E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000320 0.1122E-03 0.4469E-04 0.1122E-03 0.1931E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000321 0.1131E-03 0.4788E-04 0.1130E-03 0.2950E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000322 0.1159E-03 0.4167E-04 0.1158E-03 0.3395E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000323 0.1220E-03 0.4618E-04 0.1220E-03 0.1285E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000324 0.1071E-03 0.4069E-04 0.1070E-03 0.6374E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000325 
0.1063E-03 0.4123E-04 0.1062E-03 0.1865E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000326 0.1058E-03 0.4223E-04 0.1058E-03 0.1838E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000327 0.1067E-03 0.4530E-04 0.1067E-03 0.2811E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000328 0.1094E-03 0.3934E-04 0.1094E-03 0.3234E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000329 0.1154E-03 0.4367E-04 0.1154E-03 0.1221E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000330 0.1010E-03 0.3841E-04 0.1010E-03 0.6063E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000331 0.1003E-03 0.3893E-04 0.1002E-03 0.1775E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000332 0.9987E-04 0.3990E-04 0.9986E-04 0.1751E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000333 0.1007E-03 0.4286E-04 0.1007E-03 0.2680E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000334 0.1034E-03 0.3714E-04 0.1034E-03 0.3083E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000335 0.1091E-03 0.4131E-04 0.1091E-03 0.1161E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000336 0.9530E-04 0.3626E-04 0.9529E-04 0.5772E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000337 0.9460E-04 0.3677E-04 0.9459E-04 0.1690E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000338 0.9425E-04 0.3771E-04 0.9424E-04 0.1669E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000339 0.9511E-04 0.4056E-04 0.9511E-04 0.2556E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000340 0.9766E-04 0.3506E-04 0.9765E-04 0.2939E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000341 0.1032E-03 0.3907E-04 0.1032E-03 0.1105E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000342 0.8993E-04 0.3423E-04 0.8992E-04 0.5496E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000343 0.8927E-04 0.3473E-04 0.8926E-04 0.1610E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000344 0.8896E-04 0.3564E-04 0.8895E-04 0.1591E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000345 0.8981E-04 0.3838E-04 0.8980E-04 0.2438E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000346 0.9228E-04 0.3309E-04 0.9227E-04 0.2803E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000347 0.9759E-04 0.3696E-04 0.9758E-04 0.1052E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000348 0.8486E-04 0.3231E-04 0.8485E-04 0.5234E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000349 0.8425E-04 0.3280E-04 0.8424E-04 0.1534E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000350 0.8397E-04 0.3368E-04 0.8396E-04 0.1516E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000351 0.8480E-04 0.3633E-04 0.8480E-04 0.2326E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000352 0.8720E-04 0.3124E-04 0.8719E-04 0.2673E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000353 0.9232E-04 0.3497E-04 0.9231E-04 0.1002E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000354 0.8009E-04 
0.3050E-04 0.8008E-04 0.4984E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000355 0.7951E-04 0.3097E-04 0.7951E-04 0.1462E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000356 0.7926E-04 0.3184E-04 0.7925E-04 0.1445E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000357 0.8008E-04 0.3438E-04 0.8008E-04 0.2218E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000358 0.8240E-04 0.2950E-04 0.8240E-04 0.2548E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000359 0.8733E-04 0.3308E-04 0.8732E-04 0.9537E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000360 0.7559E-04 0.2879E-04 0.7558E-04 0.4745E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000361 0.7505E-04 0.2925E-04 0.7505E-04 0.1393E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000362 0.7482E-04 0.3009E-04 0.7482E-04 0.1377E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000363 0.7563E-04 0.3254E-04 0.7563E-04 0.2115E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000364 0.7788E-04 0.2785E-04 0.7787E-04 0.2428E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000365 0.8261E-04 0.3130E-04 0.8261E-04 0.9079E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000366 0.7135E-04 0.2718E-04 0.7134E-04 0.4516E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000367 0.7084E-04 0.2763E-04 0.7084E-04 0.1326E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000368 0.7064E-04 0.2844E-04 0.7064E-04 0.1311E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000369 0.7143E-04 0.3079E-04 0.7143E-04 0.2016E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000370 0.7360E-04 0.2629E-04 0.7360E-04 0.2313E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000371 0.7815E-04 0.2961E-04 0.7815E-04 0.8641E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000372 0.6735E-04 0.2566E-04 0.6735E-04 0.4297E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000373 0.6688E-04 0.2609E-04 0.6688E-04 0.1263E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000374 0.6669E-04 0.2687E-04 0.6669E-04 0.1249E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000375 0.6747E-04 0.2913E-04 0.6747E-04 0.1920E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000376 0.6957E-04 0.2482E-04 0.6957E-04 0.2202E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000377 0.7393E-04 0.2800E-04 0.7393E-04 0.8221E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000378 0.6358E-04 0.2422E-04 0.6358E-04 0.4087E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000379 0.6314E-04 0.2464E-04 0.6314E-04 0.1202E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000380 0.6297E-04 0.2539E-04 0.6298E-04 0.1188E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000381 0.6373E-04 0.2756E-04 0.6373E-04 0.1829E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000382 0.6575E-04 0.2343E-04 0.6575E-04 0.2095E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000383 0.6994E-04 0.2649E-04 
0.6994E-04 0.7819E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000384 0.6003E-04 0.2286E-04 0.6003E-04 0.3885E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000385 0.5961E-04 0.2327E-04 0.5962E-04 0.1144E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000386 0.5946E-04 0.2400E-04 0.5947E-04 0.1130E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000387 0.6020E-04 0.2607E-04 0.6021E-04 0.1740E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000388 0.6215E-04 0.2212E-04 0.6215E-04 0.1993E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000389 0.6615E-04 0.2505E-04 0.6616E-04 0.7433E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000390 0.5668E-04 0.2158E-04 0.5668E-04 0.3692E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000391 0.5629E-04 0.2197E-04 0.5629E-04 0.1088E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000392 0.5616E-04 0.2267E-04 0.5616E-04 0.1075E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000393 0.5687E-04 0.2466E-04 0.5688E-04 0.1655E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000394 0.5874E-04 0.2088E-04 0.5875E-04 0.1895E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000395 0.6257E-04 0.2369E-04 0.6258E-04 0.7064E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000396 0.5352E-04 0.2037E-04 0.5352E-04 0.3507E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000397 0.5315E-04 0.2075E-04 0.5316E-04 0.1034E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000398 0.5304E-04 0.2142E-04 0.5304E-04 0.1021E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000399 0.5373E-04 0.2332E-04 0.5374E-04 0.1574E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000400 0.5552E-04 0.1971E-04 0.5553E-04 0.1801E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000401 0.5918E-04 0.2240E-04 0.5919E-04 0.6710E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000402 0.5054E-04 0.1923E-04 0.5055E-04 0.3329E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000403 0.5020E-04 0.1959E-04 0.5021E-04 0.9820E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000404 0.5009E-04 0.2024E-04 0.5010E-04 0.9698E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000405 0.5076E-04 0.2205E-04 0.5077E-04 0.1496E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000406 0.5248E-04 0.1860E-04 0.5249E-04 0.1710E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000407 0.5598E-04 0.2117E-04 0.5598E-04 0.6371E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000408 0.4773E-04 0.1815E-04 0.4774E-04 0.3160E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000409 0.4741E-04 0.1850E-04 0.4742E-04 0.9325E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12 0000410 0.4732E-04 0.1912E-04 0.4733E-04 0.9208E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000411 0.4796E-04 0.2085E-04 0.4797E-04 0.1420E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 11 0000412 0.4960E-04 0.1756E-04 0.4962E-04 
0.1624E-04 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 12
[Solver log, time steps 0000413 through 0001022: one record per step of the form

    <step>  <five residuals in E-format>
    Linear solve converged due to CONVERGED_RTOL iterations <n>

 for example

    0000413  0.5294E-04  0.2001E-04  0.5295E-04  0.6046E-05  0.0000E+00
    Linear solve converged due to CONVERGED_RTOL iterations 13

 Every step in this range reports CONVERGED_RTOL. The per-step KSP iteration count rises gradually from 11-13 near step 0000413 to 15-16 near step 0001022, while the leading residual column decreases overall (with small oscillations) from about 0.53E-04 to about 0.44E-06.]
0001023 0.4738E-06 0.5856E-07 0.4752E-06 0.3920E-07
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001024 0.4025E-06 0.5200E-07 0.4041E-06 0.2536E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001025 0.4011E-06 0.4383E-07 0.4026E-06 0.4545E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001026 0.4004E-06 0.4536E-07 0.4019E-06 0.5705E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001027 0.4021E-06 0.5115E-07 0.4036E-06 0.8312E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001028 0.4085E-06 0.4577E-07 0.4099E-06 0.1142E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001029 0.4243E-06 0.6259E-07 0.4256E-06 0.1613E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001030 0.4559E-06 0.5535E-07 0.4572E-06 0.3788E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001031 0.3862E-06 0.4902E-07 0.3877E-06 0.2450E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001032 0.3848E-06 0.4096E-07 0.3863E-06 0.4403E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001033 0.3842E-06 0.4249E-07 0.3857E-06 0.5513E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001034 0.3860E-06 0.4818E-07 0.3874E-06 0.8045E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001035 0.3922E-06 0.4293E-07 0.3936E-06 0.1104E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001036 0.4077E-06 0.5934E-07 0.4090E-06 0.1560E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001037 0.4386E-06 0.5233E-07 0.4398E-06 0.3651E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001038 0.3706E-06 0.4621E-07 0.3719E-06 0.2372E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001039 0.3693E-06 0.3828E-07 0.3707E-06 0.4256E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001040 0.3687E-06 0.3979E-07 0.3701E-06 0.5330E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001041 0.3705E-06 0.4537E-07 0.3718E-06 0.7778E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001042 0.3766E-06 0.4026E-07 0.3779E-06 0.1067E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001043 0.3917E-06 0.5620E-07 0.3929E-06 0.1507E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001044 0.4217E-06 0.4944E-07 0.4228E-06 0.3514E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001045 0.3556E-06 0.4356E-07 0.3569E-06 0.2287E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001046 0.3544E-06 0.3577E-07 0.3557E-06 0.4107E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001047 0.3539E-06 0.3727E-07 0.3552E-06 0.5136E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001048 0.3556E-06 0.4273E-07 0.3569E-06 0.7502E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001049 0.3616E-06 0.3776E-07 0.3628E-06 0.1029E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001050 0.3763E-06 0.5323E-07 0.3775E-06 0.1453E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001051 0.4054E-06 0.4672E-07 0.4065E-06 0.3378E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001052 0.3413E-06 0.4107E-07 0.3425E-06 0.2205E-07 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 16 0001053 0.3401E-06 0.3343E-07 0.3413E-06 0.3959E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001054 0.3397E-06 0.3491E-07 0.3409E-06 0.4949E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001055 0.3414E-06 0.4023E-07 0.3426E-06 0.7232E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001056 0.3472E-06 0.3541E-07 0.3483E-06 0.9912E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001057 0.3615E-06 0.5038E-07 0.3625E-06 0.1400E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001058 0.3896E-06 0.4413E-07 0.3906E-06 0.3243E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001059 0.3276E-06 0.3871E-07 0.3287E-06 0.2121E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001060 0.3265E-06 0.3124E-07 0.3276E-06 0.3811E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001061 0.3261E-06 0.3270E-07 0.3272E-06 0.4758E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001062 0.3277E-06 0.3788E-07 0.3288E-06 0.6960E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001063 0.3334E-06 0.3322E-07 0.3345E-06 0.9534E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001064 0.3472E-06 0.4769E-07 0.3482E-06 0.1347E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001065 0.3744E-06 0.4169E-07 0.3753E-06 0.3112E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001066 0.3144E-06 0.3650E-07 0.3155E-06 0.2041E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001067 0.3134E-06 0.2920E-07 0.3145E-06 0.3666E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001068 0.3130E-06 0.3063E-07 0.3140E-06 0.4576E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001069 0.3147E-06 0.3566E-07 0.3157E-06 0.6696E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001070 0.3201E-06 0.3115E-07 0.3211E-06 0.8774E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001071 0.3332E-06 0.3411E-07 0.3341E-06 0.1279E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001072 0.3585E-06 0.3910E-07 0.3593E-06 0.2989E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001073 0.3018E-06 0.3403E-07 0.3028E-06 0.1912E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001074 0.3008E-06 0.2733E-07 0.3018E-06 0.3510E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001075 0.3004E-06 0.2867E-07 0.3014E-06 0.4318E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001076 0.3020E-06 0.3356E-07 0.3030E-06 0.6387E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001077 0.3072E-06 0.2908E-07 0.3081E-06 0.8343E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001078 0.3196E-06 0.3204E-07 0.3205E-06 0.1221E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001079 0.3437E-06 0.3662E-07 0.3445E-06 0.2817E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001080 0.2898E-06 0.3198E-07 0.2907E-06 0.1832E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001081 0.2888E-06 0.2552E-07 0.2897E-06 0.3337E-07 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 16 0001082 0.2884E-06 0.2679E-07 0.2893E-06 0.4129E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001083 0.2899E-06 0.3135E-07 0.2908E-06 0.6088E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001084 0.2948E-06 0.2721E-07 0.2957E-06 0.7973E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001085 0.3066E-06 0.2996E-07 0.3074E-06 0.1166E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001086 0.3297E-06 0.3437E-07 0.3304E-06 0.2687E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001087 0.2782E-06 0.3002E-07 0.2791E-06 0.1742E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001088 0.2773E-06 0.2385E-07 0.2782E-06 0.3190E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001089 0.2769E-06 0.2507E-07 0.2778E-06 0.3932E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001090 0.2783E-06 0.2944E-07 0.2792E-06 0.5817E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001091 0.2830E-06 0.2547E-07 0.2838E-06 0.7616E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001092 0.2943E-06 0.2813E-07 0.2951E-06 0.1116E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001093 0.3163E-06 0.3230E-07 0.3171E-06 0.2555E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001094 0.2671E-06 0.2823E-07 0.2679E-06 0.1668E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001095 0.2662E-06 0.2229E-07 0.2671E-06 0.3049E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001096 0.2659E-06 0.2345E-07 0.2667E-06 0.3762E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001097 0.2672E-06 0.2761E-07 0.2680E-06 0.5566E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001098 0.2717E-06 0.2385E-07 0.2725E-06 0.7294E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001099 0.2825E-06 0.2638E-07 0.2832E-06 0.1069E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001100 0.3037E-06 0.3037E-07 0.3044E-06 0.2439E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001101 0.2565E-06 0.2655E-07 0.2573E-06 0.1595E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001102 0.2556E-06 0.2083E-07 0.2564E-06 0.2921E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001103 0.2553E-06 0.2195E-07 0.2561E-06 0.3598E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001104 0.2566E-06 0.2595E-07 0.2573E-06 0.5333E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001105 0.2609E-06 0.2234E-07 0.2616E-06 0.6990E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001106 0.2713E-06 0.2479E-07 0.2720E-06 0.1025E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001107 0.2916E-06 0.2861E-07 0.2923E-06 0.2328E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001108 0.2463E-06 0.2500E-07 0.2470E-06 0.1530E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001109 0.2455E-06 0.1947E-07 0.2462E-06 0.2802E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001110 0.2452E-06 0.2056E-07 0.2459E-06 0.3451E-07 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 16 0001111 0.2464E-06 0.2439E-07 0.2471E-06 0.5119E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001112 0.2506E-06 0.2094E-07 0.2513E-06 0.6713E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001113 0.2606E-06 0.2330E-07 0.2613E-06 0.9846E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001114 0.2802E-06 0.2696E-07 0.2809E-06 0.2228E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001115 0.2365E-06 0.2356E-07 0.2372E-06 0.1469E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001116 0.2358E-06 0.1821E-07 0.2365E-06 0.2693E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001117 0.2355E-06 0.1926E-07 0.2362E-06 0.3315E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001118 0.2367E-06 0.2296E-07 0.2374E-06 0.4923E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001119 0.2407E-06 0.1964E-07 0.2414E-06 0.6458E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001120 0.2504E-06 0.2193E-07 0.2511E-06 0.9473E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001121 0.2694E-06 0.2545E-07 0.2701E-06 0.2136E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001122 0.2272E-06 0.2222E-07 0.2278E-06 0.1415E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001123 0.2265E-06 0.1702E-07 0.2271E-06 0.2595E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001124 0.2262E-06 0.1805E-07 0.2268E-06 0.3194E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001125 0.2274E-06 0.2164E-07 0.2280E-06 0.4748E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001126 0.2314E-06 0.1843E-07 0.2320E-06 0.6230E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001127 0.2408E-06 0.2066E-07 0.2414E-06 0.9137E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001128 0.2592E-06 0.2406E-07 0.2598E-06 0.2053E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001129 0.2182E-06 0.2098E-07 0.2188E-06 0.1366E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001130 0.2175E-06 0.1592E-07 0.2181E-06 0.2508E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001131 0.2173E-06 0.1693E-07 0.2179E-06 0.3087E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001132 0.2185E-06 0.2043E-07 0.2191E-06 0.4593E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001133 0.2224E-06 0.1732E-07 0.2231E-06 0.6029E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0001134 0.2317E-06 0.1950E-07 0.2323E-06 0.8838E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001135 0.2496E-06 0.2280E-07 0.2503E-06 0.1980E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001136 0.2096E-06 0.1985E-07 0.2102E-06 0.1325E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001137 0.2090E-06 0.1489E-07 0.2095E-06 0.2434E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001138 0.2088E-06 0.1590E-07 0.2094E-06 0.2997E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001139 0.2100E-06 0.1933E-07 0.2106E-06 0.4462E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 16 0001140 0.2139E-06 0.1629E-07 0.2145E-06 0.5860E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001141 0.2231E-06 0.1844E-07 0.2237E-06 0.8581E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001142 0.2408E-06 0.2165E-07 0.2414E-06 0.1918E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001143 0.2014E-06 0.1882E-07 0.2019E-06 0.1291E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001144 0.2008E-06 0.1393E-07 0.2013E-06 0.2373E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001145 0.2006E-06 0.1495E-07 0.2012E-06 0.2924E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001146 0.2019E-06 0.1834E-07 0.2025E-06 0.4357E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001147 0.2059E-06 0.1535E-07 0.2065E-06 0.5723E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001148 0.2150E-06 0.1748E-07 0.2157E-06 0.1553E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001149 0.1943E-06 0.1563E-07 0.1948E-06 0.7117E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001150 0.1935E-06 0.1308E-07 0.1940E-06 0.1823E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001151 0.1929E-06 0.1353E-07 0.1934E-06 0.1813E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001152 0.1930E-06 0.1545E-07 0.1935E-06 0.3103E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001153 0.1944E-06 0.1358E-07 0.1950E-06 0.3881E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001154 0.1988E-06 0.1881E-07 0.1994E-06 0.5979E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001155 0.2086E-06 0.1666E-07 0.2093E-06 0.1241E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001156 0.1867E-06 0.1522E-07 0.1872E-06 0.1034E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001157 0.1859E-06 0.1220E-07 0.1864E-06 0.1593E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001158 0.1855E-06 0.1275E-07 0.1860E-06 0.2188E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001159 0.1857E-06 0.1473E-07 0.1863E-06 0.3045E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001160 0.1876E-06 0.1297E-07 0.1882E-06 0.4372E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001161 0.1927E-06 0.1860E-07 0.1934E-06 0.6130E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001162 0.2039E-06 0.1638E-07 0.2047E-06 0.1434E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001163 0.1794E-06 0.1466E-07 0.1798E-06 0.9184E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001164 0.1787E-06 0.1145E-07 0.1792E-06 0.1793E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001165 0.1784E-06 0.1211E-07 0.1789E-06 0.2139E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001166 0.1789E-06 0.1444E-07 0.1795E-06 0.3271E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001167 0.1813E-06 0.1236E-07 0.1819E-06 0.4260E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001168 0.1873E-06 0.1382E-07 0.1880E-06 0.6364E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 
0001169 0.1995E-06 0.1612E-07 0.2003E-06 0.1424E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001170 0.1724E-06 0.1422E-07 0.1728E-06 0.1016E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001171 0.1718E-06 0.1074E-07 0.1723E-06 0.1821E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001172 0.1716E-06 0.1151E-07 0.1721E-06 0.2299E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001173 0.1725E-06 0.1410E-07 0.1731E-06 0.3399E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001174 0.1754E-06 0.1182E-07 0.1761E-06 0.4531E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001175 0.1823E-06 0.1347E-07 0.1831E-06 0.6623E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001176 0.1959E-06 0.1593E-07 0.1969E-06 0.1500E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001177 0.1657E-06 0.1389E-07 0.1661E-06 0.1030E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001178 0.1652E-06 0.1007E-07 0.1657E-06 0.1928E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001179 0.1652E-06 0.1096E-07 0.1657E-06 0.2219E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001180 0.1663E-06 0.1053E-07 0.1669E-06 0.3653E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001181 0.1697E-06 0.1450E-07 0.1704E-06 0.4702E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001182 0.1777E-06 0.1288E-07 0.1786E-06 0.1252E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001183 0.1599E-06 0.1171E-07 0.1603E-06 0.6346E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001184 0.1593E-06 0.9444E-08 0.1597E-06 0.1549E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001185 0.1589E-06 0.9911E-08 0.1593E-06 0.1620E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001186 0.1592E-06 0.1167E-07 0.1597E-06 0.2712E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001187 0.1609E-06 0.1007E-07 0.1615E-06 0.3466E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001188 0.1654E-06 0.1484E-07 0.1662E-06 0.5267E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001189 0.1752E-06 0.1298E-07 0.1761E-06 0.1115E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001190 0.1537E-06 0.1171E-07 0.1541E-06 0.9267E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001191 0.1532E-06 0.8852E-08 0.1536E-06 0.1489E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001192 0.1530E-06 0.9497E-08 0.1535E-06 0.2019E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001193 0.1537E-06 0.1162E-07 0.1542E-06 0.2867E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001194 0.1562E-06 0.9795E-08 0.1568E-06 0.3946E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001195 0.1620E-06 0.1114E-07 0.1628E-06 0.5677E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001196 0.1738E-06 0.1328E-07 0.1747E-06 0.1323E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001197 0.1478E-06 0.1159E-07 0.1482E-06 0.8725E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001198 
0.1474E-06 0.8337E-08 0.1478E-06 0.1720E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001199 0.1474E-06 0.9137E-08 0.1478E-06 0.1934E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001200 0.1484E-06 0.8774E-08 0.1490E-06 0.3249E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001201 0.1515E-06 0.1229E-07 0.1522E-06 0.4147E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001202 0.1587E-06 0.1089E-07 0.1596E-06 0.1123E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001203 0.1427E-06 0.9857E-08 0.1430E-06 0.5589E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001204 0.1421E-06 0.7817E-08 0.1425E-06 0.1408E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001205 0.1418E-06 0.8269E-08 0.1422E-06 0.1457E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001206 0.1422E-06 0.9925E-08 0.1426E-06 0.2470E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001207 0.1438E-06 0.8437E-08 0.1443E-06 0.3044E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001208 0.1481E-06 0.9512E-08 0.1488E-06 0.4762E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001209 0.1571E-06 0.1112E-07 0.1580E-06 0.1006E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001210 0.1371E-06 0.9932E-08 0.1375E-06 0.8418E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001211 0.1367E-06 0.7341E-08 0.1371E-06 0.1367E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001212 0.1366E-06 0.7963E-08 0.1370E-06 0.1849E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001213 0.1373E-06 0.9981E-08 0.1378E-06 0.2644E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001214 0.1398E-06 0.8246E-08 0.1404E-06 0.3642E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001215 0.1456E-06 0.9562E-08 0.1463E-06 0.3800E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001216 0.1546E-06 0.1027E-07 0.1320E-06 0.1136E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001217 0.1319E-06 0.8199E-08 0.1338E-06 0.1536E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001218 0.1310E-06 0.7215E-08 0.1350E-06 0.1945E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001219 0.1313E-06 0.9354E-08 0.1375E-06 0.2732E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001220 0.1335E-06 0.8435E-08 0.1426E-06 0.3802E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001221 0.1394E-06 0.1010E-07 0.1524E-06 0.3323E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001222 0.1492E-06 0.1057E-07 0.1280E-06 0.9847E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001223 0.1274E-06 0.7854E-08 0.1293E-06 0.1615E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001224 0.1266E-06 0.6914E-08 0.1304E-06 0.1720E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001225 0.1267E-06 0.9103E-08 0.1327E-06 0.2567E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001226 0.1287E-06 0.8173E-08 0.1371E-06 0.3391E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001227 0.1341E-06 
0.9810E-08 0.1460E-06 0.3082E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001228 0.1428E-06 0.1002E-07 0.1236E-06 0.8683E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001229 0.1231E-06 0.7398E-08 0.1247E-06 0.1528E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001230 0.1223E-06 0.6533E-08 0.1254E-06 0.1500E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001231 0.1223E-06 0.8466E-08 0.1271E-06 0.2289E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001232 0.1238E-06 0.7643E-08 0.1305E-06 0.2964E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001233 0.1280E-06 0.9079E-08 0.1378E-06 0.2749E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001234 0.1351E-06 0.9180E-08 0.1194E-06 0.7762E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001235 0.1190E-06 0.6878E-08 0.1202E-06 0.1412E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001236 0.1182E-06 0.6141E-08 0.1206E-06 0.1319E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001237 0.1180E-06 0.7796E-08 0.1218E-06 0.2036E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001238 0.1191E-06 0.7081E-08 0.1243E-06 0.2604E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001239 0.1223E-06 0.8306E-08 0.1300E-06 0.2441E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001240 0.1278E-06 0.8330E-08 0.1154E-06 0.7017E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001241 0.1149E-06 0.6379E-08 0.1159E-06 0.1289E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001242 0.1142E-06 0.5762E-08 0.1161E-06 0.1175E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001243 0.1139E-06 0.7173E-08 0.1170E-06 0.1813E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001244 0.1147E-06 0.6553E-08 0.1188E-06 0.2314E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001245 0.1171E-06 0.7598E-08 0.1233E-06 0.3356E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001246 0.1235E-06 0.9489E-08 0.1312E-06 0.9021E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001247 0.1109E-06 0.7581E-08 0.1114E-06 0.5746E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001248 0.1106E-06 0.5194E-08 0.1107E-06 0.1067E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001249 0.1103E-06 0.5669E-08 0.1104E-06 0.1130E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001250 0.1104E-06 0.5434E-08 0.1105E-06 0.1882E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001251 0.1113E-06 0.7450E-08 0.1113E-06 0.2362E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001252 0.1140E-06 0.6587E-08 0.1138E-06 0.3509E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001253 0.1197E-06 0.8046E-08 0.1193E-06 0.7720E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001254 0.1067E-06 0.6880E-08 0.1069E-06 0.5523E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001255 0.1062E-06 0.4860E-08 0.1065E-06 0.9736E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001256 0.1059E-06 0.5314E-08 
0.1062E-06 0.1111E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001257 0.1060E-06 0.5104E-08 0.1064E-06 0.1802E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001258 0.1069E-06 0.7061E-08 0.1073E-06 0.2330E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001259 0.1096E-06 0.6263E-08 0.1102E-06 0.3430E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001260 0.1153E-06 0.7694E-08 0.1161E-06 0.7730E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001261 0.1025E-06 0.6576E-08 0.1028E-06 0.5306E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001262 0.1021E-06 0.4557E-08 0.1024E-06 0.9919E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001263 0.1019E-06 0.5023E-08 0.1021E-06 0.1113E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001264 0.1021E-06 0.4810E-08 0.1023E-06 0.1834E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001265 0.1030E-06 0.6763E-08 0.1034E-06 0.2353E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001266 0.1059E-06 0.5972E-08 0.1064E-06 0.3475E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001267 0.1119E-06 0.7387E-08 0.1125E-06 0.7664E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001268 0.9856E-07 0.6341E-08 0.9877E-07 0.5368E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001269 0.9820E-07 0.4280E-08 0.9842E-07 0.9968E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001270 0.9799E-07 0.4775E-08 0.9825E-07 0.1133E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001271 0.9827E-07 0.4561E-08 0.9859E-07 0.1799E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001272 0.9943E-07 0.4941E-08 0.9986E-07 0.2375E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001273 0.1025E-06 0.5782E-08 0.1031E-06 0.3507E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001274 0.1090E-06 0.7086E-08 0.1098E-06 0.7668E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001275 0.9473E-07 0.6148E-08 0.9496E-07 0.5490E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001276 0.9443E-07 0.4017E-08 0.9465E-07 0.1016E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001277 0.9429E-07 0.4539E-08 0.9456E-07 0.1168E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001278 0.9469E-07 0.4319E-08 0.9503E-07 0.1849E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001279 0.9608E-07 0.4735E-08 0.9654E-07 0.2448E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001280 0.9959E-07 0.5591E-08 0.1002E-06 0.3594E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001281 0.1066E-06 0.6929E-08 0.1074E-06 0.7829E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001282 0.9108E-07 0.6006E-08 0.9130E-07 0.5619E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001283 0.9083E-07 0.3781E-08 0.9105E-07 0.1047E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001284 0.9077E-07 0.4346E-08 0.9104E-07 0.1207E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001285 0.9133E-07 0.4123E-08 0.9169E-07 
0.1915E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001286 0.9300E-07 0.4569E-08 0.9350E-07 0.2532E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001287 0.9702E-07 0.5490E-08 0.9771E-07 0.6661E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001288 0.8796E-07 0.4853E-08 0.8815E-07 0.3358E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001289 0.8762E-07 0.3543E-08 0.8781E-07 0.8476E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001290 0.8738E-07 0.3855E-08 0.8760E-07 0.8065E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001291 0.8751E-07 0.3722E-08 0.8779E-07 0.1509E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001292 0.8830E-07 0.5119E-08 0.8867E-07 0.1815E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001293 0.9064E-07 0.4574E-08 0.9115E-07 0.2827E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001294 0.9557E-07 0.5600E-08 0.9626E-07 0.5889E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001295 0.8457E-07 0.4958E-08 0.8476E-07 0.4950E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001296 0.8429E-07 0.3332E-08 0.8448E-07 0.8190E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001297 0.8417E-07 0.3746E-08 0.8440E-07 0.1025E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001298 0.8451E-07 0.3574E-08 0.8480E-07 0.1549E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001299 0.8572E-07 0.3918E-08 0.8612E-07 0.2129E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001300 0.8877E-07 0.4626E-08 0.8932E-07 0.3073E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001301 0.9493E-07 0.5742E-08 0.9564E-07 0.6912E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001302 0.8133E-07 0.5029E-08 0.8151E-07 0.4771E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001303 0.8112E-07 0.3148E-08 0.8130E-07 0.9415E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001304 0.8111E-07 0.3648E-08 0.8133E-07 0.1064E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001305 0.8169E-07 0.3461E-08 0.8198E-07 0.1727E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001306 0.8334E-07 0.3863E-08 0.8373E-07 0.2267E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001307 0.8721E-07 0.4690E-08 0.8773E-07 0.6058E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001308 0.7855E-07 0.4152E-08 0.7871E-07 0.3025E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001309 0.7827E-07 0.2956E-08 0.7842E-07 0.7850E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001310 0.7809E-07 0.3258E-08 0.7827E-07 0.7479E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001311 0.7831E-07 0.3140E-08 0.7853E-07 0.1409E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001312 0.7919E-07 0.4461E-08 0.7949E-07 0.1699E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001313 0.8166E-07 0.3966E-08 0.8206E-07 0.2648E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001314 0.8668E-07 0.4929E-08 0.8719E-07 0.5446E-08 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001315 0.7555E-07 0.4365E-08 0.7570E-07 0.4680E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001316 0.7533E-07 0.2786E-08 0.7548E-07 0.7751E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001317 0.7529E-07 0.3204E-08 0.7547E-07 0.9813E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001318 0.7574E-07 0.3045E-08 0.7596E-07 0.1484E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001319 0.7714E-07 0.3394E-08 0.7743E-07 0.2044E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001320 0.8043E-07 0.4095E-08 0.8082E-07 0.5090E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001321 0.7297E-07 0.3671E-08 0.7310E-07 0.2956E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001322 0.7270E-07 0.2622E-08 0.7283E-07 0.6805E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001323 0.7253E-07 0.2890E-08 0.7268E-07 0.7033E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001324 0.7271E-07 0.2785E-08 0.7289E-07 0.1257E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001325 0.7352E-07 0.3963E-08 0.7374E-07 0.1571E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001326 0.7574E-07 0.3524E-08 0.7603E-07 0.2404E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001327 0.8032E-07 0.4394E-08 0.8070E-07 0.5053E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001328 0.7018E-07 0.3898E-08 0.7031E-07 0.4109E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001329 0.6997E-07 0.2476E-08 0.7010E-07 0.7193E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001330 0.6993E-07 0.2851E-08 0.7008E-07 0.8843E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001331 0.7035E-07 0.2712E-08 0.7052E-07 0.1369E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001332 0.7162E-07 0.3027E-08 0.7184E-07 0.1866E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001333 0.7466E-07 0.3667E-08 0.7495E-07 0.1890E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001334 0.7947E-07 0.4066E-08 0.6751E-07 0.5383E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001335 0.6753E-07 0.3096E-08 0.6841E-07 0.7770E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001336 0.6712E-07 0.2533E-08 0.6907E-07 0.9559E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001337 0.6731E-07 0.3679E-08 0.7037E-07 0.1384E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001338 0.6856E-07 0.3233E-08 0.7302E-07 0.1902E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0001339 0.7182E-07 0.4083E-08 0.7812E-07 0.1720E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001340 0.7700E-07 0.4334E-08 0.6538E-07 0.4778E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001341 0.6527E-07 0.2982E-08 0.6613E-07 0.8021E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001342 0.6486E-07 0.2454E-08 0.6669E-07 0.8373E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001343 0.6496E-07 0.2636E-08 0.6786E-07 0.1285E-07 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 17 0001344 0.6605E-07 0.3171E-08 0.7014E-07 0.1706E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001345 0.6892E-07 0.3911E-08 0.7474E-07 0.1565E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001346 0.7353E-07 0.4089E-08 0.6320E-07 0.4343E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001347 0.6307E-07 0.2798E-08 0.6381E-07 0.7715E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001348 0.6267E-07 0.2332E-08 0.6422E-07 0.7492E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001349 0.6268E-07 0.2465E-08 0.6515E-07 0.1171E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001350 0.6354E-07 0.2974E-08 0.6697E-07 0.1531E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001351 0.6584E-07 0.3618E-08 0.7085E-07 0.1401E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001352 0.6969E-07 0.3759E-08 0.6110E-07 0.3969E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001353 0.6094E-07 0.2593E-08 0.6156E-07 0.7229E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001354 0.6055E-07 0.2197E-08 0.6182E-07 0.6711E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001355 0.6050E-07 0.2301E-08 0.6252E-07 0.1059E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001356 0.6115E-07 0.2760E-08 0.6392E-07 0.1371E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001357 0.6295E-07 0.3319E-08 0.6709E-07 0.1262E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001358 0.6608E-07 0.3425E-08 0.5907E-07 0.3660E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001359 0.5889E-07 0.2400E-08 0.5940E-07 0.6690E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001360 0.5851E-07 0.2064E-08 0.5956E-07 0.6076E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001361 0.5842E-07 0.2148E-08 0.6006E-07 0.9575E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001362 0.5890E-07 0.2553E-08 0.6114E-07 0.1238E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001363 0.6032E-07 0.3043E-08 0.6369E-07 0.1151E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001364 0.6286E-07 0.3122E-08 0.5710E-07 0.1538E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001365 0.6555E-07 0.3015E-08 0.5742E-07 0.9388E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001366 0.5662E-07 0.2924E-08 0.5885E-07 0.8586E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001367 0.5647E-07 0.2371E-08 0.6031E-07 0.1301E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001368 0.5743E-07 0.2952E-08 0.6257E-07 0.1051E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001369 0.5925E-07 0.2926E-08 0.5560E-07 0.1215E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001370 0.6118E-07 0.2566E-08 0.5567E-07 0.7306E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001371 0.5499E-07 0.2520E-08 0.5664E-07 0.8470E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001372 0.5480E-07 0.2168E-08 0.5752E-07 0.1064E-07 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 17 0001373 0.5537E-07 0.2594E-08 0.5914E-07 0.1426E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001374 0.5735E-07 0.3346E-08 0.6219E-07 0.1463E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001375 0.6018E-07 0.3317E-08 0.5363E-07 0.4389E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001376 0.5353E-07 0.2215E-08 0.5404E-07 0.6521E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001377 0.5316E-07 0.1798E-08 0.5428E-07 0.6744E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001378 0.5317E-07 0.2003E-08 0.5479E-07 0.9918E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001379 0.5380E-07 0.2402E-08 0.5594E-07 0.1329E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001380 0.5553E-07 0.3032E-08 0.5847E-07 0.1387E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001381 0.5807E-07 0.3042E-08 0.5179E-07 0.4122E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001382 0.5174E-07 0.2076E-08 0.5224E-07 0.6871E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001383 0.5137E-07 0.1721E-08 0.5243E-07 0.6593E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001384 0.5136E-07 0.1884E-08 0.5290E-07 0.9816E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001385 0.5195E-07 0.2275E-08 0.5392E-07 0.1306E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001386 0.5351E-07 0.2832E-08 0.5625E-07 0.1318E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001387 0.5587E-07 0.2826E-08 0.5005E-07 0.3970E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001388 0.5002E-07 0.1937E-08 0.5047E-07 0.6898E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001389 0.4965E-07 0.1633E-08 0.5063E-07 0.6382E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001390 0.4962E-07 0.1766E-08 0.5105E-07 0.9532E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001391 0.5016E-07 0.2131E-08 0.5194E-07 0.1262E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001392 0.5156E-07 0.2626E-08 0.5406E-07 0.1261E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001393 0.5372E-07 0.2610E-08 0.4838E-07 0.3838E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001394 0.4835E-07 0.1807E-08 0.4877E-07 0.6771E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001395 0.4800E-07 0.1545E-08 0.4890E-07 0.6174E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001396 0.4795E-07 0.1657E-08 0.4926E-07 0.9209E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001397 0.4844E-07 0.1993E-08 0.5006E-07 0.1219E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001398 0.4973E-07 0.2435E-08 0.5200E-07 0.1214E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001399 0.5171E-07 0.2415E-08 0.4676E-07 0.3720E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001400 0.4674E-07 0.1690E-08 0.4713E-07 0.6599E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001401 0.4639E-07 0.1462E-08 0.4724E-07 0.5986E-08 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 18 0001402 0.4634E-07 0.1558E-08 0.4757E-07 0.8907E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001403 0.4680E-07 0.1866E-08 0.4829E-07 0.1180E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001404 0.4800E-07 0.2267E-08 0.5009E-07 0.1178E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001405 0.4984E-07 0.2244E-08 0.4520E-07 0.3619E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001406 0.4518E-07 0.1585E-08 0.4555E-07 0.6428E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001407 0.4485E-07 0.1384E-08 0.4565E-07 0.5829E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001408 0.4480E-07 0.1469E-08 0.4595E-07 0.8649E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001409 0.4523E-07 0.1753E-08 0.4662E-07 0.1148E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001410 0.4637E-07 0.2120E-08 0.4832E-07 0.1148E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001411 0.4811E-07 0.2095E-08 0.4369E-07 0.3533E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001412 0.4368E-07 0.1492E-08 0.4403E-07 0.6273E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001413 0.4335E-07 0.1311E-08 0.4413E-07 0.5698E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001414 0.4330E-07 0.1388E-08 0.4441E-07 0.8431E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001415 0.4373E-07 0.1652E-08 0.4505E-07 0.1121E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001416 0.4482E-07 0.1992E-08 0.4667E-07 0.1124E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001417 0.4649E-07 0.1965E-08 0.4223E-07 0.3458E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001418 0.4223E-07 0.1408E-08 0.4257E-07 0.6136E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001419 0.4191E-07 0.1244E-08 0.4266E-07 0.5587E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001420 0.4187E-07 0.1314E-08 0.4294E-07 0.8243E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001421 0.4228E-07 0.1561E-08 0.4355E-07 0.1098E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001422 0.4335E-07 0.1877E-08 0.4512E-07 0.3058E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001423 0.4098E-07 0.1498E-08 0.4106E-07 0.2460E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001424 0.4081E-07 0.1074E-08 0.4081E-07 0.3730E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001425 0.4063E-07 0.1192E-08 0.4062E-07 0.3926E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001426 0.4052E-07 0.1123E-08 0.4051E-07 0.6160E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001427 0.4055E-07 0.1203E-08 0.4049E-07 0.7741E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001428 0.4085E-07 0.1349E-08 0.4077E-07 0.1156E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001429 0.4173E-07 0.1622E-08 0.4154E-07 0.9780E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001430 0.3922E-07 0.1756E-08 0.4297E-07 0.3860E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 18 0001431 0.3952E-07 0.1260E-08 0.3927E-07 0.5239E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001432 0.3959E-07 0.1084E-08 0.3899E-07 0.5518E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001433 0.3981E-07 0.1171E-08 0.3897E-07 0.7511E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001434 0.4037E-07 0.1354E-08 0.3934E-07 0.1033E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001435 0.4173E-07 0.1671E-08 0.4038E-07 0.2703E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001436 0.3811E-07 0.1346E-08 0.3812E-07 0.2044E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001437 0.3788E-07 0.9648E-09 0.3796E-07 0.3376E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001438 0.3771E-07 0.1080E-08 0.3781E-07 0.3704E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001439 0.3760E-07 0.1017E-08 0.3772E-07 0.5734E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001440 0.3759E-07 0.1089E-08 0.3778E-07 0.7401E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001441 0.3785E-07 0.1243E-08 0.3814E-07 0.1089E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001442 0.3858E-07 0.1496E-08 0.3908E-07 0.9615E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001443 0.3989E-07 0.1576E-08 0.3648E-07 0.3316E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001444 0.3645E-07 0.1158E-08 0.3675E-07 0.4450E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001445 0.3620E-07 0.9957E-09 0.3685E-07 0.5025E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001446 0.3618E-07 0.1075E-08 0.3710E-07 0.6824E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001447 0.3655E-07 0.1279E-08 0.3771E-07 0.9622E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001448 0.3756E-07 0.1576E-08 0.3912E-07 0.2474E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001449 0.3538E-07 0.1260E-08 0.3545E-07 0.1925E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001450 0.3524E-07 0.8664E-09 0.3524E-07 0.3159E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001451 0.3510E-07 0.9780E-09 0.3508E-07 0.3468E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001452 0.3502E-07 0.9190E-09 0.3499E-07 0.5396E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001453 0.3508E-07 0.9955E-09 0.3500E-07 0.6947E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001454 0.3543E-07 0.1142E-08 0.3528E-07 0.1025E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001455 0.3633E-07 0.1397E-08 0.3605E-07 0.8772E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001456 0.3387E-07 0.1539E-08 0.3742E-07 0.3337E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001457 0.3415E-07 0.1069E-08 0.3391E-07 0.4439E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001458 0.3425E-07 0.8949E-09 0.3366E-07 0.4826E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001459 0.3450E-07 0.9822E-09 0.3366E-07 0.6592E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 
0001460 0.3509E-07 0.1167E-08 0.3403E-07 0.9092E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001461 0.3645E-07 0.1468E-08 0.3504E-07 0.2348E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001462 0.3291E-07 0.1171E-08 0.3291E-07 0.1752E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001463 0.3271E-07 0.7830E-09 0.3278E-07 0.2974E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001464 0.3257E-07 0.8950E-09 0.3265E-07 0.3239E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001465 0.3249E-07 0.8399E-09 0.3259E-07 0.5073E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001466 0.3251E-07 0.9129E-09 0.3267E-07 0.6528E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001467 0.3280E-07 0.1067E-08 0.3304E-07 0.9649E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001468 0.3356E-07 0.1314E-08 0.3397E-07 0.8492E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001469 0.3487E-07 0.1410E-08 0.3149E-07 0.2955E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001470 0.3148E-07 0.9974E-09 0.3176E-07 0.3942E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001471 0.3126E-07 0.8281E-09 0.3188E-07 0.4472E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001472 0.3126E-07 0.9091E-09 0.3214E-07 0.6077E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001473 0.3163E-07 0.1109E-08 0.3275E-07 0.8542E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001474 0.3260E-07 0.1394E-08 0.3412E-07 0.2160E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001475 0.3055E-07 0.1104E-08 0.3061E-07 0.1682E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001476 0.3044E-07 0.7061E-09 0.3043E-07 0.2781E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001477 0.3032E-07 0.8175E-09 0.3030E-07 0.3072E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001478 0.3028E-07 0.7645E-09 0.3023E-07 0.4782E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001479 0.3037E-07 0.8428E-09 0.3026E-07 0.6181E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001480 0.3074E-07 0.9931E-09 0.3057E-07 0.9117E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001481 0.3166E-07 0.1242E-08 0.3135E-07 0.7840E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001482 0.2924E-07 0.1384E-08 0.3270E-07 0.2906E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001483 0.2952E-07 0.9276E-09 0.2928E-07 0.3886E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001484 0.2965E-07 0.7499E-09 0.2907E-07 0.4274E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001485 0.2991E-07 0.8380E-09 0.2908E-07 0.5851E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001486 0.3052E-07 0.1025E-08 0.2945E-07 0.8082E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001487 0.3185E-07 0.1312E-08 0.3041E-07 0.2055E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001488 0.2842E-07 0.1035E-08 0.2841E-07 0.1549E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001489 
0.2825E-07 0.6408E-09 0.2832E-07 0.2627E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001490 0.2814E-07 0.7526E-09 0.2821E-07 0.2877E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001491 0.2808E-07 0.7028E-09 0.2817E-07 0.4508E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001492 0.2813E-07 0.7778E-09 0.2828E-07 0.5816E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001493 0.2844E-07 0.9340E-09 0.2866E-07 0.8596E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001494 0.2922E-07 0.1174E-08 0.2960E-07 0.1830E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001495 0.2730E-07 0.9764E-09 0.2733E-07 0.1363E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001496 0.2718E-07 0.6040E-09 0.2719E-07 0.2467E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001497 0.2708E-07 0.7156E-09 0.2709E-07 0.2735E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001498 0.2705E-07 0.6693E-09 0.2706E-07 0.4373E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001499 0.2717E-07 0.7538E-09 0.2714E-07 0.5695E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001500 0.2758E-07 0.9145E-09 0.2753E-07 0.8480E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001501 0.2857E-07 0.1172E-08 0.2847E-07 0.1836E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001502 0.2625E-07 0.9683E-09 0.2626E-07 0.1380E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001503 0.2613E-07 0.5759E-09 0.2615E-07 0.2515E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001504 0.2604E-07 0.6997E-09 0.2606E-07 0.2818E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001505 0.2603E-07 0.6502E-09 0.2605E-07 0.4474E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001506 0.2616E-07 0.7414E-09 0.2618E-07 0.5843E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001507 0.2662E-07 0.9147E-09 0.2664E-07 0.8642E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001508 0.2766E-07 0.1182E-08 0.2770E-07 0.1846E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001509 0.2524E-07 0.9781E-09 0.2525E-07 0.1375E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001510 0.2513E-07 0.5491E-09 0.2514E-07 0.2534E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001511 0.2506E-07 0.6828E-09 0.2507E-07 0.2840E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001512 0.2507E-07 0.6327E-09 0.2507E-07 0.4537E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001513 0.2525E-07 0.7322E-09 0.2524E-07 0.5926E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001514 0.2580E-07 0.9194E-09 0.2577E-07 0.8770E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001515 0.2700E-07 0.1201E-08 0.2694E-07 0.1871E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001516 0.2427E-07 0.9908E-09 0.2428E-07 0.1415E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001517 0.2417E-07 0.5252E-09 0.2418E-07 0.2598E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001518 0.2411E-07 
0.6736E-09 0.2412E-07 0.2938E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001519 0.2416E-07 0.6207E-09 0.2416E-07 0.4671E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001520 0.2439E-07 0.7310E-09 0.2438E-07 0.6111E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001521 0.2503E-07 0.9333E-09 0.2501E-07 0.8988E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001522 0.2637E-07 0.1230E-08 0.2634E-07 0.1908E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001523 0.2333E-07 0.1014E-08 0.2334E-07 0.1438E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001524 0.2325E-07 0.5032E-09 0.2326E-07 0.2656E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001525 0.2321E-07 0.6660E-09 0.2321E-07 0.3012E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001526 0.2329E-07 0.6117E-09 0.2328E-07 0.4798E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001527 0.2359E-07 0.7322E-09 0.2356E-07 0.6279E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001528 0.2435E-07 0.9509E-09 0.2430E-07 0.6233E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001529 0.2242E-07 0.1086E-08 0.2551E-07 0.1917E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001530 0.2266E-07 0.7279E-09 0.2246E-07 0.2822E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001531 0.2280E-07 0.5441E-09 0.2231E-07 0.3092E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001532 0.2308E-07 0.6336E-09 0.2234E-07 0.4583E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001533 0.2369E-07 0.7980E-09 0.2265E-07 0.6145E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001534 0.2495E-07 0.1056E-08 0.2349E-07 0.1587E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001535 0.2181E-07 0.8377E-09 0.2179E-07 0.1023E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001536 0.2168E-07 0.4530E-09 0.2173E-07 0.2058E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001537 0.2161E-07 0.5580E-09 0.2166E-07 0.2116E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001538 0.2160E-07 0.5220E-09 0.2167E-07 0.3556E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001539 0.2171E-07 0.5958E-09 0.2182E-07 0.4493E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001540 0.2210E-07 0.7557E-09 0.2228E-07 0.6826E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001541 0.2298E-07 0.9802E-09 0.2328E-07 0.1400E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001542 0.2095E-07 0.8274E-09 0.2097E-07 0.1141E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001543 0.2088E-07 0.4297E-09 0.2087E-07 0.1970E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001544 0.2083E-07 0.5478E-09 0.2082E-07 0.2331E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001545 0.2087E-07 0.5089E-09 0.2084E-07 0.3627E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001546 0.2107E-07 0.6025E-09 0.2101E-07 0.4859E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001547 0.2164E-07 0.7724E-09 
0.2153E-07 0.7129E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001548 0.2283E-07 0.1026E-08 0.2265E-07 0.1550E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001549 0.2015E-07 0.8544E-09 0.2015E-07 0.1141E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001550 0.2008E-07 0.4149E-09 0.2009E-07 0.2168E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001551 0.2005E-07 0.5519E-09 0.2005E-07 0.2450E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001552 0.2012E-07 0.5099E-09 0.2012E-07 0.3943E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001553 0.2039E-07 0.6129E-09 0.2038E-07 0.5170E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001554 0.2106E-07 0.8044E-09 0.2104E-07 0.7633E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001555 0.2241E-07 0.1072E-08 0.2239E-07 0.1603E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001556 0.1938E-07 0.8966E-09 0.1939E-07 0.1238E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001557 0.1933E-07 0.4003E-09 0.1933E-07 0.2278E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001558 0.1932E-07 0.5574E-09 0.1931E-07 0.2643E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001559 0.1944E-07 0.5123E-09 0.1943E-07 0.4194E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001560 0.1980E-07 0.6307E-09 0.1977E-07 0.5540E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001561 0.2065E-07 0.8412E-09 0.2059E-07 0.1416E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001562 0.1872E-07 0.7046E-09 0.1872E-07 0.7479E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001563 0.1865E-07 0.3745E-09 0.1865E-07 0.1884E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001564 0.1860E-07 0.4746E-09 0.1860E-07 0.1806E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001565 0.1864E-07 0.4465E-09 0.1864E-07 0.3275E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001566 0.1883E-07 0.5188E-09 0.1881E-07 0.4035E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001567 0.1934E-07 0.6803E-09 0.1931E-07 0.6288E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001568 0.2041E-07 0.8938E-09 0.2037E-07 0.1250E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001569 0.1801E-07 0.7693E-09 0.1801E-07 0.1120E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001570 0.1795E-07 0.3591E-09 0.1795E-07 0.1848E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001571 0.1793E-07 0.4871E-09 0.1793E-07 0.2318E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001572 0.1802E-07 0.4491E-09 0.1801E-07 0.3515E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001573 0.1831E-07 0.5510E-09 0.1829E-07 0.4812E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001574 0.1902E-07 0.7265E-09 0.1898E-07 0.4688E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001575 0.1730E-07 0.8505E-09 0.2010E-07 0.1436E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001576 0.1751E-07 0.5640E-09 0.1733E-07 
0.2065E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001577 0.1765E-07 0.4055E-09 0.1723E-07 0.2427E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001578 0.1794E-07 0.4803E-09 0.1727E-07 0.3544E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001579 0.1853E-07 0.6212E-09 0.1755E-07 0.4860E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001580 0.1970E-07 0.8314E-09 0.1832E-07 0.4381E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001581 0.1675E-07 0.9228E-09 0.1953E-07 0.1326E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001582 0.1693E-07 0.5492E-09 0.1676E-07 0.2196E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001583 0.1706E-07 0.4020E-09 0.1665E-07 0.2246E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001584 0.1733E-07 0.4704E-09 0.1667E-07 0.3392E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001585 0.1785E-07 0.6162E-09 0.1694E-07 0.4494E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001586 0.1893E-07 0.8181E-09 0.1765E-07 0.4167E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001587 0.1618E-07 0.8805E-09 0.1875E-07 0.1260E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001588 0.1636E-07 0.5224E-09 0.1620E-07 0.2206E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001589 0.1646E-07 0.3867E-09 0.1609E-07 0.2115E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001590 0.1668E-07 0.4460E-09 0.1610E-07 0.3236E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001591 0.1712E-07 0.5838E-09 0.1633E-07 0.4231E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001592 0.1806E-07 0.7683E-09 0.1694E-07 0.3924E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001593 0.1564E-07 0.8153E-09 0.1790E-07 0.1208E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001594 0.1580E-07 0.4880E-09 0.1566E-07 0.2160E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001595 0.1587E-07 0.3679E-09 0.1555E-07 0.1998E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001596 0.1606E-07 0.4204E-09 0.1555E-07 0.3067E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001597 0.1642E-07 0.5472E-09 0.1575E-07 0.3987E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001598 0.1724E-07 0.7152E-09 0.1627E-07 0.3743E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001599 0.1512E-07 0.7494E-09 0.1711E-07 0.1169E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001600 0.1526E-07 0.4542E-09 0.1513E-07 0.2099E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001601 0.1532E-07 0.3481E-09 0.1503E-07 0.1912E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001602 0.1547E-07 0.3952E-09 0.1502E-07 0.2922E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001603 0.1577E-07 0.5103E-09 0.1520E-07 0.3801E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001604 0.1649E-07 0.6632E-09 0.1566E-07 0.3608E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001605 0.1462E-07 0.6880E-09 0.1640E-07 0.1139E-08 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001606 0.1474E-07 0.4228E-09 0.1463E-07 0.2042E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001607 0.1479E-07 0.3292E-09 0.1453E-07 0.1850E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001608 0.1492E-07 0.3720E-09 0.1452E-07 0.2808E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001609 0.1518E-07 0.4765E-09 0.1467E-07 0.3665E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001610 0.1582E-07 0.6161E-09 0.1509E-07 0.3518E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001611 0.1413E-07 0.6337E-09 0.1576E-07 0.1117E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001612 0.1425E-07 0.3949E-09 0.1414E-07 0.1996E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001613 0.1428E-07 0.3120E-09 0.1404E-07 0.1807E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001614 0.1440E-07 0.3511E-09 0.1403E-07 0.2724E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001615 0.1464E-07 0.4464E-09 0.1418E-07 0.3568E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001616 0.1522E-07 0.5746E-09 0.1457E-07 0.3460E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001617 0.1366E-07 0.5866E-09 0.1518E-07 0.1102E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001618 0.1377E-07 0.3703E-09 0.1367E-07 0.1962E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001619 0.1381E-07 0.2964E-09 0.1357E-07 0.1779E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001620 0.1391E-07 0.3322E-09 0.1356E-07 0.2663E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001621 0.1413E-07 0.4199E-09 0.1371E-07 0.3502E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001622 0.1468E-07 0.5381E-09 0.1408E-07 0.9602E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001623 0.1328E-07 0.4122E-09 0.1327E-07 0.7725E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001624 0.1320E-07 0.2423E-09 0.1322E-07 0.1193E-08 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001625 0.1314E-07 0.2914E-09 0.1316E-07 0.1268E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001626 0.1311E-07 0.2702E-09 0.1314E-07 0.1994E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001627 0.1311E-07 0.3037E-09 0.1317E-07 0.2526E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001628 0.1323E-07 0.3698E-09 0.1330E-07 0.3777E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001629 0.1354E-07 0.4716E-09 0.1366E-07 0.3257E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001630 0.1409E-07 0.5177E-09 0.1270E-07 0.1213E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001631 0.1270E-07 0.3420E-09 0.1281E-07 0.1670E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001632 0.1261E-07 0.2684E-09 0.1285E-07 0.1823E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001633 0.1261E-07 0.3048E-09 0.1296E-07 0.2492E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001634 0.1276E-07 0.3848E-09 0.1319E-07 0.3461E-08 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001635 0.1316E-07 0.4991E-09 0.1374E-07 0.8705E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 22 0001636 0.1233E-07 0.3893E-09 0.1234E-07 0.6876E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001637 0.1228E-07 0.2219E-09 0.1227E-07 0.1130E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001638 0.1224E-07 0.2726E-09 0.1222E-07 0.1259E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001639 0.1222E-07 0.2517E-09 0.1220E-07 0.1952E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001640 0.1227E-07 0.2868E-09 0.1222E-07 0.2536E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001641 0.1244E-07 0.3537E-09 0.1235E-07 0.3734E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001642 0.1285E-07 0.4557E-09 0.1269E-07 0.8039E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001643 0.1186E-07 0.3726E-09 0.1186E-07 0.6067E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001644 0.1180E-07 0.2114E-09 0.1181E-07 0.1103E-08 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19 0001645 0.1176E-07 0.2639E-09 0.1177E-07 0.1223E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001646 0.1175E-07 0.2440E-09 0.1176E-07 0.1961E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001647 0.1179E-07 0.2829E-09 0.1181E-07 0.2550E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001648 0.1198E-07 0.3546E-09 0.1201E-07 0.3793E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001649 0.1241E-07 0.4638E-09 0.1246E-07 0.8208E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19 0001650 0.1140E-07 0.3799E-09 0.1140E-07 0.6329E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001651 0.1135E-07 0.2040E-09 0.1135E-07 0.1148E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001652 0.1131E-07 0.2638E-09 0.1132E-07 0.1297E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001653 0.1132E-07 0.2421E-09 0.1132E-07 0.2056E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001654 0.1140E-07 0.2859E-09 0.1140E-07 0.2690E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001655 0.1163E-07 0.3648E-09 0.1163E-07 0.3961E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001656 0.1215E-07 0.4815E-09 0.1213E-07 0.8501E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 1354 0001657 0.1096E-07 0.3949E-09 0.1096E-07 0.6489E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001658 0.1092E-07 0.1972E-09 0.1092E-07 0.1194E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001659 0.1089E-07 0.2641E-09 0.1090E-07 0.1348E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001660 0.1091E-07 0.2416E-09 0.1092E-07 0.2150E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001661 0.1101E-07 0.2905E-09 0.1102E-07 0.2806E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001662 0.1130E-07 0.3780E-09 0.1132E-07 0.4124E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 21 0001663 0.1190E-07 
0.5026E-09 0.1192E-07 0.8804E-09 0.0000E+00
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20
  0001664 0.1054E-07 0.4124E-09 0.1055E-07 0.6829E-09 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 19
  0001665 0.1050E-07 0.1914E-09 0.1051E-07 0.1250E-08 0.0000E+00
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19
  0001666 0.1049E-07 0.2671E-09 0.1049E-07 0.1424E-08 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 18
  0001667 0.1053E-07 0.2432E-09 0.1053E-07 0.2260E-08 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 18
  0001668 0.1067E-07 0.2982E-09 0.1067E-07 0.2953E-08 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 22
  0001669 0.1102E-07 0.3944E-09 0.1103E-07 0.7625E-09 0.0000E+00
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 34
  0001670 0.1018E-07 0.3263E-09 0.1019E-07 0.3999E-09 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 19
  0001671 0.1014E-07 0.1790E-09 0.1014E-07 0.1009E-08 0.0000E+00
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19
  0001672 0.1011E-07 0.2263E-09 0.1011E-07 0.9446E-09 0.0000E+00
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19
  0001673 0.1011E-07 0.2117E-09 0.1011E-07 0.1723E-08 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 18
  0001674 0.1017E-07 0.2452E-09 0.1018E-07 0.2096E-08 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 18
  0001675 0.1038E-07 0.3179E-09 0.1039E-07 0.3274E-08 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 21
  0001676 0.1082E-07 0.4166E-09 0.1084E-07 0.6585E-09 0.0000E+00
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19
  0001677 0.9792E-08 0.3539E-09 0.9797E-08 0.6003E-09 0.0000E+00

TIME FOR CALCULATION: 0.2333E+05
L2-NORM ERROR U VELOCITY    2.794552122239916E-005
L2-NORM ERROR V VELOCITY    2.790982482205062E-005
L2-NORM ERROR W VELOCITY    2.911527032104786E-005
L2-NORM ERROR ABS. VELOCITY 3.166182228011700E-005
L2-NORM ERROR PRESSURE      1.392953546653255E-003

*** CALCULATION FINISHED - SEE RESULTS ***
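The MOMENTUM and PRESCORR stages that appear in the performance summary below are user-defined logging stages, not built-in PETSc stages; the summary's own legend points at PetscLogStagePush()/PetscLogStagePop(). A minimal sketch of how such stages could be registered and pushed around the two solves follows. Only the stage names are taken from the log; the surrounding structure, variable names, and comments are illustrative and not from caffa3d itself.

    /* Minimal sketch (assumed structure, not the caffa3d source): register the
       MOMENTUM and PRESCORR stages so -log_summary reports them separately. */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      PetscLogStage stage_momentum, stage_prescorr;

      PetscInitialize(&argc, &argv, NULL, NULL);

      PetscLogStageRegister("MOMENTUM", &stage_momentum);
      PetscLogStageRegister("PRESCORR", &stage_prescorr);

      /* inside the outer SIMPLE loop: */
      PetscLogStagePush(stage_momentum);
      /* ... assemble and KSPSolve() the momentum systems here ... */
      PetscLogStagePop();

      PetscLogStagePush(stage_prescorr);
      /* ... assemble and KSPSolve() the pressure-correction system here ... */
      PetscLogStagePop();

      PetscFinalize();   /* with -log_summary, the per-stage tables are printed here */
      return 0;
    }

Events executed outside any pushed stage end up in "Main Stage", which is why the setup work appears under stage 0 in the tables below.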
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./caffa3d.MB.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0075 with 1 processor, by gu08vomo Fri Feb 20 03:10:17 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015

                         Max       Max/Min        Avg      Total
Time (sec):           2.334e+04      1.00000   2.334e+04
Objects:              9.246e+04      1.00000   9.246e+04
Flops:                1.177e+13      1.00000   1.177e+13  1.177e+13
Flops/sec:            5.046e+08      1.00000   5.046e+08  5.046e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 6.3490e+03  27.2%  6.5910e+06   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 1:        MOMENTUM: 2.5123e+03  10.8%  1.4158e+12  12.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 2:        PRESCORR: 1.4475e+04  62.0%  1.0359e+13  88.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage ThreadCommRunKer 8386 1.0 1.1347e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNorm 1 1.0 2.3177e-01 1.0 4.39e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 67 0 0 0 19 VecScale 1 1.0 2.4431e-03 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 899 VecSet 67086 1.0 1.3405e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 2 0 0 0 0 0 VecScatterBegin 75476 1.0 3.7952e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 1 1.0 2.4450e-03 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 33 0 0 0 899 MatAssemblyBegin 3354 1.0 2.0044e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 3354 1.0 7.7460e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 --- Event Stage 1: MOMENTUM VecMDot 6511 1.0 1.6014e+01 1.0 3.51e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 2 0 0 0 2193 VecNorm 21604 1.0 3.0017e+01 1.0 9.49e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 7 0 0 0 3163 VecScale 11542 1.0 1.5715e+01 1.0 2.54e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 2 0 0 0 1614 VecCopy 10062 1.0 3.4057e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 VecSet 5031 1.0 4.6499e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 15093 1.0 4.9896e+01 1.0 6.63e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 2 5 0 0 0 1329 VecMAXPY 11542 1.0 3.5569e+01 1.0 6.37e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 5 0 0 0 1792 VecNormalize 11542 1.0 2.9960e+01 1.0 7.61e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 5 0 0 0 2539 MatMult 16573 1.0 4.5783e+02 1.0 4.50e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 18 32 0 0 0 983 MatSolve 16573 1.0 4.7564e+02 1.0 4.50e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 19 32 0 0 0 947 MatLUFactorNum 5031 1.0 5.4508e+02 1.0 2.30e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 22 16 0 0 0 422 MatILUFactorSym 5031 1.0 4.7919e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 19 0 0 0 0 0 MatGetRowIJ 5031 1.0 8.9526e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 5031 1.0 4.4503e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 0 0 0 0 0 KSPGMRESOrthog 6511 1.0 3.5106e+01 1.0 7.02e+10 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 5 0 0 0 2000 KSPSetUp 5031 1.0 1.3708e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 5 0 0 0 0 0 KSPSolve 5031 1.0 2.1856e+03 1.0 1.23e+12 1.0 0.0e+00 0.0e+00 0.0e+00 9 10 0 0 0 87 87 0 0 0 565 PCSetUp 5031 1.0 1.0748e+03 1.0 2.30e+11 1.0 0.0e+00 0.0e+00 0.0e+00 5 2 0 0 0 43 16 0 0 0 214 PCApply 16573 1.0 4.7568e+02 1.0 4.50e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 19 32 0 0 0 946 --- Event Stage 2: PRESCORR VecMDot 26952 1.0 9.9884e+01 1.0 1.19e+11 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 1 0 0 0 1187 VecTDot 50500 1.0 1.8378e+02 1.0 2.22e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 1207 VecNorm 28632 1.0 
6.0194e+01 1.0 1.26e+11 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2089 VecScale 33 1.0 1.6309e-02 1.0 2.63e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1611 VecCopy 5034 1.0 1.7482e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 252460 1.0 5.7891e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 52170 1.0 2.2045e+02 1.0 2.29e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 2 2 0 0 0 1040 VecAYPX 104334 1.0 2.1384e+02 1.0 1.68e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 2 0 0 0 785 VecMAXPY 26955 1.0 1.2491e+02 1.0 1.19e+11 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 950 VecAssemblyBegin 3 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 3 1.0 1.9073e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 26955 1.0 7.0147e-02 1.0 2.70e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 385 VecSetRandom 3 1.0 2.3869e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 33 1.0 3.1275e-02 1.0 7.88e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2520 MatMult 107718 1.0 1.6463e+03 1.0 1.80e+12 1.0 0.0e+00 0.0e+00 0.0e+00 7 15 0 0 0 11 17 0 0 0 1091 MatMultAdd 80766 1.0 5.6387e+02 1.0 4.34e+11 1.0 0.0e+00 0.0e+00 0.0e+00 2 4 0 0 0 4 4 0 0 0 770 MatMultTranspose 134610 1.0 6.3971e+02 1.0 4.34e+11 1.0 0.0e+00 0.0e+00 0.0e+00 3 4 0 0 0 4 4 0 0 0 679 MatSOR 161532 1.0 8.1896e+03 1.0 6.21e+12 1.0 0.0e+00 0.0e+00 0.0e+00 35 53 0 0 0 57 60 0 0 0 759 MatConvert 1682 1.0 1.4112e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatScale 9 1.0 6.8224e-02 1.0 5.81e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 851 MatResidual 80766 1.0 1.0138e+03 1.0 1.13e+12 1.0 0.0e+00 0.0e+00 0.0e+00 4 10 0 0 0 7 11 0 0 0 1114 MatAssemblyBegin 6738 1.0 2.8052e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 6738 1.0 8.4092e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRow 9597518 1.0 5.0723e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCoarsen 3 1.0 1.8700e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 6 1.0 6.3229e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAXPY 3 1.0 4.1201e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMult 3 1.0 4.6048e-01 1.0 4.98e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 108 MatMatMultSym 3 1.0 3.0840e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMultNum 3 1.0 1.5204e-01 1.0 4.98e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 328 MatPtAP 5031 1.0 2.4475e+03 1.0 4.98e+11 1.0 0.0e+00 0.0e+00 0.0e+00 10 4 0 0 0 17 5 0 0 0 204 MatPtAPSymbolic 6 1.0 1.8564e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatPtAPNumeric 5031 1.0 2.4456e+03 1.0 4.98e+11 1.0 0.0e+00 0.0e+00 0.0e+00 10 4 0 0 0 17 5 0 0 0 204 MatTrnMatMult 3 1.0 6.2841e+00 1.0 6.33e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 101 MatTrnMatMultSym 3 1.0 2.9903e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatTrnMatMultNum 3 1.0 3.2938e+00 1.0 6.33e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 192 MatGetSymTrans 9 1.0 2.8709e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPGMRESOrthog 30 1.0 1.8173e-01 1.0 5.25e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2891 KSPSetUp 11742 1.0 1.6553e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1677 1.0 1.4421e+04 
1.0 1.03e+13 1.0 0.0e+00 0.0e+00 0.0e+00 62 87 0 0 0 100 99 0 0 0 714 PCGAMGgraph_AGG 3 1.0 1.4958e+00 1.0 4.19e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 28 PCGAMGcoarse_AGG 3 1.0 6.7805e+00 1.0 6.33e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 93 PCGAMGProl_AGG 3 1.0 4.9031e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCGAMGPOpt_AGG 3 1.0 2.0095e+00 1.0 1.14e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 570 PCSetUp 3354 1.0 2.4591e+03 1.0 5.00e+11 1.0 0.0e+00 0.0e+00 0.0e+00 11 4 0 0 0 17 5 0 0 0 203 PCSetUpOnBlocks 26922 1.0 5.5923e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCApply 26922 1.0 1.0459e+04 1.0 8.21e+12 1.0 0.0e+00 0.0e+00 0.0e+00 45 70 0 0 0 72 79 0 0 0 785 --- Event Stage 3: Unknown ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Vector 67 95 833878448 0 Vector Scatter 2 2 1304 0 Index Set 4 7 17581544 0 IS L to G Mapping 2 2 17577192 0 Matrix 1 11 490350860 0 Matrix Null Space 0 1 620 0 Krylov Solver 0 7 115533784 0 Preconditioner 0 7 7436 0 --- Event Stage 1: MOMENTUM Vector 60376 60362 530554454928 0 Index Set 15093 15090 88419231280 0 Matrix 5031 5030 1063010020000 0 Matrix Null Space 1 0 0 0 Krylov Solver 2 0 0 0 Preconditioner 2 0 0 0 --- Event Stage 2: PRESCORR Vector 8478 8453 434897736 0 Index Set 3 3 2376 0 Matrix 3380 3371 687941176 0 Matrix Coarsen 3 3 1932 0 Krylov Solver 8 3 90648 0 Preconditioner 8 3 3144 0 PetscRandom 3 3 1920 0 Viewer 1 0 0 0 --- Event Stage 3: Unknown ======================================================================================================================== Average time to get PetscTime(): 0 #PETSc Option Table entries: -log_summary -momentum_ksp_type gmres -options_left -pressure_ksp_converged_reason -pressure_mg_coarse_sub_pc_type svd -pressure_mg_levels_ksp_rtol 1e-4 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_pc_gamg_agg_nsmooths 1 -pressure_pc_gamg_reuse_interpolation true -pressure_pc_type gamg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml ----------------------------------------- Libraries compiled on Sun Feb 1 16:09:22 2015 on hla0003 Machine characteristics: Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64 Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3 Using PETSc arch: arch-openmpi-opt-intel-hlr-ext ----------------------------------------- Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc -fPIC -wd1572 -O3 -xHost ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 -fPIC -O3 -xHost ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: 
-I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include ----------------------------------------- Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90 Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 
-Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl ----------------------------------------- #PETSc Option Table entries: -log_summary -momentum_ksp_type gmres -options_left -pressure_ksp_converged_reason -pressure_mg_coarse_sub_pc_type svd -pressure_mg_levels_ksp_rtol 1e-4 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_pc_gamg_agg_nsmooths 1 -pressure_pc_gamg_reuse_interpolation true -pressure_pc_type gamg #End of PETSc Option Table entries There are no unused options. -------------- next part -------------- Sender: LSF System Subject: Job 541300: in cluster Done Job was submitted from host by user in cluster . Job was executed on host(s) , in queue , as user in cluster . was used as the home directory. was used as the working directory. Started at Thu Feb 19 10:46:15 2015 Results reported at Thu Feb 19 22:43:07 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input #! 
/bin/sh #BSUB -J mg_test #BSUB -o /home/gu08vomo/thesis/mgtest/gamg.128.out.%J #BSUB -n 1 #BSUB -W 14:00 #BSUB -x #BSUB -q test_mpi2 #BSUB -a openmpi module load openmpi/intel/1.8.2 #export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext export MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg/ export OUTPUTDIR=/home/gu08vomo/thesis/coupling export PETSC_OPS="-options_file ops.gamg.old" cat ops.gamg echo "PETSC_DIR="$PETSC_DIR echo "MYWORKDIR="$MYWORKDIR cd $MYWORKDIR mpirun -n 1 ./caffa3d.MB.lnx ${PETSC_OPS} ------------------------------------------------------------ Successfully completed. Resource usage summary: CPU time : 43014.29 sec. Max Memory : 2905 MB Average Memory : 2235.03 MB Total Requested Memory : - Delta Memory : - (Delta: the difference between total requested memory and actual max usage.) Max Swap : 3659 MB Max Processes : 6 Max Threads : 11 The output (if any) follows: Modules: loading openmpi/intel/1.8.2 -momentum_ksp_type gmres -pressure_pc_type gamg -pressure_mg_coarse_sub_pc_type svd -pressure_pc_gamg_agg_nsmooths 1 -pressure_mg_levels_ksp_type richardson -pressure_mg_levels_pc_type sor -pressure_mg_levels_ksp_rtol 1e-4 -pressure_pc_gamg_reuse_interpolation true -log_summary -options_left -pressure_ksp_converged_reason PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext MYWORKDIR=/work/scratch/gu08vomo/thesis/singleblock/128_1_1_seg/ ENTER PROBLEM NAME (SIX CHARACTERS): *************************************************** NAME OF PROBLEM SOLVED control *************************************************** *************************************************** CONTROL SETTINGS *************************************************** LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD F F F F F F F F IMON, JMON, KMON, MMON, RMON, IPR, JPR, KPR, MPR,NPCOR,NIGRAD 8 9 8 1 0 2 2 3 1 1 1 SORMAX, SLARGE, ALFA 0.1000E-07 0.1000E+31 0.9200E+00 (URF(I),I=1,5) 0.9000E+00 0.9000E+00 0.9000E+00 0.1000E+00 0.1000E+01 (SOR(I),I=1,5) 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 0.1000E+00 (GDS(I),I=1,5) - BLENDING (CDS-UDS) 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 LSG 100000 *************************************************** START SIMPLE RELAXATIONS *************************************************** Linear solve converged due to CONVERGED_RTOL iterations 2 KSP Object:(pressure_) 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=0.1, absolute=1e-50, divergence=10000 left preconditioning has attached null space using PRECONDITIONED norm type for convergence test PC Object:(pressure_) 1 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=4 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (pressure_mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (pressure_mg_coarse_) 1 MPI processes type: bjacobi block Jacobi: number of blocks = 1 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (pressure_mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for 
convergence test
        PC Object: (pressure_mg_coarse_sub_) 1 MPI processes
          type: svd
          linear system matrix = precond matrix:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=26, cols=26
            total: nonzeros=536, allocated nonzeros=536
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=26, cols=26
        total: nonzeros=536, allocated nonzeros=536
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (pressure_mg_levels_1_) 1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=0.0001, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (pressure_mg_levels_1_) 1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=2781, cols=2781
          total: nonzeros=156609, allocated nonzeros=156609
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object: (pressure_mg_levels_2_) 1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=0.0001, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (pressure_mg_levels_2_) 1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=188698, cols=188698
          total: nonzeros=6.12809e+06, allocated nonzeros=6.12809e+06
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object: (pressure_mg_levels_3_) 1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=0.0001, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (pressure_mg_levels_3_) 1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=2197000, cols=2197000
          total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=2197000, cols=2197000
      total: nonzeros=1.46816e+07, allocated nonzeros=1.46816e+07
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
 0000001 0.1000E+01 0.1000E+01 0.1000E+01 0.1000E+01 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 2
 0000002 0.7654E+00 0.7194E+00 0.7661E+00 0.7330E+00 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 3
 0000003 0.4442E+00 0.3597E+00 0.4375E+00 0.2886E+00 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 5
 0000004 0.1641E+00 0.1296E+00 0.1648E+00 0.4463E-01 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 5
 0000005 0.5112E-01 0.3598E-01 0.5166E-01 0.3104E-01 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 6
 0000006 0.1957E-01 0.7820E-02 0.1983E-01 0.1447E-01 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 6
 0000007 0.1497E-01 0.7200E-02 0.1495E-01 0.8590E-02 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 6
 0000008 0.1272E-01 0.5662E-02 0.1269E-01 0.8207E-02 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 7
[... 574 further time steps (0000009 through 0000582) follow in the same format; the per-step iteration count rises gradually from 7 to 13 while the reported residual norms fall from about 1e-2 to about 1e-5 ...]
 0000583 0.9494E-05 0.3359E-05 0.1055E-04 0.1258E-05 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 13
 0000584 0.9519E-05 0.3545E-05 0.9436E-05 0.1778E-05 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 13
 0000585 0.9571E-05 0.3755E-05 0.9390E-05 0.1653E-05 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 13
 0000586 0.9755E-05 0.3225E-05 0.9472E-05 0.2291E-05 0.0000E+00
Linear solve converged due to CONVERGED_RTOL iterations 13
 0000587 0.1010E-04 0.3451E-05
0.9746E-05 0.2300E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000588 0.9086E-05 0.3201E-05 0.1011E-04 0.1215E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000589 0.9112E-05 0.3383E-05 0.9032E-05 0.1719E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000590 0.9163E-05 0.3586E-05 0.8988E-05 0.1592E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000591 0.9341E-05 0.3073E-05 0.9069E-05 0.2207E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000592 0.9679E-05 0.3293E-05 0.9336E-05 0.2209E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000593 0.8696E-05 0.3051E-05 0.9688E-05 0.1176E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000594 0.8723E-05 0.3228E-05 0.8646E-05 0.1111E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000595 0.8756E-05 0.2926E-05 0.8594E-05 0.1255E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000596 0.8855E-05 0.2972E-05 0.8607E-05 0.1978E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000597 0.9121E-05 0.3217E-05 0.8789E-05 0.1879E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000598 0.8325E-05 0.2894E-05 0.9055E-05 0.1063E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000599 0.8330E-05 0.3009E-05 0.8276E-05 0.1374E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000600 0.8354E-05 0.3164E-05 0.8230E-05 0.1317E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000601 0.8474E-05 0.2779E-05 0.8276E-05 0.1818E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000602 0.8723E-05 0.2941E-05 0.8464E-05 0.2144E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000603 0.7971E-05 0.3343E-05 0.8785E-05 0.8287E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000604 0.8028E-05 0.2695E-05 0.7926E-05 0.1113E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000605 0.8062E-05 0.2727E-05 0.7878E-05 0.1456E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000606 0.8203E-05 0.2894E-05 0.7912E-05 0.1880E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000607 0.8494E-05 0.2655E-05 0.8121E-05 0.2007E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000608 0.7632E-05 0.2911E-05 0.8405E-05 0.7173E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000609 0.7660E-05 0.2558E-05 0.7592E-05 0.9335E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000610 0.7674E-05 0.2591E-05 0.7536E-05 0.1264E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000611 0.7773E-05 0.2717E-05 0.7546E-05 0.1853E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000612 0.8050E-05 0.3073E-05 0.7716E-05 0.1755E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000613 0.7311E-05 0.2497E-05 0.7980E-05 0.8404E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000614 0.7326E-05 0.2597E-05 0.7270E-05 0.1261E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000615 0.7356E-05 0.2730E-05 0.7230E-05 0.1197E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000616 0.7472E-05 0.2407E-05 0.7273E-05 
0.1679E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000617 0.7710E-05 0.2565E-05 0.7447E-05 0.1685E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000618 0.7001E-05 0.2383E-05 0.7688E-05 0.8746E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000619 0.7013E-05 0.2502E-05 0.6966E-05 0.1204E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000620 0.7041E-05 0.2644E-05 0.6929E-05 0.1146E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000621 0.7156E-05 0.2291E-05 0.6974E-05 0.1591E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000622 0.7382E-05 0.2445E-05 0.7146E-05 0.1623E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000623 0.6708E-05 0.2272E-05 0.7382E-05 0.8292E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000624 0.6721E-05 0.2391E-05 0.6675E-05 0.1191E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000625 0.6750E-05 0.2527E-05 0.6640E-05 0.1111E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000626 0.6862E-05 0.2184E-05 0.6687E-05 0.1543E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000627 0.7083E-05 0.2334E-05 0.6856E-05 0.1556E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000628 0.6428E-05 0.2167E-05 0.7088E-05 0.8209E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000629 0.6442E-05 0.2285E-05 0.6398E-05 0.1150E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000630 0.6472E-05 0.2420E-05 0.6365E-05 0.1075E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000631 0.6583E-05 0.2081E-05 0.6412E-05 0.1491E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000632 0.6798E-05 0.2229E-05 0.6580E-05 0.1504E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000633 0.6161E-05 0.2066E-05 0.6808E-05 0.7918E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000634 0.6177E-05 0.2183E-05 0.6133E-05 0.1122E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000635 0.6206E-05 0.2314E-05 0.6103E-05 0.1040E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000636 0.6316E-05 0.1983E-05 0.6150E-05 0.1445E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000637 0.6527E-05 0.2128E-05 0.6315E-05 0.1450E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000638 0.5907E-05 0.1970E-05 0.6539E-05 0.7730E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000639 0.5923E-05 0.2085E-05 0.5881E-05 0.7311E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000640 0.5943E-05 0.1887E-05 0.5845E-05 0.8256E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000641 0.6003E-05 0.1919E-05 0.5852E-05 0.1301E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000642 0.6171E-05 0.2082E-05 0.5964E-05 0.1239E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000643 0.5664E-05 0.1868E-05 0.6135E-05 0.1374E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000644 0.5706E-05 0.2036E-05 0.6349E-05 0.1009E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000645 0.5833E-05 0.1816E-05 0.5594E-05 0.1131E-05 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000646 0.5954E-05 0.1906E-05 0.5587E-05 0.1605E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000647 0.6217E-05 0.2167E-05 0.5721E-05 0.1195E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000648 0.5435E-05 0.1787E-05 0.5957E-05 0.6667E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000649 0.5444E-05 0.1875E-05 0.5409E-05 0.9845E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000650 0.5463E-05 0.1981E-05 0.5380E-05 0.9117E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000651 0.5550E-05 0.1712E-05 0.5408E-05 0.1229E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000652 0.5715E-05 0.1829E-05 0.5537E-05 0.1281E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000653 0.5211E-05 0.1698E-05 0.5711E-05 0.6658E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000654 0.5220E-05 0.1788E-05 0.5190E-05 0.9082E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000655 0.5243E-05 0.1893E-05 0.5163E-05 0.8431E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000656 0.5326E-05 0.1631E-05 0.5200E-05 0.1197E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000657 0.5495E-05 0.1747E-05 0.5328E-05 0.1174E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000658 0.4998E-05 0.1620E-05 0.5511E-05 0.6311E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000659 0.5011E-05 0.1713E-05 0.4980E-05 0.6153E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000660 0.5024E-05 0.1553E-05 0.4949E-05 0.6902E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000661 0.5071E-05 0.1578E-05 0.4951E-05 0.1076E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000662 0.5202E-05 0.1709E-05 0.5039E-05 0.1397E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000663 0.5483E-05 0.1565E-05 0.5266E-05 0.6865E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000664 0.4762E-05 0.1734E-05 0.4778E-05 0.3466E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000665 0.4734E-05 0.1478E-05 0.4760E-05 0.5710E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000666 0.4716E-05 0.1479E-05 0.4746E-05 0.6979E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000667 0.4734E-05 0.1533E-05 0.4766E-05 0.1145E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000668 0.4822E-05 0.1713E-05 0.4868E-05 0.1458E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000669 0.5045E-05 0.1482E-05 0.5095E-05 0.6344E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000670 0.4531E-05 0.1646E-05 0.4550E-05 0.2832E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000671 0.4512E-05 0.1393E-05 0.4529E-05 0.5875E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000672 0.4496E-05 0.1394E-05 0.4514E-05 0.6486E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000673 0.4517E-05 0.1440E-05 0.4533E-05 0.1108E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000674 0.4608E-05 0.1611E-05 0.4626E-05 0.1363E-05 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 14 0000675 0.4833E-05 0.1396E-05 0.4847E-05 0.5949E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000676 0.4315E-05 0.1556E-05 0.4333E-05 0.2575E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000677 0.4296E-05 0.1314E-05 0.4314E-05 0.5763E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000678 0.4281E-05 0.1314E-05 0.4299E-05 0.6076E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000679 0.4300E-05 0.1355E-05 0.4318E-05 0.1053E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000680 0.4384E-05 0.1517E-05 0.4403E-05 0.1270E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000681 0.4593E-05 0.1316E-05 0.4611E-05 0.5578E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000682 0.4110E-05 0.1466E-05 0.4127E-05 0.2365E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000683 0.4092E-05 0.1239E-05 0.4109E-05 0.5451E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000684 0.4077E-05 0.1239E-05 0.4094E-05 0.5623E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000685 0.4094E-05 0.1275E-05 0.4111E-05 0.9831E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000686 0.4170E-05 0.1423E-05 0.4187E-05 0.1172E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000687 0.4364E-05 0.1239E-05 0.4380E-05 0.5216E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000688 0.3916E-05 0.1379E-05 0.3933E-05 0.2187E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000689 0.3899E-05 0.1168E-05 0.3916E-05 0.5091E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000690 0.3884E-05 0.1168E-05 0.3900E-05 0.5213E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000691 0.3898E-05 0.1200E-05 0.3914E-05 0.9139E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000692 0.3967E-05 0.1336E-05 0.3983E-05 0.1084E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000693 0.4145E-05 0.1168E-05 0.4161E-05 0.4870E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000694 0.3732E-05 0.1296E-05 0.3749E-05 0.2024E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000695 0.3715E-05 0.1101E-05 0.3732E-05 0.4723E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000696 0.3701E-05 0.1101E-05 0.3717E-05 0.4830E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000697 0.3712E-05 0.1130E-05 0.3728E-05 0.8475E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000698 0.3775E-05 0.1254E-05 0.3790E-05 0.1003E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000699 0.3938E-05 0.1100E-05 0.3953E-05 0.4548E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000700 0.3558E-05 0.1218E-05 0.3574E-05 0.1879E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000701 0.3542E-05 0.1038E-05 0.3558E-05 0.4379E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000702 0.3527E-05 0.1037E-05 0.3543E-05 0.4487E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000703 0.3537E-05 0.1064E-05 0.3553E-05 0.7870E-06 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 13 0000704 0.3593E-05 0.1178E-05 0.3608E-05 0.9306E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000705 0.3743E-05 0.1037E-05 0.3757E-05 0.4251E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000706 0.3393E-05 0.1146E-05 0.3409E-05 0.1748E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000707 0.3377E-05 0.9790E-06 0.3393E-05 0.4064E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000708 0.3363E-05 0.9780E-06 0.3379E-05 0.4176E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000709 0.3371E-05 0.1002E-05 0.3386E-05 0.7318E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000710 0.3422E-05 0.1108E-05 0.3437E-05 0.8655E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000711 0.3559E-05 0.9772E-06 0.3574E-05 0.3979E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000712 0.3237E-05 0.1078E-05 0.3252E-05 0.1630E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000713 0.3222E-05 0.9231E-06 0.3237E-05 0.3778E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000714 0.3208E-05 0.9220E-06 0.3223E-05 0.3897E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000715 0.3215E-05 0.9440E-06 0.3229E-05 0.6820E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000716 0.3261E-05 0.1042E-05 0.3275E-05 0.8072E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000717 0.3387E-05 0.9212E-06 0.3401E-05 0.3729E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000718 0.3089E-05 0.1015E-05 0.3104E-05 0.1524E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000719 0.3074E-05 0.8704E-06 0.3089E-05 0.3521E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000720 0.3061E-05 0.8693E-06 0.3075E-05 0.3644E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000721 0.3066E-05 0.8895E-06 0.3081E-05 0.6369E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000722 0.3108E-05 0.9805E-06 0.3122E-05 0.7545E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000723 0.3225E-05 0.8685E-06 0.3238E-05 0.3501E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000724 0.2949E-05 0.9553E-06 0.2963E-05 0.1428E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000725 0.2934E-05 0.8207E-06 0.2949E-05 0.3288E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000726 0.2921E-05 0.8196E-06 0.2936E-05 0.3414E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 13 0000727 0.2926E-05 0.8383E-06 0.2940E-05 0.5960E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000728 0.2964E-05 0.9232E-06 0.2978E-05 0.7067E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000729 0.3072E-05 0.8188E-06 0.3085E-05 0.1106E-05 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000730 0.3284E-05 0.1022E-05 0.3296E-05 0.3272E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000731 0.2799E-05 0.7956E-06 0.2813E-05 0.2863E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000732 0.2786E-05 0.8254E-06 0.2800E-05 0.5168E-06 0.0000E+00 Linear solve converged due to 
CONVERGED_RTOL iterations 14 0000733 0.2789E-05 0.8752E-06 0.2803E-05 0.5050E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000734 0.2825E-05 0.7685E-06 0.2838E-05 0.7574E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000735 0.2911E-05 0.8529E-06 0.2923E-05 0.9903E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000736 0.3094E-05 0.7992E-06 0.3106E-05 0.2797E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000737 0.2672E-05 0.7519E-06 0.2686E-05 0.2361E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000738 0.2657E-05 0.7763E-06 0.2671E-05 0.4487E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000739 0.2653E-05 0.8191E-06 0.2666E-05 0.3828E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000740 0.2674E-05 0.7185E-06 0.2687E-05 0.6412E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000741 0.2728E-05 0.7757E-06 0.2740E-05 0.7961E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000742 0.2859E-05 0.7356E-06 0.2871E-05 0.3786E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000743 0.2552E-05 0.8375E-06 0.2565E-05 0.1463E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000744 0.2543E-05 0.6833E-06 0.2556E-05 0.3232E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000745 0.2537E-05 0.6859E-06 0.2550E-05 0.3563E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000746 0.2552E-05 0.7188E-06 0.2564E-05 0.6264E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000747 0.2609E-05 0.8270E-06 0.2621E-05 0.7529E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000748 0.2746E-05 0.6984E-06 0.2757E-05 0.3513E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000749 0.2440E-05 0.7994E-06 0.2452E-05 0.1385E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000750 0.2431E-05 0.6440E-06 0.2444E-05 0.3151E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000751 0.2425E-05 0.6468E-06 0.2437E-05 0.3374E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000752 0.2439E-05 0.6764E-06 0.2451E-05 0.5898E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000753 0.2493E-05 0.7805E-06 0.2504E-05 0.7041E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000754 0.2622E-05 0.6583E-06 0.2633E-05 0.3263E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000755 0.2333E-05 0.7550E-06 0.2345E-05 0.1307E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000756 0.2325E-05 0.6072E-06 0.2337E-05 0.3001E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000757 0.2318E-05 0.6097E-06 0.2330E-05 0.3165E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000758 0.2331E-05 0.6361E-06 0.2342E-05 0.5517E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000759 0.2379E-05 0.7331E-06 0.2391E-05 0.6552E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000760 0.2499E-05 0.6199E-06 0.2509E-05 0.3036E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000761 0.2231E-05 0.7101E-06 0.2243E-05 0.1226E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL 
iterations 14 0000762 0.2223E-05 0.5725E-06 0.2235E-05 0.2818E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000763 0.2216E-05 0.5746E-06 0.2228E-05 0.2953E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000764 0.2227E-05 0.5982E-06 0.2238E-05 0.5141E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000765 0.2271E-05 0.6875E-06 0.2282E-05 0.6089E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000766 0.2380E-05 0.5836E-06 0.2390E-05 0.2826E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000767 0.2135E-05 0.6669E-06 0.2146E-05 0.1145E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000768 0.2127E-05 0.5398E-06 0.2138E-05 0.2630E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000769 0.2120E-05 0.5415E-06 0.2131E-05 0.2749E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000770 0.2129E-05 0.5628E-06 0.2140E-05 0.4782E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000771 0.2168E-05 0.6448E-06 0.2178E-05 0.5658E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000772 0.2267E-05 0.5495E-06 0.2277E-05 0.2634E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000773 0.2043E-05 0.6263E-06 0.2054E-05 0.1068E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000774 0.2035E-05 0.5089E-06 0.2046E-05 0.2448E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000775 0.2028E-05 0.5104E-06 0.2039E-05 0.2560E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000776 0.2035E-05 0.5297E-06 0.2046E-05 0.4451E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000777 0.2070E-05 0.6051E-06 0.2081E-05 0.5266E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000778 0.2161E-05 0.5175E-06 0.2170E-05 0.2459E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000779 0.1956E-05 0.5883E-06 0.1966E-05 0.9960E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000780 0.1948E-05 0.4799E-06 0.1958E-05 0.2280E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000781 0.1941E-05 0.4811E-06 0.1951E-05 0.2386E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000782 0.1947E-05 0.4987E-06 0.1957E-05 0.4147E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000783 0.1978E-05 0.5683E-06 0.1988E-05 0.4909E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000784 0.2060E-05 0.4875E-06 0.2070E-05 0.2298E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000785 0.1873E-05 0.5529E-06 0.1883E-05 0.9302E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000786 0.1865E-05 0.4524E-06 0.1875E-05 0.2125E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000787 0.1858E-05 0.4535E-06 0.1868E-05 0.2228E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000788 0.1863E-05 0.4697E-06 0.1873E-05 0.3870E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000789 0.1891E-05 0.5341E-06 0.1901E-05 0.4585E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000790 0.1966E-05 0.4594E-06 0.1975E-05 0.2152E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 
0000791 0.1794E-05 0.5200E-06 0.1804E-05 0.8701E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000792 0.1786E-05 0.4266E-06 0.1796E-05 0.1984E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000793 0.1779E-05 0.4275E-06 0.1789E-05 0.2084E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000794 0.1784E-05 0.4424E-06 0.1793E-05 0.3618E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000795 0.1809E-05 0.5023E-06 0.1818E-05 0.4291E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000796 0.1877E-05 0.4329E-06 0.1886E-05 0.6729E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000797 0.2011E-05 0.5743E-06 0.2019E-05 0.2009E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000798 0.1710E-05 0.4187E-06 0.1719E-05 0.1753E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000799 0.1703E-05 0.4402E-06 0.1712E-05 0.2185E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000800 0.1704E-05 0.3965E-06 0.1713E-05 0.2669E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000801 0.1718E-05 0.4178E-06 0.1727E-05 0.4518E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000802 0.1765E-05 0.4859E-06 0.1773E-05 0.5603E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000803 0.1869E-05 0.4247E-06 0.1877E-05 0.1685E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000804 0.1637E-05 0.3927E-06 0.1646E-05 0.1253E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000805 0.1629E-05 0.4084E-06 0.1638E-05 0.1838E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000806 0.1625E-05 0.3733E-06 0.1633E-05 0.1794E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000807 0.1629E-05 0.3834E-06 0.1637E-05 0.3506E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000808 0.1651E-05 0.4255E-06 0.1660E-05 0.4096E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000809 0.1714E-05 0.3846E-06 0.1722E-05 0.2152E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000810 0.1569E-05 0.4342E-06 0.1577E-05 0.7127E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000811 0.1562E-05 0.3548E-06 0.1571E-05 0.1716E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000812 0.1557E-05 0.3565E-06 0.1566E-05 0.1795E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000813 0.1562E-05 0.3749E-06 0.1570E-05 0.3341E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000814 0.1586E-05 0.4317E-06 0.1594E-05 0.3919E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000815 0.1650E-05 0.3656E-06 0.1658E-05 0.1952E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000816 0.1504E-05 0.4181E-06 0.1513E-05 0.7037E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000817 0.1499E-05 0.3344E-06 0.1507E-05 0.1670E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000818 0.1494E-05 0.3362E-06 0.1502E-05 0.1761E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000819 0.1498E-05 0.3529E-06 0.1506E-05 0.3154E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000820 
0.1522E-05 0.4092E-06 0.1530E-05 0.3739E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000821 0.1584E-05 0.3448E-06 0.1591E-05 0.1798E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000822 0.1443E-05 0.3967E-06 0.1451E-05 0.6871E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000823 0.1438E-05 0.3153E-06 0.1445E-05 0.1603E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000824 0.1433E-05 0.3170E-06 0.1441E-05 0.1688E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000825 0.1437E-05 0.3321E-06 0.1445E-05 0.2973E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000826 0.1459E-05 0.3859E-06 0.1466E-05 0.3532E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000827 0.1517E-05 0.3250E-06 0.1524E-05 0.1675E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000828 0.1385E-05 0.3746E-06 0.1392E-05 0.6593E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000829 0.1379E-05 0.2973E-06 0.1387E-05 0.1520E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000830 0.1375E-05 0.2988E-06 0.1382E-05 0.1600E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000831 0.1378E-05 0.3126E-06 0.1385E-05 0.2794E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000832 0.1398E-05 0.3630E-06 0.1405E-05 0.3322E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000833 0.1452E-05 0.3061E-06 0.1458E-05 0.5201E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000834 0.1556E-05 0.4235E-06 0.1562E-05 0.1545E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000835 0.1322E-05 0.2948E-06 0.1329E-05 0.1356E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000836 0.1317E-05 0.3128E-06 0.1325E-05 0.1688E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000837 0.1318E-05 0.2767E-06 0.1326E-05 0.2061E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000838 0.1330E-05 0.2955E-06 0.1337E-05 0.3485E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000839 0.1366E-05 0.3523E-06 0.1373E-05 0.4329E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000840 0.1448E-05 0.3030E-06 0.1454E-05 0.1301E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000841 0.1268E-05 0.2761E-06 0.1275E-05 0.9704E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000842 0.1262E-05 0.2894E-06 0.1269E-05 0.1420E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000843 0.1259E-05 0.2604E-06 0.1266E-05 0.1386E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000844 0.1263E-05 0.2696E-06 0.1269E-05 0.2708E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000845 0.1281E-05 0.3053E-06 0.1287E-05 0.3165E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000846 0.1329E-05 0.2721E-06 0.1335E-05 0.3443E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000847 0.1212E-05 0.2714E-06 0.1400E-05 0.1015E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000848 0.1225E-05 0.2521E-06 0.1218E-05 0.1711E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000849 0.1231E-05 
0.2651E-06 0.1208E-05 0.1986E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000850 0.1251E-05 0.2965E-06 0.1211E-05 0.2580E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000851 0.1292E-05 0.2523E-06 0.1234E-05 0.2584E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000852 0.1176E-05 0.2961E-06 0.1274E-05 0.2699E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000853 0.1184E-05 0.2468E-06 0.1323E-05 0.2085E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000854 0.1213E-05 0.2848E-06 0.1169E-05 0.2154E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000855 0.1243E-05 0.2365E-06 0.1168E-05 0.2706E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000856 0.1295E-05 0.2813E-06 0.1191E-05 0.2233E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000857 0.1138E-05 0.2386E-06 0.1237E-05 0.2829E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000858 0.1143E-05 0.2888E-06 0.1284E-05 0.1974E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000859 0.1171E-05 0.2300E-06 0.1131E-05 0.1947E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000860 0.1200E-05 0.2597E-06 0.1129E-05 0.2708E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000861 0.1247E-05 0.2300E-06 0.1151E-05 0.2390E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000862 0.1101E-05 0.2785E-06 0.1193E-05 0.2672E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000863 0.1108E-05 0.2230E-06 0.1238E-05 0.1981E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000864 0.1135E-05 0.2595E-06 0.1094E-05 0.2020E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000865 0.1163E-05 0.2140E-06 0.1093E-05 0.2575E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000866 0.1211E-05 0.2569E-06 0.1116E-05 0.2251E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000867 0.1065E-05 0.2163E-06 0.1159E-05 0.2674E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000868 0.1071E-05 0.2634E-06 0.1202E-05 0.1905E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000869 0.1098E-05 0.2087E-06 0.1059E-05 0.1901E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000870 0.1125E-05 0.2381E-06 0.1057E-05 0.2608E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000871 0.1169E-05 0.2091E-06 0.1080E-05 0.2297E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000872 0.1031E-05 0.2563E-06 0.1120E-05 0.2636E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000873 0.1039E-05 0.2024E-06 0.1161E-05 0.1937E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000874 0.1065E-05 0.2381E-06 0.1025E-05 0.1959E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000875 0.1092E-05 0.1943E-06 0.1024E-05 0.2515E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000876 0.1136E-05 0.2358E-06 0.1048E-05 0.2241E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000877 0.9978E-06 0.1967E-06 0.1089E-05 0.9582E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000878 0.1000E-05 0.2204E-06 
0.9984E-06 0.1168E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000879 0.1003E-05 0.1831E-06 0.9924E-06 0.1235E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000880 0.1011E-05 0.1919E-06 0.9919E-06 0.1972E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000881 0.1032E-05 0.2226E-06 0.1005E-05 0.2527E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000882 0.1080E-05 0.1961E-06 0.1041E-05 0.7799E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000883 0.9612E-06 0.1811E-06 0.9654E-06 0.6375E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000884 0.9550E-06 0.1883E-06 0.9610E-06 0.1182E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000885 0.9516E-06 0.2002E-06 0.9581E-06 0.1128E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000886 0.9526E-06 0.1732E-06 0.9593E-06 0.1765E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000887 0.9595E-06 0.1892E-06 0.9677E-06 0.2261E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000888 0.9823E-06 0.1793E-06 0.9912E-06 0.3389E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000889 0.1033E-05 0.2436E-06 0.1044E-05 0.8961E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000890 0.9188E-06 0.1730E-06 0.9236E-06 0.8349E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000891 0.9153E-06 0.1831E-06 0.9197E-06 0.1143E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000892 0.9145E-06 0.1620E-06 0.9186E-06 0.1318E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000893 0.9192E-06 0.1730E-06 0.9231E-06 0.2073E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000894 0.9350E-06 0.1653E-06 0.9381E-06 0.2763E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000895 0.9746E-06 0.2141E-06 0.9771E-06 0.8663E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000896 0.8837E-06 0.1609E-06 0.8881E-06 0.5772E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000897 0.8796E-06 0.1689E-06 0.8841E-06 0.1024E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000898 0.8775E-06 0.1522E-06 0.8818E-06 0.9834E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000899 0.8796E-06 0.1600E-06 0.8839E-06 0.1920E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000900 0.8911E-06 0.1865E-06 0.8952E-06 0.2240E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000901 0.9223E-06 0.1626E-06 0.9263E-06 0.7194E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000902 0.8499E-06 0.1509E-06 0.8542E-06 0.4501E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000903 0.8454E-06 0.1570E-06 0.8496E-06 0.1080E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000904 0.8424E-06 0.1674E-06 0.8465E-06 0.8239E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000905 0.8434E-06 0.1438E-06 0.8476E-06 0.1558E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000906 0.8498E-06 0.1579E-06 0.8537E-06 0.1819E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000907 0.8704E-06 0.1496E-06 0.8741E-06 
0.2941E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000908 0.9158E-06 0.2069E-06 0.9190E-06 0.7491E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000909 0.8135E-06 0.1447E-06 0.8175E-06 0.7709E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000910 0.8102E-06 0.1541E-06 0.8143E-06 0.9455E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000911 0.8096E-06 0.1347E-06 0.8136E-06 0.1188E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000912 0.8137E-06 0.1451E-06 0.8175E-06 0.1773E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000913 0.8276E-06 0.1382E-06 0.8313E-06 0.2453E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000914 0.8625E-06 0.1830E-06 0.8660E-06 0.7462E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000915 0.7828E-06 0.1345E-06 0.7866E-06 0.5273E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000916 0.7793E-06 0.1422E-06 0.7832E-06 0.8729E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000917 0.7776E-06 0.1265E-06 0.7814E-06 0.8797E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000918 0.7796E-06 0.1341E-06 0.7834E-06 0.1677E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000919 0.7903E-06 0.1587E-06 0.7939E-06 0.1985E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000920 0.8189E-06 0.1372E-06 0.8222E-06 0.2128E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000921 0.7498E-06 0.1370E-06 0.8614E-06 0.6295E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000922 0.7578E-06 0.1241E-06 0.7535E-06 0.1059E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000923 0.7621E-06 0.1340E-06 0.7479E-06 0.9907E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000924 0.7725E-06 0.1210E-06 0.7492E-06 0.1583E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000925 0.7943E-06 0.1437E-06 0.7615E-06 0.2014E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000926 0.8405E-06 0.1325E-06 0.7953E-06 0.6047E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000927 0.7262E-06 0.1215E-06 0.7288E-06 0.5658E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000928 0.7216E-06 0.1283E-06 0.7263E-06 0.7237E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000929 0.7193E-06 0.1128E-06 0.7245E-06 0.7697E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000930 0.7198E-06 0.1179E-06 0.7254E-06 0.1388E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000931 0.7260E-06 0.1374E-06 0.7333E-06 0.1672E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000932 0.7459E-06 0.1193E-06 0.7549E-06 0.2487E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000933 0.7860E-06 0.1297E-06 0.7976E-06 0.6162E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000934 0.6948E-06 0.1154E-06 0.6982E-06 0.3926E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000935 0.6918E-06 0.1054E-06 0.6948E-06 0.6731E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000936 0.6897E-06 0.1071E-06 0.6926E-06 0.8666E-07 
0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000937 0.6910E-06 0.1153E-06 0.6936E-06 0.1255E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000938 0.6986E-06 0.1066E-06 0.7006E-06 0.1748E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000939 0.7195E-06 0.1320E-06 0.7209E-06 0.2467E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000940 0.7642E-06 0.1197E-06 0.7644E-06 0.6077E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000941 0.6654E-06 0.1088E-06 0.6684E-06 0.3772E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000942 0.6624E-06 0.9833E-07 0.6656E-06 0.6891E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000943 0.6604E-06 0.9999E-07 0.6636E-06 0.8637E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000944 0.6619E-06 0.1075E-06 0.6650E-06 0.1264E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000945 0.6694E-06 0.9985E-07 0.6727E-06 0.1735E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000946 0.6904E-06 0.1240E-06 0.6937E-06 0.2459E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000947 0.7344E-06 0.1125E-06 0.7379E-06 0.5970E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000948 0.6372E-06 0.1021E-06 0.6401E-06 0.3768E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000949 0.6345E-06 0.9187E-07 0.6374E-06 0.6741E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000950 0.6328E-06 0.9356E-07 0.6357E-06 0.8536E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000951 0.6346E-06 0.1012E-06 0.6374E-06 0.1242E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 14 0000952 0.6428E-06 0.9341E-07 0.6454E-06 0.1715E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000953 0.6645E-06 0.1173E-06 0.6668E-06 0.2421E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000954 0.7094E-06 0.1060E-06 0.7114E-06 0.5860E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000955 0.6104E-06 0.9606E-07 0.6131E-06 0.3708E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000956 0.6079E-06 0.8580E-07 0.6106E-06 0.6685E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000957 0.6064E-06 0.8752E-07 0.6092E-06 0.8426E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000958 0.6084E-06 0.9503E-07 0.6111E-06 0.1227E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000959 0.6168E-06 0.8751E-07 0.6195E-06 0.1688E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000960 0.6387E-06 0.1109E-06 0.6412E-06 0.2382E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000961 0.6833E-06 0.9994E-07 0.6858E-06 0.5723E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000962 0.5847E-06 0.9036E-07 0.5874E-06 0.3639E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000963 0.5824E-06 0.8015E-07 0.5851E-06 0.6538E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000964 0.5812E-06 0.8191E-07 0.5838E-06 0.8244E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000965 0.5834E-06 0.8941E-07 0.5859E-06 0.1200E-06 0.0000E+00 
Linear solve converged due to CONVERGED_RTOL iterations 15 0000966 0.5920E-06 0.8198E-07 0.5944E-06 0.1651E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000967 0.6141E-06 0.1050E-06 0.6163E-06 0.2326E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000968 0.6586E-06 0.9431E-07 0.6606E-06 0.5572E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000969 0.5603E-06 0.8504E-07 0.5628E-06 0.3559E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000970 0.5582E-06 0.7487E-07 0.5607E-06 0.6397E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000971 0.5571E-06 0.7666E-07 0.5596E-06 0.8056E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000972 0.5594E-06 0.8409E-07 0.5618E-06 0.1172E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000973 0.5682E-06 0.7682E-07 0.5705E-06 0.1611E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000974 0.5901E-06 0.9944E-07 0.5923E-06 0.4690E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000975 0.5398E-06 0.7550E-07 0.5421E-06 0.3424E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000976 0.5373E-06 0.7993E-07 0.5397E-06 0.5520E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000977 0.5359E-06 0.7042E-07 0.5382E-06 0.5345E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000978 0.5366E-06 0.7436E-07 0.5389E-06 0.1048E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000979 0.5420E-06 0.8776E-07 0.5442E-06 0.1207E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000980 0.5581E-06 0.7620E-07 0.5602E-06 0.1860E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000981 0.5902E-06 0.8355E-07 0.5922E-06 0.4400E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000982 0.5175E-06 0.7420E-07 0.5197E-06 0.3114E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000983 0.5153E-06 0.6594E-07 0.5175E-06 0.4982E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000984 0.5138E-06 0.6736E-07 0.5160E-06 0.6724E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000985 0.5148E-06 0.7352E-07 0.5170E-06 0.9387E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000986 0.5205E-06 0.6736E-07 0.5226E-06 0.1334E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000987 0.5360E-06 0.8617E-07 0.5380E-06 0.1856E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000988 0.5692E-06 0.7759E-07 0.5710E-06 0.4562E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000989 0.4961E-06 0.6990E-07 0.4982E-06 0.2734E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000990 0.4940E-06 0.6156E-07 0.4961E-06 0.5183E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16 0000991 0.4927E-06 0.6299E-07 0.4948E-06 0.6321E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000992 0.4940E-06 0.6900E-07 0.4960E-06 0.9396E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000993 0.4999E-06 0.6316E-07 0.5019E-06 0.1278E-06 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 15 0000994 0.5160E-06 0.8159E-07 0.5178E-06 0.1829E-06 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 16

[Solver monitor output condensed: for each of time steps 0000995 through 0001576 the run prints the step number and five residual columns followed by the -ksp_converged_reason message. Essentially every solve reports "Linear solve converged due to CONVERGED_RTOL iterations N" with N between 15 and 19, and the leading residual decays smoothly from about 0.55E-06 to 0.18E-07 over this range. The single exception is one "Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21" message, printed immediately after the step 0001541 residual line, after which the solves converge normally again. Representative lines:

 0000995 0.5496E-06 0.7336E-07 0.5513E-06 0.4355E-07 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 16
 ...
 0001541 0.2298E-07 0.9802E-09 0.2328E-07 0.1400E-08 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21
 ...
 0001576 0.1751E-07 0.5640E-09 0.1733E-07 0.2065E-08 0.0000E+00]
Linear solve converged due to CONVERGED_RTOL iterations 18 0001577 0.1765E-07 0.4055E-09 0.1723E-07 0.2427E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001578 0.1794E-07 0.4803E-09 0.1727E-07 0.3544E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001579 0.1853E-07 0.6212E-09 0.1755E-07 0.4860E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001580 0.1970E-07 0.8314E-09 0.1832E-07 0.4381E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001581 0.1675E-07 0.9228E-09 0.1953E-07 0.1326E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001582 0.1693E-07 0.5492E-09 0.1676E-07 0.2196E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001583 0.1706E-07 0.4020E-09 0.1665E-07 0.2246E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001584 0.1733E-07 0.4704E-09 0.1667E-07 0.3392E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001585 0.1785E-07 0.6162E-09 0.1694E-07 0.4494E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001586 0.1893E-07 0.8181E-09 0.1765E-07 0.4167E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001587 0.1618E-07 0.8805E-09 0.1875E-07 0.1260E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001588 0.1636E-07 0.5224E-09 0.1620E-07 0.2206E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001589 0.1646E-07 0.3867E-09 0.1609E-07 0.2115E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001590 0.1668E-07 0.4460E-09 0.1610E-07 0.3236E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001591 0.1712E-07 0.5838E-09 0.1633E-07 0.4231E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001592 0.1806E-07 0.7683E-09 0.1694E-07 0.3924E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001593 0.1564E-07 0.8153E-09 0.1790E-07 0.1208E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001594 0.1580E-07 0.4880E-09 0.1566E-07 0.2160E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001595 0.1587E-07 0.3679E-09 0.1555E-07 0.1998E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001596 0.1606E-07 0.4204E-09 0.1555E-07 0.3067E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001597 0.1642E-07 0.5472E-09 0.1575E-07 0.3987E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001598 0.1724E-07 0.7152E-09 0.1627E-07 0.3743E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001599 0.1512E-07 0.7494E-09 0.1711E-07 0.1169E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001600 0.1526E-07 0.4542E-09 0.1513E-07 0.2099E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001601 0.1532E-07 0.3481E-09 0.1503E-07 0.1912E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001602 0.1547E-07 0.3952E-09 0.1502E-07 0.2922E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001603 0.1577E-07 0.5103E-09 0.1520E-07 0.3801E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001604 0.1649E-07 0.6632E-09 0.1566E-07 0.3608E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001605 0.1462E-07 0.6880E-09 0.1640E-07 0.1139E-08 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 18 0001606 0.1474E-07 0.4228E-09 0.1463E-07 0.2042E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001607 0.1479E-07 0.3292E-09 0.1453E-07 0.1850E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001608 0.1492E-07 0.3720E-09 0.1452E-07 0.2808E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001609 0.1518E-07 0.4765E-09 0.1467E-07 0.3665E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001610 0.1582E-07 0.6161E-09 0.1509E-07 0.3518E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001611 0.1413E-07 0.6337E-09 0.1576E-07 0.1117E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001612 0.1425E-07 0.3949E-09 0.1414E-07 0.1996E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001613 0.1428E-07 0.3120E-09 0.1404E-07 0.1807E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001614 0.1440E-07 0.3511E-09 0.1403E-07 0.2724E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001615 0.1464E-07 0.4464E-09 0.1418E-07 0.3568E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001616 0.1522E-07 0.5746E-09 0.1457E-07 0.3460E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001617 0.1366E-07 0.5866E-09 0.1518E-07 0.1102E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001618 0.1377E-07 0.3703E-09 0.1367E-07 0.1962E-08 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 58 0001619 0.1381E-07 0.2964E-09 0.1357E-07 0.1779E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001620 0.1391E-07 0.3322E-09 0.1356E-07 0.2663E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001621 0.1413E-07 0.4199E-09 0.1371E-07 0.3502E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001622 0.1468E-07 0.5381E-09 0.1408E-07 0.9602E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001623 0.1328E-07 0.4122E-09 0.1327E-07 0.7725E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001624 0.1320E-07 0.2423E-09 0.1322E-07 0.1193E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001625 0.1314E-07 0.2914E-09 0.1316E-07 0.1268E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001626 0.1311E-07 0.2702E-09 0.1314E-07 0.1994E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001627 0.1311E-07 0.3037E-09 0.1317E-07 0.2526E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001628 0.1323E-07 0.3698E-09 0.1330E-07 0.3777E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001629 0.1354E-07 0.4716E-09 0.1366E-07 0.3257E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001630 0.1409E-07 0.5177E-09 0.1270E-07 0.1213E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001631 0.1270E-07 0.3420E-09 0.1281E-07 0.1670E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001632 0.1261E-07 0.2684E-09 0.1285E-07 0.1823E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001633 0.1261E-07 0.3048E-09 0.1296E-07 0.2492E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001634 0.1276E-07 0.3848E-09 0.1319E-07 0.3461E-08 0.0000E+00 Linear solve 
converged due to CONVERGED_RTOL iterations 19 0001635 0.1316E-07 0.4991E-09 0.1374E-07 0.8705E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001636 0.1233E-07 0.3893E-09 0.1234E-07 0.6876E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001637 0.1228E-07 0.2219E-09 0.1227E-07 0.1130E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001638 0.1224E-07 0.2726E-09 0.1222E-07 0.1259E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001639 0.1222E-07 0.2517E-09 0.1220E-07 0.1952E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001640 0.1227E-07 0.2868E-09 0.1222E-07 0.2536E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001641 0.1244E-07 0.3537E-09 0.1235E-07 0.3734E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001642 0.1285E-07 0.4557E-09 0.1269E-07 0.8039E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001643 0.1186E-07 0.3726E-09 0.1186E-07 0.6068E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 20 0001644 0.1180E-07 0.2114E-09 0.1181E-07 0.1103E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001645 0.1176E-07 0.2639E-09 0.1177E-07 0.1223E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001646 0.1175E-07 0.2440E-09 0.1176E-07 0.1961E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001647 0.1179E-07 0.2829E-09 0.1181E-07 0.2550E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001648 0.1198E-07 0.3546E-09 0.1201E-07 0.3793E-08 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 1758 0001649 0.1241E-07 0.4638E-09 0.1246E-07 0.8208E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001650 0.1140E-07 0.3799E-09 0.1140E-07 0.6329E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001651 0.1135E-07 0.2040E-09 0.1135E-07 0.1148E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001652 0.1131E-07 0.2638E-09 0.1132E-07 0.1297E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001653 0.1132E-07 0.2421E-09 0.1132E-07 0.2056E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001654 0.1140E-07 0.2859E-09 0.1140E-07 0.2690E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 17 0001655 0.1163E-07 0.3648E-09 0.1163E-07 0.3961E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001656 0.1215E-07 0.4815E-09 0.1213E-07 0.8501E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 21 0001657 0.1096E-07 0.3949E-09 0.1096E-07 0.6489E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001658 0.1092E-07 0.1972E-09 0.1092E-07 0.1194E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001659 0.1089E-07 0.2641E-09 0.1090E-07 0.1348E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001660 0.1091E-07 0.2416E-09 0.1092E-07 0.2150E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001661 0.1101E-07 0.2905E-09 0.1102E-07 0.2806E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001662 0.1130E-07 0.3780E-09 0.1132E-07 0.4124E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001663 0.1190E-07 0.5026E-09 0.1192E-07 
0.8804E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001664 0.1054E-07 0.4124E-09 0.1055E-07 0.6829E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001665 0.1050E-07 0.1914E-09 0.1051E-07 0.1250E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001666 0.1049E-07 0.2671E-09 0.1049E-07 0.1424E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001667 0.1053E-07 0.2432E-09 0.1053E-07 0.2260E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001668 0.1067E-07 0.2982E-09 0.1067E-07 0.2953E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001669 0.1102E-07 0.3944E-09 0.1103E-07 0.7625E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19 0001670 0.1018E-07 0.3263E-09 0.1019E-07 0.3999E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001671 0.1014E-07 0.1790E-09 0.1014E-07 0.1009E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001672 0.1011E-07 0.2263E-09 0.1011E-07 0.9446E-09 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 19 0001673 0.1011E-07 0.2117E-09 0.1011E-07 0.1723E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001674 0.1017E-07 0.2452E-09 0.1018E-07 0.2096E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 18 0001675 0.1038E-07 0.3179E-09 0.1039E-07 0.3274E-08 0.0000E+00 Linear solve converged due to CONVERGED_RTOL iterations 20 0001676 0.1082E-07 0.4166E-09 0.1084E-07 0.6585E-09 0.0000E+00 Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 19 0001677 0.9792E-08 0.3539E-09 0.9797E-08 0.6003E-09 0.0000E+00 TIME FOR CALCULATION: 0.4300E+05 L2-NORM ERROR U VELOCITY 2.794552119135372E-005 L2-NORM ERROR V VELOCITY 2.790982482146795E-005 L2-NORM ERROR W VELOCITY 2.911527030306516E-005 L2-NORM ERROR ABS. VELOCITY 3.166182227160789E-005 L2-NORM ERROR PRESSURE 1.392953546657367E-003 *** CALCULATION FINISHED - SEE RESULTS *** ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./caffa3d.MB.lnx on a arch-openmpi-opt-intel-hlr-ext named hpb0241 with 1 processor, by gu08vomo Thu Feb 19 22:43:07 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015

                          Max       Max/Min     Avg        Total
Time (sec):            4.301e+04    1.00000   4.301e+04
Objects:               3.020e+05    1.00000   3.020e+05
Flops:                 1.498e+13    1.00000   1.498e+13   1.498e+13
Flops/sec:             3.484e+08    1.00000   3.484e+08   3.484e+08
MPI Messages:          0.000e+00    0.00000   0.000e+00   0.000e+00
MPI Message Lengths:   0.000e+00    0.00000   0.000e+00   0.000e+00
MPI Reductions:        0.000e+00    0.00000

Summary of Stages:          ----- Time ------      ----- Flops -----
                              Avg     %Total         Avg     %Total
 0:  Main Stage:         6.3111e+03    14.7%    6.5910e+06    0.0%
 1:    MOMENTUM:         2.5050e+03     5.8%    1.4158e+12    9.4%
 2:    PRESCORR:         3.4194e+04    79.5%    1.3567e+13   90.6%

[Per-event tables and memory-usage tables condensed. The PRESCORR stage is
 dominated by KSPSolve (1677 calls, 3.4140e+04 s, 79% of total time, 90% of
 flops); within it, PCSetUp takes 2.2016e+04 s (51% of total time), PCApply
 1.0605e+04 s (25% time, 56% flops), MatSOR 8.3032e+03 s (19% time, 42% flops),
 MatTrnMatMult 1.0475e+04 s (24%), PCGAMGcoarse_AGG 1.1403e+04 s (27%) and
 MatPtAP 4.0040e+03 s (9%). The MOMENTUM stage (GMRES with ILU) spends
 2.1769e+03 s in KSPSolve over 5031 calls. The object counts show the GAMG
 hierarchy being rebuilt for every outer iteration: 41925 matrices, 154290
 vectors and 5036 Krylov solvers/preconditioners are created and destroyed in
 the PRESCORR stage alone.]

#PETSc Option Table entries:
-log_summary
-momentum_ksp_type gmres
-options_left
-pressure_ksp_converged_reason
-pressure_mg_coarse_sub_pc_type svd
-pressure_mg_levels_ksp_rtol 1e-4
-pressure_mg_levels_ksp_type richardson
-pressure_mg_levels_pc_type sor
-pressure_pc_gamg_agg_nsmooths 1
-pressure_pc_type gamg
#End of PETSc Option Table entries

Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml
[Compiler, include-path and link-line details condensed: Intel 2015 compilers through the OpenMPI 1.8.2 wrappers, MKL BLAS/LAPACK; libraries compiled Sun Feb 1 16:09:22 2015 on hla0003.]

There are no unused options.

From fabian.gabel at stud.tu-darmstadt.de Fri Feb 20 04:19:27 2015
From: fabian.gabel at stud.tu-darmstadt.de (Fabian Gabel)
Date: Fri, 20 Feb 2015 11:19:27 +0100
Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions
In-Reply-To: 
References: <1424186090.3298.2.camel@gmail.com> <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov>
Message-ID: <1424427567.3272.5.camel@stud.tu-darmstadt.de>

Hong,

you find attached the stand-alone code. It comes with a small script
(runtest.sh) for compilation and execution. If you are using the gnu
compiler or want compilation with debugging flags just uncomment the
corresponding line. Compilation should terminate successfully if the
environment variable PETSC_DIR is set properly (root directory of PETSc
installation).
Fabian

On Do, 2015-02-19 at 11:45 -0600, Hong wrote:
> Fabian,
> Too much time was spent on the matrix operations during setup phase,
> which has plenty room for optimization.
> Can you provide us a stand-alone code used in your experiment so we can
> investigate how to make our gamg more efficient?
>
> Hong
>
> On Wed, Feb 18, 2015 at 12:20 PM, Barry Smith wrote:
> >
> >   Fabian,
> >
> >     CG requires that the preconditioner be symmetric positive definite.
> > ICC, even if given a symmetric positive definite matrix, can generate an
> > indefinite preconditioner.
> >
> >     Similarly, if an algebraic multigrid application is not "strong
> > enough" it can also result in a preconditioner that is indefinite.
> >
> >     You never want to use ICC for pressure type problems; it cannot
> > compete with multigrid for large problems, so let's forget about ICC and
> > focus on the GAMG.
> >
> > > -pressure_mg_coarse_sub_pc_type svd
> > > -pressure_mg_levels_ksp_rtol 1e-4
> > > -pressure_mg_levels_ksp_type richardson
> > > -pressure_mg_levels_pc_type sor
> > > -pressure_pc_gamg_agg_nsmooths 1
> > > -pressure_pc_type gamg
> >
> >     There are many, many tuning parameters for MG.
> >
> >     First, is your pressure problem changing dramatically at each new
> > solve? That is, for example, is the mesh moving or are there very
> > different numerical values in the matrix? Is the nonzero structure of
> > the pressure matrix changing? Currently the entire GAMG process is done
> > for each new solve; if you use the flag
> >
> >     -pressure_pc_gamg_reuse_interpolation true
> >
> > it will create the interpolation needed for GAMG once and reuse it for
> > all the solves. Please try that and see what happens.
> >
> >     Then I will have many more suggestions.
> >
> >     Barry
> >
> > > On Feb 17, 2015, at 9:14 AM, Fabian Gabel wrote:
> > >
> > > Dear PETSc team,
> > >
> > > I am trying to optimize the solver parameters for the linear system I
> > > get when I discretize the pressure correction equation (Poisson
> > > equation with Neumann boundary conditions) in a SIMPLE-type algorithm
> > > using a finite volume method.
> > >
> > > The resulting system is symmetric and positive semi-definite. A basis
> > > to the associated nullspace has been provided to the KSP object.
> > >
> > > Using a CG solver with ICC preconditioning, the solver needs a lot of
> > > inner iterations to converge (-ksp_monitor -ksp_view output attached
> > > for a case with approx. 2e6 unknowns; the lines beginning with 000XXXX
> > > show the relative residual regarding the initial residual in the outer
> > > iteration no. 1 for the variables u,v,w,p). Furthermore, I don't quite
> > > understand why the solver reports
> > >
> > > Linear solve did not converge due to DIVERGED_INDEFINITE_PC
> > >
> > > at the later stages of my Picard iteration process (iteration 0001519).
> > >
> > > I then tried out CG+GAMG preconditioning with success regarding the
> > > number of inner iterations, but without advantages regarding wall time
> > > (output attached). Also, the DIVERGED_INDEFINITE_PC reason shows up
> > > repeatedly after iteration 0001487. I used the following options
> > >
> > > -pressure_mg_coarse_sub_pc_type svd
> > > -pressure_mg_levels_ksp_rtol 1e-4
> > > -pressure_mg_levels_ksp_type richardson
> > > -pressure_mg_levels_pc_type sor
> > > -pressure_pc_gamg_agg_nsmooths 1
> > > -pressure_pc_type gamg
> > >
> > > I would like to get an opinion on how the solver performance could be
> > > increased further. -log_summary shows that my code spends 80% of the
> > > time solving the linear systems for the pressure correction (STAGE 2:
> > > PRESSCORR).
> > > Furthermore, do you know what could be causing the
> > > DIVERGED_INDEFINITE_PC converged reason?
> > >
> > > Regards,
> > > Fabian Gabel

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 128_1_1_seg_selfcontained.tgz
Type: application/x-compressed-tar
Size: 1895410 bytes
Desc: not available
URL: 

From bsmith at mcs.anl.gov Fri Feb 20 07:47:32 2015
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 20 Feb 2015 07:47:32 -0600
Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions
In-Reply-To: <1424424909.3272.1.camel@gmail.com>
References: <1424186090.3298.2.camel@gmail.com> <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov> <1424424909.3272.1.camel@gmail.com>
Message-ID: <0449A7FE-7C7F-4CE2-AC8D-19E9A7E01DD0@mcs.anl.gov>

   Excellent! Now the only real hopes for improvement are

   1) run on an Intel system with the highest possible achievable memory
      bandwidth (this could possibly speed up the code by 50%)

   2) to decrease the iteration counts a bit, try
      -pressure_mg_levels_ksp_type chebyshev

   3) some optimization of the sequential MatPtAPNumeric(); we would have
      to do this, and it could shave off a small amount of time

   Barry

   Here is the percentage of time spent in the pressure correction:

      MatMult          11
      MatMultAdd        4
      MatSOR           57
      MatPtAPNumeric   17

> On Feb 20, 2015, at 3:35 AM, Fabian Gabel wrote:
>
> Barry,
>
>> First, is your pressure problem changing dramatically at each new
>> solve? That is, for example, is the mesh moving or are there very
>> different numerical values in the matrix? Is the nonzero structure of
>> the pressure matrix changing?
>
> No moving grids, the non-zero structure is maintained throughout the
> entire solution process. I am not sure about the "very different
> numerical values". I determined the minimal matrix coefficient to be
> approx -5e-7 and the maximal matrix coefficient to be 3e-6 (but once I
> use block structured grids with locally refined blocks the range will
> become wider), but there are some rows containing only a 1 on the
> diagonal. This comes from the variable indexing I use, which includes
> boundary values. If this should present a problem, I think I could
> scale the corresponding rows with a factor depending on the
> maximal/minimal element of the matrix.
>
>> Currently the entire GAMG process is done for each new solve; if you
>> use the flag
>>
>>    -pressure_pc_gamg_reuse_interpolation true
>>
>> it will create the interpolation needed for GAMG once and reuse it for
>> all the solves. Please try that and see what happens.
>
> I attached the output for the additional solver option
> (-reuse_interpolation). Since there appear to be some inconsistencies
> with the previous output file for the GAMG solve I provided, I'll
> attach the results for the solution process without the flag for
> reusing the interpolation once again. So far wall clock time has been
> reduced by almost 50%.
>
> Fabian
>
>> Then I will have many more suggestions.
>>
>>    Barry
>>
>>> On Feb 17, 2015, at 9:14 AM, Fabian Gabel wrote:
>>> [...]
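A minimal sketch of how the pressure solve discussed above might be driven from code, assuming the petsc-3.5-era C API. The function name and structure are hypothetical; only the option names are taken from the messages in this thread, and in practice the same settings can simply be given on the command line.

    /* Hedged sketch, not from the original messages: wiring up a prefixed
       "pressure" solve so that the options discussed in this thread apply. */
    #include <petscksp.h>

    PetscErrorCode SetupPressureSolver(KSP ksp, Mat A, MatNullSpace nullsp)
    {
      PetscErrorCode ierr;

      /* Everything below is picked up through this prefix,
         e.g. -pressure_pc_type gamg */
      ierr = KSPSetOptionsPrefix(ksp, "pressure_");CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
      ierr = KSPSetNullSpace(ksp, nullsp);CHKERRQ(ierr);  /* singular Neumann problem */

      /* Seed the options database with the settings from the thread,
         including Barry's two suggestions.  Note: PetscOptionsSetValue()
         overwrites any value already in the database, so a production code
         would usually leave these to the command line instead. */
      ierr = PetscOptionsSetValue("-pressure_pc_type", "gamg");CHKERRQ(ierr);
      ierr = PetscOptionsSetValue("-pressure_pc_gamg_agg_nsmooths", "1");CHKERRQ(ierr);
      ierr = PetscOptionsSetValue("-pressure_pc_gamg_reuse_interpolation", "true");CHKERRQ(ierr);
      ierr = PetscOptionsSetValue("-pressure_mg_levels_ksp_type", "chebyshev");CHKERRQ(ierr);

      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      return 0;
    }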
From mfadams at lbl.gov Fri Feb 20 11:05:40 2015
From: mfadams at lbl.gov (Mark Adams)
Date: Fri, 20 Feb 2015 12:05:40 -0500
Subject: [petsc-users] GAMG having very large coarse problem?
In-Reply-To: 
References: 
Message-ID: 

Not sure what is going on. It stopped coarsening for some reason. Does the
code set the number of levels at 2?

Could you try removing the -pc_gamg_coarse_eq_limit 200 ... maybe it is
getting junk. Also add -pc_gamg_verbose 2. That might give me a clue
(probably not).

Mark
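A hedged illustration of what Mark is asking about (not from his message), assuming the petsc-3.5-era PCGAMGSetCoarseEqLim()/PCGAMGSetNlevels() interfaces: if a code caps the hierarchy at two levels, GAMG stops after a single coarsening no matter what -pc_gamg_coarse_eq_limit requests. Everything except the PETSc calls themselves is invented for the sketch.

    /* Sketch only: how a code could (perhaps inadvertently) constrain GAMG. */
    #include <petscksp.h>

    PetscErrorCode ConfigureGAMG(KSP ksp)
    {
      PC             pc;
      PetscErrorCode ierr;

      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);

      /* Equivalent of -pc_gamg_coarse_eq_limit 200: keep coarsening until
         the coarsest grid has at most roughly 200 equations. */
      ierr = PCGAMGSetCoarseEqLim(pc, 200);CHKERRQ(ierr);

      /* This is the kind of call Mark is asking about: if active, it caps
         the hierarchy at two levels regardless of the coarse equation limit,
         leaving a very large "coarse" grid for the direct solver. */
      /* ierr = PCGAMGSetNlevels(pc, 2);CHKERRQ(ierr); */

      return 0;
    }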
On Wed, Feb 18, 2015 at 10:48 PM, Barry Smith wrote:

>    Mark,
>
>      When I run ksp/ksp/examples/tutorials/ex45 I get a VERY large coarse
> problem. It seems to ignore the -pc_gamg_coarse_eq_limit 200 argument. Any
> idea what is going on?
>
>    Thanks
>
>    Barry
>
> $ ./ex45 -da_refine 3 -pc_type gamg -ksp_monitor -ksp_view -log_summary -pc_gamg_coarse_eq_limit 200
>   0 KSP Residual norm 2.790769524030e+02
>   1 KSP Residual norm 4.484052193577e+01
>   2 KSP Residual norm 2.409368790441e+00
>   3 KSP Residual norm 1.553421589919e-01
>   4 KSP Residual norm 9.821441923699e-03
>   5 KSP Residual norm 5.610434857134e-04
>
> [-ksp_view and -log_summary output condensed: the outer KSP is GMRES with
> PC gamg (MULTIPLICATIVE, levels=2, cycles=v, Galerkin coarse grid matrices).
> The coarse grid solver is gmres/bjacobi with an LU sub-solver on a seqaij
> matrix with rows=16587, cols=16587 and 500315 nonzeros; the LU factor needed
> a fill ratio of 36.4391 and 1.8231e+07 nonzeros. Level 1 is chebyshev
> (eigenvalue estimates min = 0.0976343, max = 2.05032) with SOR on the fine
> matrix, rows=117649, cols=117649, 809137 nonzeros. Residual norm 3.81135e-05.
> The -log_summary for the run (1 processor, total time 1.103e+01 s,
> 1.756e+10 flops) is dominated by the coarse direct solve: MatLUFactorSym
> 3.4310e-01 s, MatLUFactorNum 9.8038e+00 s (89% of the time, 96% of the
> flops), PCSetUp 1.0587e+01 s (96%), PCApply 1.0503e+01 s (95%).]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mfadams at lbl.gov Fri Feb 20 11:18:46 2015
From: mfadams at lbl.gov (Mark Adams)
Date: Fri, 20 Feb 2015 12:18:46 -0500
Subject: [petsc-users] Efficient Use of GAMG for Poisson Equation with Full Neumann Boundary Conditions
In-Reply-To: 
References: <1424186090.3298.2.camel@gmail.com> <54828F31-EBDC-4F83-9EE8-ECED68A56443@mcs.anl.gov>
Message-ID: 

On Thu, Feb 19, 2015 at 12:45 PM, Hong wrote:

> Fabian,
> Too much time was spent on the matrix operations during setup phase, which
> has plenty room for optimization.
>
>> >> Similarly if an algebraic multigrid application is not "strong enough" >> it can also result in a preconditioner that is indefinite. >> >> You never want to use ICC for pressure type problems it cannot compete >> with multigrid for large problems so let's forget about ICC and focus on >> the GAMG. >> >> > -pressure_mg_coarse_sub_pc_type svd >> > -pressure_mg_levels_ksp_rtol 1e-4 >> > -pressure_mg_levels_ksp_type richardson >> > -pressure_mg_levels_pc_type sor >> > -pressure_pc_gamg_agg_nsmooths 1 >> > -pressure_pc_type gamg >> >> There are many many tuning parameters for MG. >> >> First, is your pressure problem changing dramatically at each new >> solver? That is, for example, is the mesh moving or are there very >> different numerical values in the matrix? Is the nonzero structure of the >> pressure matrix changing? Currently the entire GAMG process is done for >> each new solve, if you use the flag >> >> -pressure_pc_gamg_reuse_interpolation true >> >> it will create the interpolation needed for GAMG once and reuse it for >> all the solves. Please try that and see what happens. >> >> Then I will have many more suggestions. >> >> >> Barry >> >> >> >> > On Feb 17, 2015, at 9:14 AM, Fabian Gabel >> wrote: >> > >> > Dear PETSc team, >> > >> > I am trying to optimize the solver parameters for the linear system I >> > get, when I discretize the pressure correction equation Poisson equation >> > with Neumann boundary conditions) in a SIMPLE-type algorithm using a >> > finite volume method. >> > >> > The resulting system is symmetric and positive semi-definite. A basis to >> > the associated nullspace has been provided to the KSP object. >> > >> > Using a CG solver with ICC preconditioning the solver needs a lot of >> > inner iterations to converge (-ksp_monitor -ksp_view output attached for >> > a case with approx. 2e6 unknowns; the lines beginning with 000XXXX show >> > the relative residual regarding the initial residual in the outer >> > iteration no. 1 for the variables u,v,w,p). Furthermore I don't quite >> > understand, why the solver reports >> > >> > Linear solve did not converge due to DIVERGED_INDEFINITE_PC >> > >> > at the later stages of my Picard iteration process (iteration 0001519). >> > >> > I then tried out CG+GAMG preconditioning with success regarding the >> > number of inner iterations, but without advantages regarding wall time >> > (output attached). Also the DIVERGED_INDEFINITE_PC reason shows up >> > repeatedly after iteration 0001487. I used the following options >> > >> > -pressure_mg_coarse_sub_pc_type svd >> > -pressure_mg_levels_ksp_rtol 1e-4 >> > -pressure_mg_levels_ksp_type richardson >> > -pressure_mg_levels_pc_type sor >> > -pressure_pc_gamg_agg_nsmooths 1 >> > -pressure_pc_type gamg >> > >> > I would like to get an opinion on how the solver performance could be >> > increased further. -log_summary shows that my code spends 80% of the >> > time solving the linear systems for the pressure correction (STAGE 2: >> > PRESSCORR). Furthermore, do you know what could be causing the >> > DIVERGED_INDEFINITE_PC converged reason? >> > >> > Regards, >> > Fabian Gabel >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From salazardetroya at gmail.com Fri Feb 20 13:10:40 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Fri, 20 Feb 2015 13:10:40 -0600 Subject: [petsc-users] DMNetworkSetSizes called by all processors Message-ID: Hi I was checking this example: http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/network/pflow/pf.c.html In line 443, the data is read by processor 0 and DMNetworkSetSizes is called in 461 by all processors, since it is collective. However, the data read by processor 0 has not been broadcasted and numEdges and numVertices and have still a value of zero for the other processors. Does this mean that the other processors create a dummy DMNetwork and wait to receive their partition once the partitioning is called in line 508? Is this a standard procedure to create meshes in parallel? Thanks Miguel -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Fri Feb 20 13:24:46 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Fri, 20 Feb 2015 19:24:46 +0000 Subject: [petsc-users] DMNetworkSetSizes called by all processors In-Reply-To: Message-ID: Miguel, In this example the DM is created initially only on proc. 0. Once the DM gets partitioned (DMNetworkDistribute()), another DM is created that has the appropriate node,edge,component info on each processor. Shri From: Miguel Angel Salazar de Troya > Date: Fri, 20 Feb 2015 13:10:40 -0600 To: "petsc-users at mcs.anl.gov" > Subject: [petsc-users] DMNetworkSetSizes called by all processors Hi I was checking this example: http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/network/pflow/pf.c.html In line 443, the data is read by processor 0 and DMNetworkSetSizes is called in 461 by all processors, since it is collective. However, the data read by processor 0 has not been broadcasted and numEdges and numVertices and have still a value of zero for the other processors. Does this mean that the other processors create a dummy DMNetwork and wait to receive their partition once the partitioning is called in line 508? Is this a standard procedure to create meshes in parallel? Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Sun Feb 22 06:50:30 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Sun, 22 Feb 2015 07:50:30 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" Message-ID: Hi all, I've implemented a solver for a contact problem using SNES. The sparsity pattern of the jacobian matrix needs to change at each nonlinear iteration (because the elements which are in contact can change), so I tried to deal with this by calling MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation during each iteration in order to update the preallocation. 
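A rough sketch of that pattern, for reference; the helper name and the d_nnz/o_nnz arrays below are placeholders, not code from the actual solver:

    #include <petscmat.h>

    /* Re-preallocate an AIJ Jacobian for a new sparsity pattern.  Calling both
       routines is the usual idiom; only the call matching the matrix type has
       any effect.  d_nnz/o_nnz hold the new per-row nonzero counts. */
    PetscErrorCode reset_preallocation(Mat J, const PetscInt d_nnz[], const PetscInt o_nnz[])
    {
      PetscErrorCode ierr;
      ierr = MatSeqAIJSetPreallocation(J, 0, d_nnz);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(J, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
      return 0;
    }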
This seems to work fine in serial, but with two or more MPI processes I run into the error "nnz cannot be greater than row length", e.g.: nnz cannot be greater than row length: local row 528 value 12 rowlength 0 This error is from the call to MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in MatMPIAIJSetPreallocation_MPIAIJ. Any guidance on what the problem might be would be most appreciated. For example, I was wondering if there is a problem with calling SetPreallocation on a matrix that has already been preallocated? Some notes: - I'm using PETSc via libMesh - The code that triggers this issue is available as a PR on the libMesh github repo, in case anyone is interested: https://github.com/libMesh/libmesh/pull/460/ - I can try to make a minimal pure-PETSc example that reproduces this error, if that would be helpful. Many thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronalcelayavzla at gmail.com Sun Feb 22 09:50:25 2015 From: ronalcelayavzla at gmail.com (Ronal Celaya) Date: Sun, 22 Feb 2015 11:20:25 -0430 Subject: [petsc-users] PETSc publications In-Reply-To: References: Message-ID: Hi. The link http://www.mcs.anl.gov/~kaushik/Papers/pcfd99_gkks.pdf gives a 404 error I've found this link http://www.cs.odu.edu/~keyes/papers/pcfd99_gkks.pdf I think is the same article Regards, On Thu, Feb 19, 2015 at 3:10 PM, Matthew Knepley wrote: > On Thu, Feb 19, 2015 at 1:29 PM, Ronal Celaya > wrote: > >> Are there publications and/or documentation that could help me gain an >> understanding of the algorithms and architecture of: >> >> 1. PETSc's sparse matrix-vector multiplication >> > > There is nice stuff in: > http://www.mcs.anl.gov/~kaushik/Papers/pcfd99_gkks.pdf > and several discussions in the slides on the Tutorials page. > > >> 2. PETSc's CG algorithm >> >> I need to gain a deep and thorough understanding of these, but would >> prefer not to start with studying the code first. Any recommendations as to >> how to best approach my study I'd appreciate. I know how to use PETSc, and >> have a working knowledge of numerical linear algebra parallel algorithms. >> > > There is nothing special about -pc_type cg. It follows Saad's book ( > http://www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf) or > http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf > > Thanks, > > Matt > > >> Thanks in advance! >> >> -- >> Ronal Celaya >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- Ronal Celaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Feb 22 09:58:27 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 22 Feb 2015 09:58:27 -0600 Subject: [petsc-users] PETSc publications In-Reply-To: References: Message-ID: On Sun, Feb 22, 2015 at 9:50 AM, Ronal Celaya wrote: > Hi. > The link http://www.mcs.anl.gov/~kaushik/Papers/pcfd99_gkks.pdf gives a > 404 error > > I've found this link http://www.cs.odu.edu/~keyes/papers/pcfd99_gkks.pdf > I think is the same article > Yes, that is the same. Thanks, Matt > Regards, > > On Thu, Feb 19, 2015 at 3:10 PM, Matthew Knepley > wrote: > >> On Thu, Feb 19, 2015 at 1:29 PM, Ronal Celaya >> wrote: >> >>> Are there publications and/or documentation that could help me gain an >>> understanding of the algorithms and architecture of: >>> >>> 1. 
PETSc's sparse matrix-vector multiplication >>> >> >> There is nice stuff in: >> http://www.mcs.anl.gov/~kaushik/Papers/pcfd99_gkks.pdf >> and several discussions in the slides on the Tutorials page. >> >> >>> 2. PETSc's CG algorithm >>> >>> I need to gain a deep and thorough understanding of these, but would >>> prefer not to start with studying the code first. Any recommendations as to >>> how to best approach my study I'd appreciate. I know how to use PETSc, and >>> have a working knowledge of numerical linear algebra parallel algorithms. >>> >> >> There is nothing special about -pc_type cg. It follows Saad's book ( >> http://www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf) or >> http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf >> >> Thanks, >> >> Matt >> >> >>> Thanks in advance! >>> >>> -- >>> Ronal Celaya >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > Ronal Celaya > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dngoldberg at gmail.com Sun Feb 22 11:00:12 2015 From: dngoldberg at gmail.com (Daniel Goldberg) Date: Sun, 22 Feb 2015 17:00:12 +0000 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) Message-ID: Hello In the code I am developing, there is a point where I will need to solve a linear system with the same matrix but different right hand sides. It is a time-dependent model and the matrix in question will change each timestep, but for a single timestep ~30-40 linear solves will be required, with the same matrix but differing right hand sides. Each right hand side depends on the previous, so they cannot all be solved simultaneously. Generally I solve the matrix via CG with Block Jacobi / ILU(0) preconditioning. I don't persist the matrix in between solves (i.e. I destroy the Mat, KSP and Vec objects after each linear solve) because I do not get the feeling this makes a large difference to performance and it is easier in dealing with the rest of the model code, which does not use PETSc. The matrix is positive definite with sparsity of 18 nonzero elements per row, and generally the linear system is smaller than 1 million degrees of freedom. If my matrix was dense, then likely retaining LU factors for foward/backward substitution would be a good strategy (though I'm not sure if this is easily doable using PETSc direct solvers). But given the sparsity I am unsure of whether I can take advantage of using the same matrix 30-40 times in a row. The comments in ex16.c state that when a KSP object is used multiple times for a linear solve, the preconditioner operations are not done each time -- and so I figured that if I changed my preconditioning parameters I might be able to make subsequent solves with the same KSP object faster. I tried an experment in which I increased pc_factor_levels, and then created a number of random vectors, e.g. vec_1, vec_2, vec_3... and called KSPSolve(ksp, vec_i, solution_i) sequentially with the same ksp object. 
I did see a decrease in the number of CG iterations required as pc_factor_levels was increased, as well as an increase in time for the first linear solve, but no decrease in time required for subsequent solves. *Should* there be a timing decrease? But more generally is there a way to optimize the fact I'm solving 40 linear systems with the same matrix? Apologies for not providing a copy of the relevant bits of code -- this is my first email to the group, and a rather long one already -- happy to be more concrete about anything I've said above. Thanks Dan -- Daniel Goldberg, PhD Lecturer in Glaciology School of Geosciences, University of Edinburgh Geography Building, Drummond Street, Edinburgh EH8 9XP em: D an.Goldberg at ed.ac.uk web: http://ocean.mit.edu/~dgoldberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Sun Feb 22 11:01:44 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Sun, 22 Feb 2015 11:01:44 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel Message-ID: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? Thanks Miguel -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Sun Feb 22 11:37:19 2015 From: fd.kong at siat.ac.cn (Fande Kong) Date: Sun, 22 Feb 2015 10:37:19 -0700 Subject: [petsc-users] any examples about aspin, nasm and fas? Message-ID: Hi all, I want to learn how to use a nonlinear preconditioner. There are three interesting nonlinear preconditioners, nasm, aspin and fas. But I could not find any examples on the use of the nonlinear preconditioners. Thanks, Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 22 11:54:38 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 11:54:38 -0600 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) In-Reply-To: References: Message-ID: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> You should definitely keep the matrix, PC, and KSP for all 40 of the solves. This will eliminate the matrix and preconditioner setup time for last 39 solves. What kind of operator do you have? ILU(k) is a general purpose preconditioner but not particularly good. I would recommend trying -pc_type gamg next. Barry > On Feb 22, 2015, at 11:00 AM, Daniel Goldberg wrote: > > Hello > > In the code I am developing, there is a point where I will need to solve a linear system with the same matrix but different right hand sides. It is a time-dependent model and the matrix in question will change each timestep, but for a single timestep ~30-40 linear solves will be required, with the same matrix but differing right hand sides. Each right hand side depends on the previous, so they cannot all be solved simultaneously. > > Generally I solve the matrix via CG with Block Jacobi / ILU(0) preconditioning. I don't persist the matrix in between solves (i.e. 
I destroy the Mat, KSP and Vec objects after each linear solve) because I do not get the feeling this makes a large difference to performance and it is easier in dealing with the rest of the model code, which does not use PETSc. The matrix is positive definite with sparsity of 18 nonzero elements per row, and generally the linear system is smaller than 1 million degrees of freedom. > > If my matrix was dense, then likely retaining LU factors for foward/backward substitution would be a good strategy (though I'm not sure if this is easily doable using PETSc direct solvers). But given the sparsity I am unsure of whether I can take advantage of using the same matrix 30-40 times in a row. > > The comments in ex16.c state that when a KSP object is used multiple times for a linear solve, the preconditioner operations are not done each time -- and so I figured that if I changed my preconditioning parameters I might be able to make subsequent solves with the same KSP object faster. I tried an experment in which I increased pc_factor_levels, and then created a number of random vectors, e.g. vec_1, vec_2, vec_3... and called > > KSPSolve(ksp, vec_i, solution_i) > > sequentially with the same ksp object. I did see a decrease in the number of CG iterations required as pc_factor_levels was increased, as well as an increase in time for the first linear solve, but no decrease in time required for subsequent solves. *Should* there be a timing decrease? But more generally is there a way to optimize the fact I'm solving 40 linear systems with the same matrix? > > Apologies for not providing a copy of the relevant bits of code -- this is my first email to the group, and a rather long one already -- happy to be more concrete about anything I've said above. > > Thanks > Dan > > -- > > Daniel Goldberg, PhD > Lecturer in Glaciology > School of Geosciences, University of Edinburgh > Geography Building, Drummond Street, Edinburgh EH8 9XP > > > em: Dan.Goldberg at ed.ac.uk > web: http://ocean.mit.edu/~dgoldberg From bsmith at mcs.anl.gov Sun Feb 22 12:06:21 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 12:06:21 -0600 Subject: [petsc-users] any examples about aspin, nasm and fas? In-Reply-To: References: Message-ID: <8DD3AA15-9510-4BD3-A3D2-2AC12AA0A03B@mcs.anl.gov> They are discussed a little in the paper http://www.mcs.anl.gov/publication/composing-scalable-nonlinear-algebraic-solvers and have some manual pages, plus you can take a look at the source code but otherwise you are on your own. Barry > On Feb 22, 2015, at 11:37 AM, Fande Kong wrote: > > Hi all, > > > I want to learn how to use a nonlinear preconditioner. There are three interesting nonlinear preconditioners, nasm, aspin and fas. But I could not find any examples on the use of the nonlinear preconditioners. > > Thanks, > > Fande, From bsmith at mcs.anl.gov Sun Feb 22 12:13:05 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 12:13:05 -0600 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: Message-ID: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> David, This is an obscure little feature of MatMPIAIJ, each time you change the sparsity pattern before you call the MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private PETSc function so you need to provide your own prototype for it above the function you use it in. Let us know if this resolves the problem. 
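A minimal sketch of that workaround, assuming an already-assembled MPIAIJ Jacobian J and new per-row counts d_nnz/o_nnz; the helper itself is hypothetical, and PETSC_EXTERN is used here only to get a C-linkage declaration for the private routine:

    #include <petscmat.h>

    /* User-supplied prototype for the private PETSc routine, as suggested above. */
    PETSC_EXTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat);

    /* Call only after the matrix has been preallocated, filled and assembled at
       least once with the old pattern; not for SeqAIJ matrices. */
    PetscErrorCode reset_mpiaij_preallocation(Mat J, const PetscInt d_nnz[], const PetscInt o_nnz[])
    {
      PetscErrorCode ierr;
      ierr = MatDisAssemble_MPIAIJ(J);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(J, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
      return 0;
    }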
Barry We never really intended that people would call MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > On Feb 22, 2015, at 6:50 AM, David Knezevic wrote: > > Hi all, > > I've implemented a solver for a contact problem using SNES. The sparsity pattern of the jacobian matrix needs to change at each nonlinear iteration (because the elements which are in contact can change), so I tried to deal with this by calling MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation during each iteration in order to update the preallocation. > > This seems to work fine in serial, but with two or more MPI processes I run into the error "nnz cannot be greater than row length", e.g.: > nnz cannot be greater than row length: local row 528 value 12 rowlength 0 > > This error is from the call to > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in MatMPIAIJSetPreallocation_MPIAIJ. > > Any guidance on what the problem might be would be most appreciated. For example, I was wondering if there is a problem with calling SetPreallocation on a matrix that has already been preallocated? > > Some notes: > - I'm using PETSc via libMesh > - The code that triggers this issue is available as a PR on the libMesh github repo, in case anyone is interested: https://github.com/libMesh/libmesh/pull/460/ > - I can try to make a minimal pure-PETSc example that reproduces this error, if that would be helpful. > > Many thanks, > David > From dngoldberg at gmail.com Sun Feb 22 12:14:31 2015 From: dngoldberg at gmail.com (Daniel Goldberg) Date: Sun, 22 Feb 2015 18:14:31 +0000 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) In-Reply-To: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> References: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> Message-ID: Hi Barry Thank you for the reply. From a test I did with a linear system of about 400 000 on 128 cores, i clocked that the setup takes about 2% of the time of the solve -- but it will be a larger fraction with smaller systems. If you mean the differential operator -- it is elliptic. The equations are based on those of a 3D shear-thinning stokes fluid, but simplified by various approximations including small aspect ratio, so the equations are essentially two-dimensional. (sometimes called the Shallow Shelf Approximation and similar to the equations solved by http://onlinelibrary.wiley.com/doi/10.1029/2008JF001179/abstract). The linear system I solve is a step in an iterative solution to the nonlinear equations. I will try gamg as I know it can reduce the number of CG iterations required. (I'm guessing you mean algebraic, not geometric?) Am I correct in thinking that, for subsequent solves with the same KSP object, the time of the solve should scale with the number of conj grad iterations? Will this time be relatively independent of the preconditioner type? Thanks Dan On Sun, Feb 22, 2015 at 5:54 PM, Barry Smith wrote: > > You should definitely keep the matrix, PC, and KSP for all 40 of the > solves. This will eliminate the matrix and preconditioner setup time for > last 39 solves. > > What kind of operator do you have? ILU(k) is a general purpose > preconditioner but not particularly good. I would recommend trying -pc_type > gamg next. > > Barry > > > > On Feb 22, 2015, at 11:00 AM, Daniel Goldberg > wrote: > > > > Hello > > > > In the code I am developing, there is a point where I will need to solve > a linear system with the same matrix but different right hand sides. 
It is > a time-dependent model and the matrix in question will change each > timestep, but for a single timestep ~30-40 linear solves will be required, > with the same matrix but differing right hand sides. Each right hand side > depends on the previous, so they cannot all be solved simultaneously. > > > > Generally I solve the matrix via CG with Block Jacobi / ILU(0) > preconditioning. I don't persist the matrix in between solves (i.e. I > destroy the Mat, KSP and Vec objects after each linear solve) because I do > not get the feeling this makes a large difference to performance and it is > easier in dealing with the rest of the model code, which does not use > PETSc. The matrix is positive definite with sparsity of 18 nonzero elements > per row, and generally the linear system is smaller than 1 million degrees > of freedom. > > > > If my matrix was dense, then likely retaining LU factors for > foward/backward substitution would be a good strategy (though I'm not sure > if this is easily doable using PETSc direct solvers). But given the > sparsity I am unsure of whether I can take advantage of using the same > matrix 30-40 times in a row. > > > > The comments in ex16.c state that when a KSP object is used multiple > times for a linear solve, the preconditioner operations are not done each > time -- and so I figured that if I changed my preconditioning parameters I > might be able to make subsequent solves with the same KSP object faster. I > tried an experment in which I increased pc_factor_levels, and then created > a number of random vectors, e.g. vec_1, vec_2, vec_3... and called > > > > KSPSolve(ksp, vec_i, solution_i) > > > > sequentially with the same ksp object. I did see a decrease in the > number of CG iterations required as pc_factor_levels was increased, as well > as an increase in time for the first linear solve, but no decrease in time > required for subsequent solves. *Should* there be a timing decrease? But > more generally is there a way to optimize the fact I'm solving 40 linear > systems with the same matrix? > > > > Apologies for not providing a copy of the relevant bits of code -- this > is my first email to the group, and a rather long one already -- happy to > be more concrete about anything I've said above. > > > > Thanks > > Dan > > > > -- > > > > Daniel Goldberg, PhD > > Lecturer in Glaciology > > School of Geosciences, University of Edinburgh > > Geography Building, Drummond Street, Edinburgh EH8 9XP > > > > > > em: Dan.Goldberg at ed.ac.uk > > web: http://ocean.mit.edu/~dgoldberg > > -- Daniel Goldberg, PhD Lecturer in Glaciology School of Geosciences, University of Edinburgh Geography Building, Drummond Street, Edinburgh EH8 9XP em: D an.Goldberg at ed.ac.uk web: http://ocean.mit.edu/~dgoldberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Feb 22 12:15:40 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 22 Feb 2015 12:15:40 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Hi > > I noticed that the routine DMNetworkGetEdgeRange() returns the local > indices for the edge range. Is there any way to obtain the global indices? > So if my network has 10 edges, the processor 1 has the 0-4 edges and the > processor 2, the 5-9 edges, how can I obtain this information? > One of the points of DMPlex is we do not require a global numbering. 
Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. Thanks, Matt > Thanks > Miguel > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 22 12:30:34 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 12:30:34 -0600 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) In-Reply-To: References: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> Message-ID: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> > On Feb 22, 2015, at 12:14 PM, Daniel Goldberg wrote: > > Hi Barry > > Thank you for the reply. From a test I did with a linear system of about 400 000 on 128 cores, i clocked that the setup takes about 2% of the time of the solve This is because the preconditioner is very fast to build. GAMG takes longer to build but will give much faster convergence for each solve. > -- but it will be a larger fraction with smaller systems. > > If you mean the differential operator -- it is elliptic. The equations are based on those of a 3D shear-thinning stokes fluid, but simplified by various approximations including small aspect ratio, so the equations are essentially two-dimensional. (sometimes called the Shallow Shelf Approximation and similar to the equations solved by http://onlinelibrary.wiley.com/doi/10.1029/2008JF001179/abstract). The linear system I solve is a step in an iterative solution to the nonlinear equations. > > I will try gamg as I know it can reduce the number of CG iterations required. (I'm guessing you mean algebraic, not geometric?) By default GAMG is an algebraic multigrid preconditioner. Look at its documentation at http://www.mcs.anl.gov/petsc/petsc-master/docs/index.html it will be a bit better than the older documentation. The documentation for GAMG is still pretty thin so feel free to ask questions. Start with -pc_type gamg -ksp_monitor -ksp_type cg -ksp_norm_type unpreconditioned > > Am I correct in thinking that, for subsequent solves with the same KSP object, the time of the solve should scale with the number of conj grad iterations? Yes > Will this time be relatively independent of the preconditioner type? No, it will be much faster for some preconditioners than others. Barry > > Thanks > Dan > > On Sun, Feb 22, 2015 at 5:54 PM, Barry Smith wrote: > > You should definitely keep the matrix, PC, and KSP for all 40 of the solves. This will eliminate the matrix and preconditioner setup time for last 39 solves. > > What kind of operator do you have? ILU(k) is a general purpose preconditioner but not particularly good. I would recommend trying -pc_type gamg next. 
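In code, keeping one Mat/PC/KSP for all the right-hand sides amounts to something like the following sketch; A, b[], x[] and nrhs are placeholders, and assembly of the matrix and vectors is omitted:

    #include <petscksp.h>

    /* One KSP per timestep: the preconditioner is built during the first
       KSPSolve() and reused for the remaining right-hand sides. */
    PetscErrorCode solve_many_rhs(Mat A, Vec b[], Vec x[], PetscInt nrhs)
    {
      KSP            ksp;
      PetscInt       i;
      PetscErrorCode ierr;

      ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
      ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* e.g. -pc_type gamg from the command line */
      for (i = 0; i < nrhs; i++) {
        ierr = KSPSolve(ksp, b[i], x[i]);CHKERRQ(ierr);
      }
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }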
> > Barry > > > > On Feb 22, 2015, at 11:00 AM, Daniel Goldberg wrote: > > > > Hello > > > > In the code I am developing, there is a point where I will need to solve a linear system with the same matrix but different right hand sides. It is a time-dependent model and the matrix in question will change each timestep, but for a single timestep ~30-40 linear solves will be required, with the same matrix but differing right hand sides. Each right hand side depends on the previous, so they cannot all be solved simultaneously. > > > > Generally I solve the matrix via CG with Block Jacobi / ILU(0) preconditioning. I don't persist the matrix in between solves (i.e. I destroy the Mat, KSP and Vec objects after each linear solve) because I do not get the feeling this makes a large difference to performance and it is easier in dealing with the rest of the model code, which does not use PETSc. The matrix is positive definite with sparsity of 18 nonzero elements per row, and generally the linear system is smaller than 1 million degrees of freedom. > > > > If my matrix was dense, then likely retaining LU factors for foward/backward substitution would be a good strategy (though I'm not sure if this is easily doable using PETSc direct solvers). But given the sparsity I am unsure of whether I can take advantage of using the same matrix 30-40 times in a row. > > > > The comments in ex16.c state that when a KSP object is used multiple times for a linear solve, the preconditioner operations are not done each time -- and so I figured that if I changed my preconditioning parameters I might be able to make subsequent solves with the same KSP object faster. I tried an experment in which I increased pc_factor_levels, and then created a number of random vectors, e.g. vec_1, vec_2, vec_3... and called > > > > KSPSolve(ksp, vec_i, solution_i) > > > > sequentially with the same ksp object. I did see a decrease in the number of CG iterations required as pc_factor_levels was increased, as well as an increase in time for the first linear solve, but no decrease in time required for subsequent solves. *Should* there be a timing decrease? But more generally is there a way to optimize the fact I'm solving 40 linear systems with the same matrix? > > > > Apologies for not providing a copy of the relevant bits of code -- this is my first email to the group, and a rather long one already -- happy to be more concrete about anything I've said above. > > > > Thanks > > Dan > > > > -- > > > > Daniel Goldberg, PhD > > Lecturer in Glaciology > > School of Geosciences, University of Edinburgh > > Geography Building, Drummond Street, Edinburgh EH8 9XP > > > > > > em: Dan.Goldberg at ed.ac.uk > > web: http://ocean.mit.edu/~dgoldberg > > > > > -- > > Daniel Goldberg, PhD > Lecturer in Glaciology > School of Geosciences, University of Edinburgh > Geography Building, Drummond Street, Edinburgh EH8 9XP > > > em: Dan.Goldberg at ed.ac.uk > web: http://ocean.mit.edu/~dgoldberg From salazardetroya at gmail.com Sun Feb 22 15:59:44 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Sun, 22 Feb 2015 15:59:44 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? 
Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley wrote: > On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Hi >> >> I noticed that the routine DMNetworkGetEdgeRange() returns the local >> indices for the edge range. Is there any way to obtain the global indices? >> So if my network has 10 edges, the processor 1 has the 0-4 edges and the >> processor 2, the 5-9 edges, how can I obtain this information? >> > > One of the points of DMPlex is we do not require a global numbering. > Everything is numbered > locally, and the PetscSF maps local numbers to local numbers in order to > determine ownership. > > If you want to create a global numbering for some reason, you can using > DMPlexCreatePointNumbering(). > There are also cell and vertex versions that we use for output, so you > could do it just for edges as well. > > Thanks, > > Matt > > >> Thanks >> Miguel >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Sun Feb 22 16:09:26 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Sun, 22 Feb 2015 17:09:26 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> Message-ID: Hi Barry, Thanks for your help, much appreciated. I added a prototype for MatDisAssemble_MPIAIJ: PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); and I added a call to MatDisAssemble_MPIAIJ before MatMPIAIJSetPreallocation. However, I get a segfault on the call to MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though I could rebuild PETSc in debug mode if you think that would help figure out what's happening here). Thanks, David On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > David, > > This is an obscure little feature of MatMPIAIJ, each time you change > the sparsity pattern before you call the MatMPIAIJSetPreallocation you need > to call MatDisAssemble_MPIAIJ(Mat mat). This is a private PETSc > function so you need to provide your own prototype for it above the > function you use it in. > > Let us know if this resolves the problem. > > Barry > > We never really intended that people would call > MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic > wrote: > > > > Hi all, > > > > I've implemented a solver for a contact problem using SNES. The sparsity > pattern of the jacobian matrix needs to change at each nonlinear iteration > (because the elements which are in contact can change), so I tried to deal > with this by calling MatSeqAIJSetPreallocation and > MatMPIAIJSetPreallocation during each iteration in order to update the > preallocation. 
> > > > This seems to work fine in serial, but with two or more MPI processes I > run into the error "nnz cannot be greater than row length", e.g.: > > nnz cannot be greater than row length: local row 528 value 12 rowlength 0 > > > > This error is from the call to > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in > MatMPIAIJSetPreallocation_MPIAIJ. > > > > Any guidance on what the problem might be would be most appreciated. For > example, I was wondering if there is a problem with calling > SetPreallocation on a matrix that has already been preallocated? > > > > Some notes: > > - I'm using PETSc via libMesh > > - The code that triggers this issue is available as a PR on the libMesh > github repo, in case anyone is interested: > https://github.com/libMesh/libmesh/pull/460/ > > - I can try to make a minimal pure-PETSc example that reproduces this > error, if that would be helpful. > > > > Many thanks, > > David > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Sun Feb 22 16:22:16 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Sun, 22 Feb 2015 17:22:16 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> Message-ID: P.S. Typo correction: I meant to say "I'm using a non-debug build". David On Sun, Feb 22, 2015 at 5:09 PM, David Knezevic wrote: > Hi Barry, > > Thanks for your help, much appreciated. > > I added a prototype for MatDisAssemble_MPIAIJ: > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > and I added a call to MatDisAssemble_MPIAIJ before > MatMPIAIJSetPreallocation. However, I get a segfault on the call > to MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though I > could rebuild PETSc in debug mode if you think that would help figure out > what's happening here). > > Thanks, > David > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > > >> David, >> >> This is an obscure little feature of MatMPIAIJ, each time you change >> the sparsity pattern before you call the MatMPIAIJSetPreallocation you need >> to call MatDisAssemble_MPIAIJ(Mat mat). This is a private PETSc >> function so you need to provide your own prototype for it above the >> function you use it in. >> >> Let us know if this resolves the problem. >> >> Barry >> >> We never really intended that people would call >> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >> >> >> > On Feb 22, 2015, at 6:50 AM, David Knezevic >> wrote: >> > >> > Hi all, >> > >> > I've implemented a solver for a contact problem using SNES. The >> sparsity pattern of the jacobian matrix needs to change at each nonlinear >> iteration (because the elements which are in contact can change), so I >> tried to deal with this by calling MatSeqAIJSetPreallocation and >> MatMPIAIJSetPreallocation during each iteration in order to update the >> preallocation. >> > >> > This seems to work fine in serial, but with two or more MPI processes I >> run into the error "nnz cannot be greater than row length", e.g.: >> > nnz cannot be greater than row length: local row 528 value 12 rowlength >> 0 >> > >> > This error is from the call to >> > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >> MatMPIAIJSetPreallocation_MPIAIJ. >> > >> > Any guidance on what the problem might be would be most appreciated. 
>> For example, I was wondering if there is a problem with calling >> SetPreallocation on a matrix that has already been preallocated? >> > >> > Some notes: >> > - I'm using PETSc via libMesh >> > - The code that triggers this issue is available as a PR on the libMesh >> github repo, in case anyone is interested: >> https://github.com/libMesh/libmesh/pull/460/ >> > - I can try to make a minimal pure-PETSc example that reproduces this >> error, if that would be helpful. >> > >> > Many thanks, >> > David >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 22 16:45:39 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 16:45:39 -0600 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> Message-ID: <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> Do not call for SeqAIJ matrix. Do not call before the first time you have preallocated and put entries in the matrix and done the MatAssemblyBegin/End() If it still crashes you'll need to try the debugger Barry > On Feb 22, 2015, at 4:09 PM, David Knezevic wrote: > > Hi Barry, > > Thanks for your help, much appreciated. > > I added a prototype for MatDisAssemble_MPIAIJ: > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > and I added a call to MatDisAssemble_MPIAIJ before MatMPIAIJSetPreallocation. However, I get a segfault on the call to MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though I could rebuild PETSc in debug mode if you think that would help figure out what's happening here). > > Thanks, > David > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > > David, > > This is an obscure little feature of MatMPIAIJ, each time you change the sparsity pattern before you call the MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private PETSc function so you need to provide your own prototype for it above the function you use it in. > > Let us know if this resolves the problem. > > Barry > > We never really intended that people would call MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic wrote: > > > > Hi all, > > > > I've implemented a solver for a contact problem using SNES. The sparsity pattern of the jacobian matrix needs to change at each nonlinear iteration (because the elements which are in contact can change), so I tried to deal with this by calling MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation during each iteration in order to update the preallocation. > > > > This seems to work fine in serial, but with two or more MPI processes I run into the error "nnz cannot be greater than row length", e.g.: > > nnz cannot be greater than row length: local row 528 value 12 rowlength 0 > > > > This error is from the call to > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in MatMPIAIJSetPreallocation_MPIAIJ. > > > > Any guidance on what the problem might be would be most appreciated. For example, I was wondering if there is a problem with calling SetPreallocation on a matrix that has already been preallocated? 
> > > > Some notes: > > - I'm using PETSc via libMesh > > - The code that triggers this issue is available as a PR on the libMesh github repo, in case anyone is interested: https://github.com/libMesh/libmesh/pull/460/ > > - I can try to make a minimal pure-PETSc example that reproduces this error, if that would be helpful. > > > > Many thanks, > > David > > > > From david.knezevic at akselos.com Sun Feb 22 16:58:47 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Sun, 22 Feb 2015 17:58:47 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> Message-ID: Thanks, that helps! After fixing that, now I get this error: [1]PETSC ERROR: Petsc has generated inconsistent data [1]PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray Any suggestions about what may be wrong now? I'll try the debugger tomorrow. Thanks, David On Sun, Feb 22, 2015 at 5:45 PM, Barry Smith wrote: > > Do not call for SeqAIJ matrix. Do not call before the first time you have > preallocated and put entries in the matrix and done the > MatAssemblyBegin/End() > > If it still crashes you'll need to try the debugger > > Barry > > > On Feb 22, 2015, at 4:09 PM, David Knezevic > wrote: > > > > Hi Barry, > > > > Thanks for your help, much appreciated. > > > > I added a prototype for MatDisAssemble_MPIAIJ: > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > > > and I added a call to MatDisAssemble_MPIAIJ before > MatMPIAIJSetPreallocation. However, I get a segfault on the call to > MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though > I could rebuild PETSc in debug mode if you think that would help figure out > what's happening here). > > > > Thanks, > > David > > > > > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > > > > David, > > > > This is an obscure little feature of MatMPIAIJ, each time you > change the sparsity pattern before you call the MatMPIAIJSetPreallocation > you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private > PETSc function so you need to provide your own prototype for it above the > function you use it in. > > > > Let us know if this resolves the problem. > > > > Barry > > > > We never really intended that people would call > MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < > david.knezevic at akselos.com> wrote: > > > > > > Hi all, > > > > > > I've implemented a solver for a contact problem using SNES. The > sparsity pattern of the jacobian matrix needs to change at each nonlinear > iteration (because the elements which are in contact can change), so I > tried to deal with this by calling MatSeqAIJSetPreallocation and > MatMPIAIJSetPreallocation during each iteration in order to update the > preallocation. > > > > > > This seems to work fine in serial, but with two or more MPI processes > I run into the error "nnz cannot be greater than row length", e.g.: > > > nnz cannot be greater than row length: local row 528 value 12 > rowlength 0 > > > > > > This error is from the call to > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in > MatMPIAIJSetPreallocation_MPIAIJ. > > > > > > Any guidance on what the problem might be would be most appreciated. 
> For example, I was wondering if there is a problem with calling > SetPreallocation on a matrix that has already been preallocated? > > > > > > Some notes: > > > - I'm using PETSc via libMesh > > > - The code that triggers this issue is available as a PR on the > libMesh github repo, in case anyone is interested: > https://github.com/libMesh/libmesh/pull/460/ > > > - I can try to make a minimal pure-PETSc example that reproduces this > error, if that would be helpful. > > > > > > Many thanks, > > > David > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkarpeev at gmail.com Sun Feb 22 17:02:20 2015 From: dkarpeev at gmail.com (Dmitry Karpeyev) Date: Sun, 22 Feb 2015 23:02:20 +0000 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> Message-ID: David, It might be easier to just rebuild the whole matrix from scratch: you would in effect be doing all that with disassembling and resetting the preallocation. MatSetType(mat,MATMPIAIJ) or PetscObjectGetType((PetscObject)mat,&type); MatSetType(mat,type); followed by MatXAIJSetPreallocation(...); should do. Dmitry. On Sun Feb 22 2015 at 4:45:46 PM Barry Smith wrote: > > Do not call for SeqAIJ matrix. Do not call before the first time you have > preallocated and put entries in the matrix and done the > MatAssemblyBegin/End() > > If it still crashes you'll need to try the debugger > > Barry > > > On Feb 22, 2015, at 4:09 PM, David Knezevic > wrote: > > > > Hi Barry, > > > > Thanks for your help, much appreciated. > > > > I added a prototype for MatDisAssemble_MPIAIJ: > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > > > and I added a call to MatDisAssemble_MPIAIJ before > MatMPIAIJSetPreallocation. However, I get a segfault on the call to > MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though > I could rebuild PETSc in debug mode if you think that would help figure out > what's happening here). > > > > Thanks, > > David > > > > > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > > > > David, > > > > This is an obscure little feature of MatMPIAIJ, each time you > change the sparsity pattern before you call the MatMPIAIJSetPreallocation > you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private > PETSc function so you need to provide your own prototype for it above the > function you use it in. > > > > Let us know if this resolves the problem. > > > > Barry > > > > We never really intended that people would call > MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < > david.knezevic at akselos.com> wrote: > > > > > > Hi all, > > > > > > I've implemented a solver for a contact problem using SNES. The > sparsity pattern of the jacobian matrix needs to change at each nonlinear > iteration (because the elements which are in contact can change), so I > tried to deal with this by calling MatSeqAIJSetPreallocation and > MatMPIAIJSetPreallocation during each iteration in order to update the > preallocation. 
> > > > > > This seems to work fine in serial, but with two or more MPI processes > I run into the error "nnz cannot be greater than row length", e.g.: > > > nnz cannot be greater than row length: local row 528 value 12 > rowlength 0 > > > > > > This error is from the call to > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in > MatMPIAIJSetPreallocation_MPIAIJ. > > > > > > Any guidance on what the problem might be would be most appreciated. > For example, I was wondering if there is a problem with calling > SetPreallocation on a matrix that has already been preallocated? > > > > > > Some notes: > > > - I'm using PETSc via libMesh > > > - The code that triggers this issue is available as a PR on the > libMesh github repo, in case anyone is interested: > https://github.com/libMesh/libmesh/pull/460/ > > > - I can try to make a minimal pure-PETSc example that reproduces this > error, if that would be helpful. > > > > > > Many thanks, > > > David > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Feb 22 17:58:21 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 17:58:21 -0600 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> Message-ID: <49C3A6D3-E259-42F6-9D35-9B4A616D22E5@mcs.anl.gov> Yeah try this. > On Feb 22, 2015, at 5:02 PM, Dmitry Karpeyev wrote: > > David, > It might be easier to just rebuild the whole matrix from scratch: you would in effect be doing all that with disassembling and resetting the preallocation. > MatSetType(mat,MATMPIAIJ) > or > PetscObjectGetType((PetscObject)mat,&type); > MatSetType(mat,type); > followed by > MatXAIJSetPreallocation(...); > should do. > Dmitry. > > > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith wrote: > > Do not call for SeqAIJ matrix. Do not call before the first time you have preallocated and put entries in the matrix and done the MatAssemblyBegin/End() > > If it still crashes you'll need to try the debugger > > Barry > > > On Feb 22, 2015, at 4:09 PM, David Knezevic wrote: > > > > Hi Barry, > > > > Thanks for your help, much appreciated. > > > > I added a prototype for MatDisAssemble_MPIAIJ: > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > > > and I added a call to MatDisAssemble_MPIAIJ before MatMPIAIJSetPreallocation. However, I get a segfault on the call to MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though I could rebuild PETSc in debug mode if you think that would help figure out what's happening here). > > > > Thanks, > > David > > > > > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > > > > David, > > > > This is an obscure little feature of MatMPIAIJ, each time you change the sparsity pattern before you call the MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private PETSc function so you need to provide your own prototype for it above the function you use it in. > > > > Let us know if this resolves the problem. > > > > Barry > > > > We never really intended that people would call MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic wrote: > > > > > > Hi all, > > > > > > I've implemented a solver for a contact problem using SNES. 
The sparsity pattern of the jacobian matrix needs to change at each nonlinear iteration (because the elements which are in contact can change), so I tried to deal with this by calling MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation during each iteration in order to update the preallocation. > > > > > > This seems to work fine in serial, but with two or more MPI processes I run into the error "nnz cannot be greater than row length", e.g.: > > > nnz cannot be greater than row length: local row 528 value 12 rowlength 0 > > > > > > This error is from the call to > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in MatMPIAIJSetPreallocation_MPIAIJ. > > > > > > Any guidance on what the problem might be would be most appreciated. For example, I was wondering if there is a problem with calling SetPreallocation on a matrix that has already been preallocated? > > > > > > Some notes: > > > - I'm using PETSc via libMesh > > > - The code that triggers this issue is available as a PR on the libMesh github repo, in case anyone is interested: https://github.com/libMesh/libmesh/pull/460/ > > > - I can try to make a minimal pure-PETSc example that reproduces this error, if that would be helpful. > > > > > > Many thanks, > > > David > > > > > > > > From jychang48 at gmail.com Sun Feb 22 19:05:04 2015 From: jychang48 at gmail.com (Justin Chang) Date: Sun, 22 Feb 2015 19:05:04 -0600 Subject: [petsc-users] FE discretization in DMPlex In-Reply-To: References: <896C129B-43B0-4E5C-A5B7-ADC604E34892@gmail.com> Message-ID: Hi Matt, Bringing this thread back from the dead. 1) Have you had the chance to implement things like RT and DG in DMPlex? 2) Are examples/tests that illustrate how to do dualspaces? 3) Or quantities like cell size h, jump, average? I was originally trying to implement DG and RT0 in FEniCS but I am having lots of trouble getting the FEniCS code to scale on our university's clusters, so that's why I want to attempt going back to PETSc's DMPlex to do strong scaling studies. Thanks, Justin On Sat, Sep 6, 2014 at 3:58 AM, Matthew Knepley wrote: > On Fri, Sep 5, 2014 at 10:55 PM, Justin Chang wrote: > >> Hi all, >> >> So I understand how the FEM code works in the DMPlex examples (ex12 and >> 62). Pardon me if this is a silly question. >> >> 1) If I wanted to solve either the poisson or stokes using the >> discontinuous Galerkin method, is there a way to do this with the built-in >> DMPlex/FEM functions? Basically each cell/element has its own set of >> degrees of freedom, and jump/average operations would be needed to >> "connect" the dofs across element interfaces. >> >> 2) Or how about using something like Raviart-Thomas spaces (we'll say >> lowest order for simplicity). Where the velocity dofs are not nodal >> quantities, instead they are denoted by edge fluxes (or face fluxes for >> tetrahedrals). Pressure would be piecewise constant. >> >> Intuitively these should be doable if I were to write my own >> DMPlex/PetscSection code, but I was wondering if the above two >> discretizations are achievable in the way ex12 and ex62 are. >> > > Lets do RT first since its easier. The primal space is > > P_K = Poly_{q--1}(K) + x Poly_{q-1}(K) > > so at lowest order its just Poly_1. The dual space is moments of the > normal component > of velocity on the edges. So you would write a dual space where the > functionals integrated > the normal component. 
This is the tricky part: > > http://www.math.chalmers.se/~logg/pub/papers/KirbyLoggEtAl2010a.pdf > > DG is just a generalization of this kind of thing where you need to a) > have some geometric > quantities available to the pointwise functions (like h), and also some > field quantities (like > the jump and average). > > I understand exactly how I want to do the RT, BDM, BDMF, and NED elements, > and those > will be in soon. I think DG is fairly messy and am not completely sure > what I want here. > > Matt > > >> Thanks, >> Justin >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Feb 22 20:11:49 2015 From: jed at jedbrown.org (Jed Brown) Date: Sun, 22 Feb 2015 19:11:49 -0700 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) In-Reply-To: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> References: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> Message-ID: <87zj85idga.fsf@jedbrown.org> Barry Smith writes: >> I will try gamg as I know it can reduce the number of CG iterations required. (I'm guessing you mean algebraic, not geometric?) > > By default GAMG is an algebraic multigrid preconditioner. Look at its documentation at http://www.mcs.anl.gov/petsc/petsc-master/docs/index.html it will be a bit better than the older documentation. The documentation for GAMG is still pretty thin so feel free to ask questions. You might want to call MatSetBlockSize and MatSetNearNullSpace to define both translation and rotation modes. This is most relevant for large ice shelves. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From david.knezevic at akselos.com Sun Feb 22 21:09:04 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Sun, 22 Feb 2015 22:09:04 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> Message-ID: Hi Dmitry, Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) followed by MatXAIJSetPreallocation(...), but unfortunately this still gives me the same error as before: "nnz cannot be greater than row length: local row 168 value 24 rowlength 0". I gather that the idea here is that MatSetType builds a new matrix object, and then I should be able to pre-allocate for that new matrix however I like, right? Was I supposed to clear the matrix object somehow before calling MatSetType? (I didn't do any sort of clear operation.) As I said earlier, I'll make a dbg PETSc build, so hopefully that will help shed some light on what's going wrong for me. Thanks, David On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev wrote: > David, > It might be easier to just rebuild the whole matrix from scratch: you > would in effect be doing all that with disassembling and resetting the > preallocation. > MatSetType(mat,MATMPIAIJ) > or > PetscObjectGetType((PetscObject)mat,&type); > MatSetType(mat,type); > followed by > MatXAIJSetPreallocation(...); > should do. > Dmitry. 
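For concreteness, the MatXAIJSetPreallocation(...) call that the recipe above abbreviates takes the block size and the new per-row nonzero counts. A minimal sketch, assuming an AIJ matrix (block size 1); "A", "dnnz" and "onnz" are illustrative names for the matrix being reset and its new per-row diagonal/off-diagonal counts, not names from the thread, and the last two arguments are only used by the symmetric SBAIJ formats:

    PetscErrorCode ierr;
    /* dnnz[i]/onnz[i]: new diagonal/off-diagonal nonzero counts for local row i */
    ierr = MatXAIJSetPreallocation(A,1,dnnz,onnz,NULL,NULL);CHKERRQ(ierr);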
> > > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith wrote: > >> >> Do not call for SeqAIJ matrix. Do not call before the first time you >> have preallocated and put entries in the matrix and done the >> MatAssemblyBegin/End() >> >> If it still crashes you'll need to try the debugger >> >> Barry >> >> > On Feb 22, 2015, at 4:09 PM, David Knezevic >> wrote: >> > >> > Hi Barry, >> > >> > Thanks for your help, much appreciated. >> > >> > I added a prototype for MatDisAssemble_MPIAIJ: >> > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >> > >> > and I added a call to MatDisAssemble_MPIAIJ before >> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >> > >> > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though >> I could rebuild PETSc in debug mode if you think that would help figure out >> what's happening here). >> > >> > Thanks, >> > David >> > >> > >> > >> > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >> wrote: >> > >> > David, >> > >> > This is an obscure little feature of MatMPIAIJ, each time you >> change the sparsity pattern before you call the MatMPIAIJSetPreallocation >> you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private >> PETSc function so you need to provide your own prototype for it above the >> function you use it in. >> > >> > Let us know if this resolves the problem. >> > >> > Barry >> > >> > We never really intended that people would call >> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >> > >> > >> > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >> david.knezevic at akselos.com> wrote: >> > > >> > > Hi all, >> > > >> > > I've implemented a solver for a contact problem using SNES. The >> sparsity pattern of the jacobian matrix needs to change at each nonlinear >> iteration (because the elements which are in contact can change), so I >> tried to deal with this by calling MatSeqAIJSetPreallocation and >> MatMPIAIJSetPreallocation during each iteration in order to update the >> preallocation. >> > > >> > > This seems to work fine in serial, but with two or more MPI processes >> I run into the error "nnz cannot be greater than row length", e.g.: >> > > nnz cannot be greater than row length: local row 528 value 12 >> rowlength 0 >> > > >> > > This error is from the call to >> > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >> MatMPIAIJSetPreallocation_MPIAIJ. >> > > >> > > Any guidance on what the problem might be would be most appreciated. >> For example, I was wondering if there is a problem with calling >> SetPreallocation on a matrix that has already been preallocated? >> > > >> > > Some notes: >> > > - I'm using PETSc via libMesh >> > > - The code that triggers this issue is available as a PR on the >> libMesh github repo, in case anyone is interested: >> https://github.com/libMesh/libmesh/pull/460/ >> > > - I can try to make a minimal pure-PETSc example that reproduces this >> error, if that would be helpful. >> > > >> > > Many thanks, >> > > David >> > > >> > >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Sun Feb 22 21:15:17 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Feb 2015 21:15:17 -0600 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> Message-ID: <7AB307B8-8F93-4DDF-B0FB-F5718751AF0E@mcs.anl.gov> > On Feb 22, 2015, at 9:09 PM, David Knezevic wrote: > > Hi Dmitry, > > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) followed by MatXAIJSetPreallocation(...), but unfortunately this still gives me the same error as before: "nnz cannot be greater than row length: local row 168 value 24 rowlength 0". > > I gather that the idea here is that MatSetType builds a new matrix object, and then I should be able to pre-allocate for that new matrix however I like, right? Was I supposed to clear the matrix object somehow before calling MatSetType? (I didn't do any sort of clear operation.) If the type doesn't change then MatSetType() won't do anything. You can try setting the type to BAIJ and then setting the type back to AIJ. This may/should clear out the matrix. > > As I said earlier, I'll make a dbg PETSc build, so hopefully that will help shed some light on what's going wrong for me. Don't bother, what I suggested won't work. Barry > > Thanks, > David > > > > > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev wrote: > David, > It might be easier to just rebuild the whole matrix from scratch: you would in effect be doing all that with disassembling and resetting the preallocation. > MatSetType(mat,MATMPIAIJ) > or > PetscObjectGetType((PetscObject)mat,&type); > MatSetType(mat,type); > followed by > MatXAIJSetPreallocation(...); > should do. > Dmitry. > > > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith wrote: > > Do not call for SeqAIJ matrix. Do not call before the first time you have preallocated and put entries in the matrix and done the MatAssemblyBegin/End() > > If it still crashes you'll need to try the debugger > > Barry > > > On Feb 22, 2015, at 4:09 PM, David Knezevic wrote: > > > > Hi Barry, > > > > Thanks for your help, much appreciated. > > > > I added a prototype for MatDisAssemble_MPIAIJ: > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > > > and I added a call to MatDisAssemble_MPIAIJ before MatMPIAIJSetPreallocation. However, I get a segfault on the call to MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build (though I could rebuild PETSc in debug mode if you think that would help figure out what's happening here). > > > > Thanks, > > David > > > > > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith wrote: > > > > David, > > > > This is an obscure little feature of MatMPIAIJ, each time you change the sparsity pattern before you call the MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private PETSc function so you need to provide your own prototype for it above the function you use it in. > > > > Let us know if this resolves the problem. > > > > Barry > > > > We never really intended that people would call MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic wrote: > > > > > > Hi all, > > > > > > I've implemented a solver for a contact problem using SNES. 
The sparsity pattern of the jacobian matrix needs to change at each nonlinear iteration (because the elements which are in contact can change), so I tried to deal with this by calling MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation during each iteration in order to update the preallocation. > > > > > > This seems to work fine in serial, but with two or more MPI processes I run into the error "nnz cannot be greater than row length", e.g.: > > > nnz cannot be greater than row length: local row 528 value 12 rowlength 0 > > > > > > This error is from the call to > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in MatMPIAIJSetPreallocation_MPIAIJ. > > > > > > Any guidance on what the problem might be would be most appreciated. For example, I was wondering if there is a problem with calling SetPreallocation on a matrix that has already been preallocated? > > > > > > Some notes: > > > - I'm using PETSc via libMesh > > > - The code that triggers this issue is available as a PR on the libMesh github repo, in case anyone is interested: https://github.com/libMesh/libmesh/pull/460/ > > > - I can try to make a minimal pure-PETSc example that reproduces this error, if that would be helpful. > > > > > > Many thanks, > > > David > > > > > > > > > From dkarpeev at gmail.com Sun Feb 22 21:22:05 2015 From: dkarpeev at gmail.com (Dmitry Karpeyev) Date: Mon, 23 Feb 2015 03:22:05 +0000 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> <7AB307B8-8F93-4DDF-B0FB-F5718751AF0E@mcs.anl.gov> Message-ID: On Sun Feb 22 2015 at 9:15:22 PM Barry Smith wrote: > > > On Feb 22, 2015, at 9:09 PM, David Knezevic > wrote: > > > > Hi Dmitry, > > > > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) followed by > MatXAIJSetPreallocation(...), but unfortunately this still gives me the > same error as before: "nnz cannot be greater than row length: local row 168 > value 24 rowlength 0". > > > > I gather that the idea here is that MatSetType builds a new matrix > object, and then I should be able to pre-allocate for that new matrix > however I like, right? Was I supposed to clear the matrix object somehow > before calling MatSetType? (I didn't do any sort of clear operation.) > > If the type doesn't change then MatSetType() won't do anything. You can > try setting the type to BAIJ and then setting the type back to AIJ. This > may/should clear out the matrix. > Ah, yes. If the type is the same as before it does quit early, but changing the type and then back will clear out and rebuild the matrix. We need something like MatReset() to do the equivalent thing. > > > > > As I said earlier, I'll make a dbg PETSc build, so hopefully that will > help shed some light on what's going wrong for me. > I think it's always a good idea to have a dbg build of PETSc when you doing things like these. Dmitry. > > Don't bother, what I suggested won't work. > > Barry > > > > > > Thanks, > > David > > > > > > > > > > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev > wrote: > > David, > > It might be easier to just rebuild the whole matrix from scratch: you > would in effect be doing all that with disassembling and resetting the > preallocation. > > MatSetType(mat,MATMPIAIJ) > > or > > PetscObjectGetType((PetscObject)mat,&type); > > MatSetType(mat,type); > > followed by > > MatXAIJSetPreallocation(...); > > should do. > > Dmitry. 
> > > > > > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith wrote: > > > > Do not call for SeqAIJ matrix. Do not call before the first time you > have preallocated and put entries in the matrix and done the > MatAssemblyBegin/End() > > > > If it still crashes you'll need to try the debugger > > > > Barry > > > > > On Feb 22, 2015, at 4:09 PM, David Knezevic < > david.knezevic at akselos.com> wrote: > > > > > > Hi Barry, > > > > > > Thanks for your help, much appreciated. > > > > > > I added a prototype for MatDisAssemble_MPIAIJ: > > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); > > > > > > and I added a call to MatDisAssemble_MPIAIJ before > MatMPIAIJSetPreallocation. However, I get a segfault on the call to > MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. > > > > > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build > (though I could rebuild PETSc in debug mode if you think that would help > figure out what's happening here). > > > > > > Thanks, > > > David > > > > > > > > > > > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith > wrote: > > > > > > David, > > > > > > This is an obscure little feature of MatMPIAIJ, each time you > change the sparsity pattern before you call the MatMPIAIJSetPreallocation > you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private > PETSc function so you need to provide your own prototype for it above the > function you use it in. > > > > > > Let us know if this resolves the problem. > > > > > > Barry > > > > > > We never really intended that people would call > MatMPIAIJSetPreallocation() AFTER they had already used the matrix. > > > > > > > > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < > david.knezevic at akselos.com> wrote: > > > > > > > > Hi all, > > > > > > > > I've implemented a solver for a contact problem using SNES. The > sparsity pattern of the jacobian matrix needs to change at each nonlinear > iteration (because the elements which are in contact can change), so I > tried to deal with this by calling MatSeqAIJSetPreallocation and > MatMPIAIJSetPreallocation during each iteration in order to update the > preallocation. > > > > > > > > This seems to work fine in serial, but with two or more MPI > processes I run into the error "nnz cannot be greater than row length", > e.g.: > > > > nnz cannot be greater than row length: local row 528 value 12 > rowlength 0 > > > > > > > > This error is from the call to > > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in > MatMPIAIJSetPreallocation_MPIAIJ. > > > > > > > > Any guidance on what the problem might be would be most appreciated. > For example, I was wondering if there is a problem with calling > SetPreallocation on a matrix that has already been preallocated? > > > > > > > > Some notes: > > > > - I'm using PETSc via libMesh > > > > - The code that triggers this issue is available as a PR on the > libMesh github repo, in case anyone is interested: > https://github.com/libMesh/libmesh/pull/460/ > > > > - I can try to make a minimal pure-PETSc example that reproduces > this error, if that would be helpful. > > > > > > > > Many thanks, > > > > David > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
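Putting Barry's and Dmitry's suggestions together, the reset would look roughly like the sketch below. The names (A, dnnz, onnz) are illustrative, and a later message in the thread shows this still ran into an internal error (the missing-garray check), so treat it as a sketch of the idea rather than a verified fix:

    #include <petscmat.h>

    /* Sketch only: force an already-used (MPI)AIJ matrix to be rebuilt, then
       preallocate for the new sparsity pattern. */
    PetscErrorCode ResetAndPreallocate(Mat A,const PetscInt dnnz[],const PetscInt onnz[])
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = MatSetType(A,MATBAIJ);CHKERRQ(ierr); /* a *different* type, so the old data is discarded */
      ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);  /* back to AIJ; the matrix is now empty */
      ierr = MatXAIJSetPreallocation(A,1,dnnz,onnz,NULL,NULL);CHKERRQ(ierr);
      /* ...then MatSetValues()/MatAssemblyBegin()/MatAssemblyEnd() with the new pattern... */
      PetscFunctionReturn(0);
    }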
URL: From domenico_lahaye at yahoo.com Mon Feb 23 05:18:05 2015 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Mon, 23 Feb 2015 11:18:05 +0000 (UTC) Subject: [petsc-users] Solving multiple linear systems with similar matrix sequentially In-Reply-To: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> References: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> Message-ID: <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> Hi, ? Contingency analysis of power systems requires to solve a sequence a (non-)linear systems in which the Jacobian/matrix changes by a rank-2/rank-1 update. ?? Does the PETSc interface to GAMG allow to freeze the AMG hierarchy for a given matrix (base case) and to reuse this hierarchy in solving a list of slightly perturbed linear problems (list of contingencies)? ? Thanks, Domenico. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Mon Feb 23 07:17:03 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Mon, 23 Feb 2015 08:17:03 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> <7AB307B8-8F93-4DDF-B0FB-F5718751AF0E@mcs.anl.gov> Message-ID: Hi Barry, hi Dmitry, I set the matrix to BAIJ and back to AIJ, and the code got a bit further. But I now run into the error pasted below (Note that I'm now using "--with-debugging=1"): PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- PETSC ERROR: Petsc has generated inconsistent data PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named david-Lenovo by dknez Mon Feb 23 08:05:44 2015 PETSC ERROR: Configure options --with-shared-libraries=1 --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1 --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc --download-hypre PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c PETSC ERROR: #3 MatSetValues() line 1136 in /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c PETSC ERROR: #4 add_matrix() line 765 in /home/dknez/software/libmesh-src/src/numerics/petsc_matrix.C -------------------------------------------------------------------------- This occurs when I try to set some entries of the matrix. Do you have any suggestions on how I can resolve this? Thanks! David On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev wrote: > > > On Sun Feb 22 2015 at 9:15:22 PM Barry Smith wrote: > >> >> > On Feb 22, 2015, at 9:09 PM, David Knezevic >> wrote: >> > >> > Hi Dmitry, >> > >> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) followed >> by MatXAIJSetPreallocation(...), but unfortunately this still gives me the >> same error as before: "nnz cannot be greater than row length: local row 168 >> value 24 rowlength 0". 
>> > >> > I gather that the idea here is that MatSetType builds a new matrix >> object, and then I should be able to pre-allocate for that new matrix >> however I like, right? Was I supposed to clear the matrix object somehow >> before calling MatSetType? (I didn't do any sort of clear operation.) >> >> If the type doesn't change then MatSetType() won't do anything. You can >> try setting the type to BAIJ and then setting the type back to AIJ. This >> may/should clear out the matrix. >> > Ah, yes. If the type is the same as before it does quit early, but > changing the type and then back will clear out and rebuild the matrix. We > need > something like MatReset() to do the equivalent thing. > >> >> > >> > As I said earlier, I'll make a dbg PETSc build, so hopefully that will >> help shed some light on what's going wrong for me. >> > I think it's always a good idea to have a dbg build of PETSc when you > doing things like these. > > Dmitry. > >> >> Don't bother, what I suggested won't work. >> >> Barry >> >> >> > >> > Thanks, >> > David >> > >> > >> > >> > >> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev >> wrote: >> > David, >> > It might be easier to just rebuild the whole matrix from scratch: you >> would in effect be doing all that with disassembling and resetting the >> preallocation. >> > MatSetType(mat,MATMPIAIJ) >> > or >> > PetscObjectGetType((PetscObject)mat,&type); >> > MatSetType(mat,type); >> > followed by >> > MatXAIJSetPreallocation(...); >> > should do. >> > Dmitry. >> > >> > >> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >> wrote: >> > >> > Do not call for SeqAIJ matrix. Do not call before the first time you >> have preallocated and put entries in the matrix and done the >> MatAssemblyBegin/End() >> > >> > If it still crashes you'll need to try the debugger >> > >> > Barry >> > >> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >> david.knezevic at akselos.com> wrote: >> > > >> > > Hi Barry, >> > > >> > > Thanks for your help, much appreciated. >> > > >> > > I added a prototype for MatDisAssemble_MPIAIJ: >> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >> > > >> > > and I added a call to MatDisAssemble_MPIAIJ before >> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >> > > >> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >> (though I could rebuild PETSc in debug mode if you think that would help >> figure out what's happening here). >> > > >> > > Thanks, >> > > David >> > > >> > > >> > > >> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >> wrote: >> > > >> > > David, >> > > >> > > This is an obscure little feature of MatMPIAIJ, each time you >> change the sparsity pattern before you call the MatMPIAIJSetPreallocation >> you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private >> PETSc function so you need to provide your own prototype for it above the >> function you use it in. >> > > >> > > Let us know if this resolves the problem. >> > > >> > > Barry >> > > >> > > We never really intended that people would call >> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >> > > >> > > >> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >> david.knezevic at akselos.com> wrote: >> > > > >> > > > Hi all, >> > > > >> > > > I've implemented a solver for a contact problem using SNES. 
The >> sparsity pattern of the jacobian matrix needs to change at each nonlinear >> iteration (because the elements which are in contact can change), so I >> tried to deal with this by calling MatSeqAIJSetPreallocation and >> MatMPIAIJSetPreallocation during each iteration in order to update the >> preallocation. >> > > > >> > > > This seems to work fine in serial, but with two or more MPI >> processes I run into the error "nnz cannot be greater than row length", >> e.g.: >> > > > nnz cannot be greater than row length: local row 528 value 12 >> rowlength 0 >> > > > >> > > > This error is from the call to >> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >> MatMPIAIJSetPreallocation_MPIAIJ. >> > > > >> > > > Any guidance on what the problem might be would be most >> appreciated. For example, I was wondering if there is a problem with >> calling SetPreallocation on a matrix that has already been preallocated? >> > > > >> > > > Some notes: >> > > > - I'm using PETSc via libMesh >> > > > - The code that triggers this issue is available as a PR on the >> libMesh github repo, in case anyone is interested: >> https://github.com/libMesh/libmesh/pull/460/ >> > > > - I can try to make a minimal pure-PETSc example that reproduces >> this error, if that would be helpful. >> > > > >> > > > Many thanks, >> > > > David >> > > > >> > > >> > > >> > >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Feb 23 07:43:31 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Feb 2015 07:43:31 -0600 Subject: [petsc-users] Solving multiple linear systems with similar matrix sequentially In-Reply-To: <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> References: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> Message-ID: <27A5BEE5-8058-4177-AB68-CC70ACEDB3A4@mcs.anl.gov> > On Feb 23, 2015, at 5:18 AM, domenico lahaye wrote: > > Hi, > > Contingency analysis of power systems requires to solve a sequence > a (non-)linear systems in which the Jacobian/matrix changes by a > rank-2/rank-1 update. > > Does the PETSc interface to GAMG allow to freeze the AMG hierarchy > for a given matrix (base case) and to reuse this hierarchy in solving a list of > slightly perturbed linear problems (list of contingencies)? You can do two things. Freeze the current preconditioner and solve a sequence of (new with slightly different numerical values) linear systems using that same precondition. Use KSPSetReusePreconditioner() in PETSc 3.5.x to freeze/unfreeze the preconditioner Freeze the hierarchy and coarse grid interpolations of GAMG but compute the new coarse grid operators RAP for each new linear system (this is a much cheaper operation). Use PCGAMGSetReuseInterpolation() to freeze/unfreeze the hierarchy. Barry > > Thanks, Domenico. From knepley at gmail.com Mon Feb 23 07:47:37 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Feb 2015 07:47:37 -0600 Subject: [petsc-users] FE discretization in DMPlex In-Reply-To: References: <896C129B-43B0-4E5C-A5B7-ADC604E34892@gmail.com> Message-ID: On Sun, Feb 22, 2015 at 7:05 PM, Justin Chang wrote: > Hi Matt, > > Bringing this thread back from the dead. > > 1) Have you had the chance to implement things like RT and DG in DMPlex? > No, there has not been much call. Its on my stack of things to do. > 2) Are examples/tests that illustrate how to do dualspaces? > I have commented the dual space routines now. 
Basically, you just create a set of functionals, and in this world a functional is just a quadrature rule. If you have a hard time understanding something, feel free to mail. > 3) Or quantities like cell size h, jump, average? > The first thing is to declare the adjacency correctly, so that you get neighboring cells. Once you have that all these are simple local calculations. Why don't we start with RT0 since it is the simplest to think about. Thanks, Matt > I was originally trying to implement DG and RT0 in FEniCS but I am having > lots of trouble getting the FEniCS code to scale on our university's > clusters, so that's why I want to attempt going back to PETSc's DMPlex to > do strong scaling studies. > > Thanks, > Justin > > On Sat, Sep 6, 2014 at 3:58 AM, Matthew Knepley wrote: > >> On Fri, Sep 5, 2014 at 10:55 PM, Justin Chang >> wrote: >> >>> Hi all, >>> >>> So I understand how the FEM code works in the DMPlex examples (ex12 and >>> 62). Pardon me if this is a silly question. >>> >>> 1) If I wanted to solve either the poisson or stokes using the >>> discontinuous Galerkin method, is there a way to do this with the built-in >>> DMPlex/FEM functions? Basically each cell/element has its own set of >>> degrees of freedom, and jump/average operations would be needed to >>> "connect" the dofs across element interfaces. >>> >>> 2) Or how about using something like Raviart-Thomas spaces (we'll say >>> lowest order for simplicity). Where the velocity dofs are not nodal >>> quantities, instead they are denoted by edge fluxes (or face fluxes for >>> tetrahedrals). Pressure would be piecewise constant. >>> >>> Intuitively these should be doable if I were to write my own >>> DMPlex/PetscSection code, but I was wondering if the above two >>> discretizations are achievable in the way ex12 and ex62 are. >>> >> >> Lets do RT first since its easier. The primal space is >> >> P_K = Poly_{q--1}(K) + x Poly_{q-1}(K) >> >> so at lowest order its just Poly_1. The dual space is moments of the >> normal component >> of velocity on the edges. So you would write a dual space where the >> functionals integrated >> the normal component. This is the tricky part: >> >> http://www.math.chalmers.se/~logg/pub/papers/KirbyLoggEtAl2010a.pdf >> >> DG is just a generalization of this kind of thing where you need to a) >> have some geometric >> quantities available to the pointwise functions (like h), and also some >> field quantities (like >> the jump and average). >> >> I understand exactly how I want to do the RT, BDM, BDMF, and NED >> elements, and those >> will be in soon. I think DG is fairly messy and am not completely sure >> what I want here. >> >> Matt >> >> >>> Thanks, >>> Justin >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Feb 23 07:54:34 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Feb 2015 07:54:34 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Thanks. 
Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() > (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I > use it to partition a vector with as many components as edges I have in my > network? > I do not completely understand the question. If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? Matt > Thanks > Miguel > > On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: > >> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Hi >>> >>> I noticed that the routine DMNetworkGetEdgeRange() returns the local >>> indices for the edge range. Is there any way to obtain the global indices? >>> So if my network has 10 edges, the processor 1 has the 0-4 edges and the >>> processor 2, the 5-9 edges, how can I obtain this information? >>> >> >> One of the points of DMPlex is we do not require a global numbering. >> Everything is numbered >> locally, and the PetscSF maps local numbers to local numbers in order to >> determine ownership. >> >> If you want to create a global numbering for some reason, you can using >> DMPlexCreatePointNumbering(). >> There are also cell and vertex versions that we use for output, so you >> could do it just for edges as well. >> >> Thanks, >> >> Matt >> >> >>> Thanks >>> Miguel >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Mon Feb 23 08:08:23 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Mon, 23 Feb 2015 14:08:23 +0000 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: Message-ID: Miguel, One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. Shri From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya > wrote: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? I do not completely understand the question. 
If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? Matt Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya > wrote: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? One of the points of DMPlex is we do not require a global numbering. Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. Thanks, Matt Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dngoldberg at gmail.com Mon Feb 23 08:20:41 2015 From: dngoldberg at gmail.com (Daniel Goldberg) Date: Mon, 23 Feb 2015 14:20:41 +0000 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) In-Reply-To: <87zj85idga.fsf@jedbrown.org> References: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <87zj85idga.fsf@jedbrown.org> Message-ID: Thanks Jed. So the nullspace would be initialized with an array of 3 petsc vectors: (u,v)= (0,1), (1,0), and (-y,x), correct? And also to be sure -- this is usefuil only for multigrid preconditioners, yes? Thanks Dan On Mon, Feb 23, 2015 at 2:11 AM, Jed Brown wrote: > Barry Smith writes: > >> I will try gamg as I know it can reduce the number of CG iterations > required. (I'm guessing you mean algebraic, not geometric?) > > > > By default GAMG is an algebraic multigrid preconditioner. Look at its > documentation at http://www.mcs.anl.gov/petsc/petsc-master/docs/index.html > it will be a bit better than the older documentation. The documentation for > GAMG is still pretty thin so feel free to ask questions. > > You might want to call MatSetBlockSize and MatSetNearNullSpace to define > both translation and rotation modes. This is most relevant for large > ice shelves. > -- Daniel Goldberg, PhD Lecturer in Glaciology School of Geosciences, University of Edinburgh Geography Building, Drummond Street, Edinburgh EH8 9XP em: D an.Goldberg at ed.ac.uk web: http://ocean.mit.edu/~dgoldberg -------------- next part -------------- An HTML attachment was scrubbed... 
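For reference, one way to attach the modes Dan describes is PETSc's rigid-body helper rather than filling the three vectors by hand. A sketch only, assuming "A" is the assembled stiffness matrix (with MatSetBlockSize(A,2) already called before preallocation, per Jed's note) and "coords" is a Vec with block size 2 holding the interleaved (x,y) coordinates of the velocity nodes; both names are illustrative:

    #include <petscmat.h>

    /* Sketch: attach 2D rigid-body modes (the span of (1,0), (0,1) and (-y,x))
       as the near null space that GAMG uses when building its coarse spaces. */
    PetscErrorCode AttachRigidBodyModes(Mat A,Vec coords)
    {
      PetscErrorCode ierr;
      MatNullSpace   nearnull;

      PetscFunctionBegin;
      ierr = MatNullSpaceCreateRigidBody(coords,&nearnull);CHKERRQ(ierr);
      ierr = MatSetNearNullSpace(A,nearnull);CHKERRQ(ierr);
      ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }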
URL: From salazardetroya at gmail.com Mon Feb 23 08:42:19 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 23 Feb 2015 08:42:19 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? Miguel On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." wrote: > Miguel, > One possible way is to store the global numbering of any edge/vertex in > the "component" attached to it. Once the mesh gets partitioned, the > components are also distributed so you can easily retrieve the global > number of any edge/vertex by accessing its component. This is what is done > in the DMNetwork example pf.c although the global numbering is not used for > anything. > > Shri > From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 > To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel > > On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >> use it to partition a vector with as many components as edges I have in my >> network? >> > > I do not completely understand the question. > > If you want a partition of the edges, you can use > DMPlexCreatePartition() and its friend DMPlexDistribute(). What > are you trying to do? > > Matt > > >> Thanks >> Miguel >> >> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >> wrote: >> >>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Hi >>>> >>>> I noticed that the routine DMNetworkGetEdgeRange() returns the local >>>> indices for the edge range. Is there any way to obtain the global indices? >>>> So if my network has 10 edges, the processor 1 has the 0-4 edges and the >>>> processor 2, the 5-9 edges, how can I obtain this information? >>>> >>> >>> One of the points of DMPlex is we do not require a global numbering. >>> Everything is numbered >>> locally, and the PetscSF maps local numbers to local numbers in order to >>> determine ownership. >>> >>> If you want to create a global numbering for some reason, you can >>> using DMPlexCreatePointNumbering(). >>> There are also cell and vertex versions that we use for output, so you >>> could do it just for edges as well. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks >>>> Miguel >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Mon Feb 23 09:08:07 2015 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Mon, 23 Feb 2015 15:08:07 +0000 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> <7AB307B8-8F93-4DDF-B0FB-F5718751AF0E@mcs.anl.gov> Message-ID: David, What code are you running when you encounter this error? I'm trying to reproduce it and I tried examples/systems_of_equations/ex8, but it ran for me, ostensibly to completion. I have a small PETSc pull request that implements MatReset(), which passes a small PETSc test, but libMesh needs some work to be able to build against petsc/master because of some recent changes to PETSc. Dmitry. On Mon Feb 23 2015 at 7:17:06 AM David Knezevic wrote: > Hi Barry, hi Dmitry, > > I set the matrix to BAIJ and back to AIJ, and the code got a bit further. > But I now run into the error pasted below (Note that I'm now using > "--with-debugging=1"): > > PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > PETSC ERROR: Petsc has generated inconsistent data > PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray > PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for > trouble shooting. > PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 > PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named david-Lenovo by > dknez Mon Feb 23 08:05:44 2015 > PETSC ERROR: Configure options --with-shared-libraries=1 > --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 > --download-blacs=1 --download-scalapack=1 --download-mumps=1 > --download-metis --download-superlu_dist > --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc > --download-hypre > PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in > /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c > PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in > /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c > PETSC ERROR: #3 MatSetValues() line 1136 in > /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c > PETSC ERROR: #4 add_matrix() line 765 in > /home/dknez/software/libmesh-src/src/numerics/petsc_matrix.C > -------------------------------------------------------------------------- > > This occurs when I try to set some entries of the matrix. Do you have any > suggestions on how I can resolve this? > > Thanks! > David > > > > > On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev > wrote: > >> >> >> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith wrote: >> >>> >>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> > >>> > Hi Dmitry, >>> > >>> > Thanks for the suggestion. 
I tried MatSetType(mat,MATMPIAIJ) followed >>> by MatXAIJSetPreallocation(...), but unfortunately this still gives me the >>> same error as before: "nnz cannot be greater than row length: local row 168 >>> value 24 rowlength 0". >>> > >>> > I gather that the idea here is that MatSetType builds a new matrix >>> object, and then I should be able to pre-allocate for that new matrix >>> however I like, right? Was I supposed to clear the matrix object somehow >>> before calling MatSetType? (I didn't do any sort of clear operation.) >>> >>> If the type doesn't change then MatSetType() won't do anything. You >>> can try setting the type to BAIJ and then setting the type back to AIJ. >>> This may/should clear out the matrix. >>> >> Ah, yes. If the type is the same as before it does quit early, but >> changing the type and then back will clear out and rebuild the matrix. We >> need >> something like MatReset() to do the equivalent thing. >> >>> >>> > >>> > As I said earlier, I'll make a dbg PETSc build, so hopefully that will >>> help shed some light on what's going wrong for me. >>> >> I think it's always a good idea to have a dbg build of PETSc when you >> doing things like these. >> >> Dmitry. >> >>> >>> Don't bother, what I suggested won't work. >>> >>> Barry >>> >>> >>> > >>> > Thanks, >>> > David >>> > >>> > >>> > >>> > >>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev >>> wrote: >>> > David, >>> > It might be easier to just rebuild the whole matrix from scratch: you >>> would in effect be doing all that with disassembling and resetting the >>> preallocation. >>> > MatSetType(mat,MATMPIAIJ) >>> > or >>> > PetscObjectGetType((PetscObject)mat,&type); >>> > MatSetType(mat,type); >>> > followed by >>> > MatXAIJSetPreallocation(...); >>> > should do. >>> > Dmitry. >>> > >>> > >>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>> wrote: >>> > >>> > Do not call for SeqAIJ matrix. Do not call before the first time you >>> have preallocated and put entries in the matrix and done the >>> MatAssemblyBegin/End() >>> > >>> > If it still crashes you'll need to try the debugger >>> > >>> > Barry >>> > >>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> > > >>> > > Hi Barry, >>> > > >>> > > Thanks for your help, much appreciated. >>> > > >>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>> > > >>> > > and I added a call to MatDisAssemble_MPIAIJ before >>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>> > > >>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >>> (though I could rebuild PETSc in debug mode if you think that would help >>> figure out what's happening here). >>> > > >>> > > Thanks, >>> > > David >>> > > >>> > > >>> > > >>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >>> wrote: >>> > > >>> > > David, >>> > > >>> > > This is an obscure little feature of MatMPIAIJ, each time you >>> change the sparsity pattern before you call the MatMPIAIJSetPreallocation >>> you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private >>> PETSc function so you need to provide your own prototype for it above the >>> function you use it in. >>> > > >>> > > Let us know if this resolves the problem. >>> > > >>> > > Barry >>> > > >>> > > We never really intended that people would call >>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. 
>>> > > >>> > > >>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> > > > >>> > > > Hi all, >>> > > > >>> > > > I've implemented a solver for a contact problem using SNES. The >>> sparsity pattern of the jacobian matrix needs to change at each nonlinear >>> iteration (because the elements which are in contact can change), so I >>> tried to deal with this by calling MatSeqAIJSetPreallocation and >>> MatMPIAIJSetPreallocation during each iteration in order to update the >>> preallocation. >>> > > > >>> > > > This seems to work fine in serial, but with two or more MPI >>> processes I run into the error "nnz cannot be greater than row length", >>> e.g.: >>> > > > nnz cannot be greater than row length: local row 528 value 12 >>> rowlength 0 >>> > > > >>> > > > This error is from the call to >>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>> MatMPIAIJSetPreallocation_MPIAIJ. >>> > > > >>> > > > Any guidance on what the problem might be would be most >>> appreciated. For example, I was wondering if there is a problem with >>> calling SetPreallocation on a matrix that has already been preallocated? >>> > > > >>> > > > Some notes: >>> > > > - I'm using PETSc via libMesh >>> > > > - The code that triggers this issue is available as a PR on the >>> libMesh github repo, in case anyone is interested: >>> https://github.com/libMesh/libmesh/pull/460/ >>> > > > - I can try to make a minimal pure-PETSc example that reproduces >>> this error, if that would be helpful. >>> > > > >>> > > > Many thanks, >>> > > > David >>> > > > >>> > > >>> > > >>> > >>> > >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Feb 23 09:09:05 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Feb 2015 09:09:05 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Thanks, that will help me. Now what I would like to have is the following: > if I have two processors and ten edges, the partitioning results in the > first processor having the edges 0-4 and the second processor, the edges > 5-9. I also have a global vector with as many components as edges, 10. How > can I partition it so the first processor also has the 0-4 components and > the second, the 5-9 components of the vector? > I think it would help to know what you want to accomplish. This is how you are proposing to do it.' If you just want to put data on edges, DMNetwork has a facility for that already. Thanks, Matt > Miguel > On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." > wrote: > >> Miguel, >> One possible way is to store the global numbering of any edge/vertex >> in the "component" attached to it. Once the mesh gets partitioned, the >> components are also distributed so you can easily retrieve the global >> number of any edge/vertex by accessing its component. This is what is done >> in the DMNetwork example pf.c although the global numbering is not used for >> anything. >> >> Shri >> From: Matthew Knepley >> Date: Mon, 23 Feb 2015 07:54:34 -0600 >> To: Miguel Angel Salazar de Troya >> Cc: "petsc-users at mcs.anl.gov" >> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >> >> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Thanks. 
Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>> use it to partition a vector with as many components as edges I have in my >>> network? >>> >> >> I do not completely understand the question. >> >> If you want a partition of the edges, you can use >> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >> are you trying to do? >> >> Matt >> >> >>> Thanks >>> Miguel >>> >>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >>> wrote: >>> >>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Hi >>>>> >>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the local >>>>> indices for the edge range. Is there any way to obtain the global indices? >>>>> So if my network has 10 edges, the processor 1 has the 0-4 edges and the >>>>> processor 2, the 5-9 edges, how can I obtain this information? >>>>> >>>> >>>> One of the points of DMPlex is we do not require a global numbering. >>>> Everything is numbered >>>> locally, and the PetscSF maps local numbers to local numbers in order >>>> to determine ownership. >>>> >>>> If you want to create a global numbering for some reason, you can >>>> using DMPlexCreatePointNumbering(). >>>> There are also cell and vertex versions that we use for output, so you >>>> could do it just for edges as well. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks >>>>> Miguel >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Mon Feb 23 09:15:37 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Mon, 23 Feb 2015 10:15:37 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: <9734C4EE-C645-41E1-B140-2113E1EBF146@mcs.anl.gov> <2CBFC145-0960-4152-A939-29CF87FCFEEE@mcs.anl.gov> <7AB307B8-8F93-4DDF-B0FB-F5718751AF0E@mcs.anl.gov> Message-ID: Hi Dmitry, Thanks very much for testing out the example. examples/systems_of_equations/ex8 works fine for me in serial, but it fails for me if I run with more than 1 MPI process. Can you try it with, say, 2 or 4 MPI processes? If we need access to MatReset in libMesh to get this to work, I'll be happy to work on a libMesh pull request for that. David -- David J. 
Knezevic | CTO Akselos | 17 Bay State Road | Boston, MA | 02215 Phone (office): +1-857-265-2238 Phone (mobile): +1-617-599-4755 Web: http://www.akselos.com On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev wrote: > David, > > What code are you running when you encounter this error? I'm trying to > reproduce it and > I tried examples/systems_of_equations/ex8, but it ran for me, ostensibly > to completion. > > I have a small PETSc pull request that implements MatReset(), which passes > a small PETSc test, > but libMesh needs some work to be able to build against petsc/master > because of some recent > changes to PETSc. > > Dmitry. > > On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < > david.knezevic at akselos.com> wrote: > >> Hi Barry, hi Dmitry, >> >> I set the matrix to BAIJ and back to AIJ, and the code got a bit further. >> But I now run into the error pasted below (Note that I'm now using >> "--with-debugging=1"): >> >> PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> PETSC ERROR: Petsc has generated inconsistent data >> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >> PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for >> trouble shooting. >> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named david-Lenovo by >> dknez Mon Feb 23 08:05:44 2015 >> PETSC ERROR: Configure options --with-shared-libraries=1 >> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >> --download-metis --download-superlu_dist >> --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >> --download-hypre >> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >> PETSC ERROR: #3 MatSetValues() line 1136 in >> /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c >> PETSC ERROR: #4 add_matrix() line 765 in >> /home/dknez/software/libmesh-src/src/numerics/petsc_matrix.C >> -------------------------------------------------------------------------- >> >> This occurs when I try to set some entries of the matrix. Do you have any >> suggestions on how I can resolve this? >> >> Thanks! >> David >> >> >> >> >> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev >> wrote: >> >>> >>> >>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith wrote: >>> >>>> >>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>> david.knezevic at akselos.com> wrote: >>>> > >>>> > Hi Dmitry, >>>> > >>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) followed >>>> by MatXAIJSetPreallocation(...), but unfortunately this still gives me the >>>> same error as before: "nnz cannot be greater than row length: local row 168 >>>> value 24 rowlength 0". >>>> > >>>> > I gather that the idea here is that MatSetType builds a new matrix >>>> object, and then I should be able to pre-allocate for that new matrix >>>> however I like, right? Was I supposed to clear the matrix object somehow >>>> before calling MatSetType? (I didn't do any sort of clear operation.) >>>> >>>> If the type doesn't change then MatSetType() won't do anything. You >>>> can try setting the type to BAIJ and then setting the type back to AIJ. >>>> This may/should clear out the matrix. >>>> >>> Ah, yes. 
If the type is the same as before it does quit early, but >>> changing the type and then back will clear out and rebuild the matrix. We >>> need >>> something like MatReset() to do the equivalent thing. >>> >>>> >>>> > >>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully that >>>> will help shed some light on what's going wrong for me. >>>> >>> I think it's always a good idea to have a dbg build of PETSc when you >>> doing things like these. >>> >>> Dmitry. >>> >>>> >>>> Don't bother, what I suggested won't work. >>>> >>>> Barry >>>> >>>> >>>> > >>>> > Thanks, >>>> > David >>>> > >>>> > >>>> > >>>> > >>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev >>>> wrote: >>>> > David, >>>> > It might be easier to just rebuild the whole matrix from scratch: you >>>> would in effect be doing all that with disassembling and resetting the >>>> preallocation. >>>> > MatSetType(mat,MATMPIAIJ) >>>> > or >>>> > PetscObjectGetType((PetscObject)mat,&type); >>>> > MatSetType(mat,type); >>>> > followed by >>>> > MatXAIJSetPreallocation(...); >>>> > should do. >>>> > Dmitry. >>>> > >>>> > >>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>>> wrote: >>>> > >>>> > Do not call for SeqAIJ matrix. Do not call before the first time you >>>> have preallocated and put entries in the matrix and done the >>>> MatAssemblyBegin/End() >>>> > >>>> > If it still crashes you'll need to try the debugger >>>> > >>>> > Barry >>>> > >>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>> david.knezevic at akselos.com> wrote: >>>> > > >>>> > > Hi Barry, >>>> > > >>>> > > Thanks for your help, much appreciated. >>>> > > >>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>> > > >>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>>> > > >>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >>>> (though I could rebuild PETSc in debug mode if you think that would help >>>> figure out what's happening here). >>>> > > >>>> > > Thanks, >>>> > > David >>>> > > >>>> > > >>>> > > >>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >>>> wrote: >>>> > > >>>> > > David, >>>> > > >>>> > > This is an obscure little feature of MatMPIAIJ, each time you >>>> change the sparsity pattern before you call the MatMPIAIJSetPreallocation >>>> you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private >>>> PETSc function so you need to provide your own prototype for it above the >>>> function you use it in. >>>> > > >>>> > > Let us know if this resolves the problem. >>>> > > >>>> > > Barry >>>> > > >>>> > > We never really intended that people would call >>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >>>> > > >>>> > > >>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>> david.knezevic at akselos.com> wrote: >>>> > > > >>>> > > > Hi all, >>>> > > > >>>> > > > I've implemented a solver for a contact problem using SNES. The >>>> sparsity pattern of the jacobian matrix needs to change at each nonlinear >>>> iteration (because the elements which are in contact can change), so I >>>> tried to deal with this by calling MatSeqAIJSetPreallocation and >>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>> preallocation. 
>>>> > > > >>>> > > > This seems to work fine in serial, but with two or more MPI >>>> processes I run into the error "nnz cannot be greater than row length", >>>> e.g.: >>>> > > > nnz cannot be greater than row length: local row 528 value 12 >>>> rowlength 0 >>>> > > > >>>> > > > This error is from the call to >>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>> > > > >>>> > > > Any guidance on what the problem might be would be most >>>> appreciated. For example, I was wondering if there is a problem with >>>> calling SetPreallocation on a matrix that has already been preallocated? >>>> > > > >>>> > > > Some notes: >>>> > > > - I'm using PETSc via libMesh >>>> > > > - The code that triggers this issue is available as a PR on the >>>> libMesh github repo, in case anyone is interested: >>>> https://github.com/libMesh/libmesh/pull/460/ >>>> > > > - I can try to make a minimal pure-PETSc example that reproduces >>>> this error, if that would be helpful. >>>> > > > >>>> > > > Many thanks, >>>> > > > David >>>> > > > >>>> > > >>>> > > >>>> > >>>> > >>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Mon Feb 23 09:27:51 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 23 Feb 2015 09:27:51 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: I'm iterating through local edges given in DMNetworkGetEdgeRange(). For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. Miguel On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley wrote: > On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Thanks, that will help me. Now what I would like to have is the >> following: if I have two processors and ten edges, the partitioning results >> in the first processor having the edges 0-4 and the second processor, the >> edges 5-9. I also have a global vector with as many components as edges, >> 10. How can I partition it so the first processor also has the 0-4 >> components and the second, the 5-9 components of the vector? >> > I think it would help to know what you want to accomplish. This is how you > are proposing to do it.' > > If you just want to put data on edges, DMNetwork has a facility for that > already. > > Thanks, > > Matt > > >> Miguel >> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." >> wrote: >> >>> Miguel, >>> One possible way is to store the global numbering of any edge/vertex >>> in the "component" attached to it. Once the mesh gets partitioned, the >>> components are also distributed so you can easily retrieve the global >>> number of any edge/vertex by accessing its component. 
This is what is done >>> in the DMNetwork example pf.c although the global numbering is not used for >>> anything. >>> >>> Shri >>> From: Matthew Knepley >>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>> To: Miguel Angel Salazar de Troya >>> Cc: "petsc-users at mcs.anl.gov" >>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>> >>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>> use it to partition a vector with as many components as edges I have in my >>>> network? >>>> >>> >>> I do not completely understand the question. >>> >>> If you want a partition of the edges, you can use >>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>> are you trying to do? >>> >>> Matt >>> >>> >>>> Thanks >>>> Miguel >>>> >>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >>>> wrote: >>>> >>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Hi >>>>>> >>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>> >>>>> >>>>> One of the points of DMPlex is we do not require a global numbering. >>>>> Everything is numbered >>>>> locally, and the PetscSF maps local numbers to local numbers in order >>>>> to determine ownership. >>>>> >>>>> If you want to create a global numbering for some reason, you can >>>>> using DMPlexCreatePointNumbering(). >>>>> There are also cell and vertex versions that we use for output, so you >>>>> could do it just for edges as well. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks >>>>>> Miguel >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From karpeev at mcs.anl.gov Mon Feb 23 09:40:09 2015 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Mon, 23 Feb 2015 15:40:09 +0000 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" Message-ID: Hi David, Thanks -- I do see a failure with two mpi ranks. libMesh needs to change the way configure extracts PETSc information -- configuration data were moved: conf --> lib/petsc-conf ${PETSC_ARCH}/conf --> ${PETSC_ARCH}/lib/petsc-conf At one point I started looking at m4/petsc.m4, but that got put on the back burner. For now making the relevant symlinks by hand lets you configure and build libMesh with petsc/master. Dmitry. On Mon Feb 23 2015 at 9:15:44 AM David Knezevic wrote: > Hi Dmitry, > > Thanks very much for testing out the example. > > examples/systems_of_equations/ex8 works fine for me in serial, but it > fails for me if I run with more than 1 MPI process. Can you try it with, > say, 2 or 4 MPI processes? > > If we need access to MatReset in libMesh to get this to work, I'll be > happy to work on a libMesh pull request for that. > > David > > > -- > > David J. Knezevic | CTO > Akselos | 17 Bay State Road | Boston, MA | 02215 > Phone (office): +1-857-265-2238 > Phone (mobile): +1-617-599-4755 > Web: http://www.akselos.com > > > On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev > wrote: > >> David, >> >> What code are you running when you encounter this error? I'm trying to >> reproduce it and >> I tried examples/systems_of_equations/ex8, but it ran for me, ostensibly >> to completion. >> >> I have a small PETSc pull request that implements MatReset(), which >> passes a small PETSc test, >> but libMesh needs some work to be able to build against petsc/master >> because of some recent >> changes to PETSc. >> >> Dmitry. >> >> On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < >> david.knezevic at akselos.com> wrote: >> >>> Hi Barry, hi Dmitry, >>> >>> I set the matrix to BAIJ and back to AIJ, and the code got a bit >>> further. But I now run into the error pasted below (Note that I'm now using >>> "--with-debugging=1"): >>> >>> PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> PETSC ERROR: Petsc has generated inconsistent data >>> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >>> PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. >>> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >>> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named david-Lenovo >>> by dknez Mon Feb 23 08:05:44 2015 >>> PETSC ERROR: Configure options --with-shared-libraries=1 >>> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >>> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >>> --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >>> --download-hypre >>> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>> PETSC ERROR: #3 MatSetValues() line 1136 in /home/dknez/software/petsc-3. 
>>> 5.2/src/mat/interface/matrix.c >>> PETSC ERROR: #4 add_matrix() line 765 in /home/dknez/software/libmesh- >>> src/src/numerics/petsc_matrix.C >>> ------------------------------------------------------------ >>> -------------- >>> >>> This occurs when I try to set some entries of the matrix. Do you have >>> any suggestions on how I can resolve this? >>> >>> Thanks! >>> David >>> >>> >>> >>> >>> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev >>> wrote: >>> >>>> >>>> >>>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith >>>> wrote: >>>> >>>>> >>>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>>> david.knezevic at akselos.com> wrote: >>>>> > >>>>> > Hi Dmitry, >>>>> > >>>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) >>>>> followed by MatXAIJSetPreallocation(...), but unfortunately this still >>>>> gives me the same error as before: "nnz cannot be greater than row length: >>>>> local row 168 value 24 rowlength 0". >>>>> > >>>>> > I gather that the idea here is that MatSetType builds a new matrix >>>>> object, and then I should be able to pre-allocate for that new matrix >>>>> however I like, right? Was I supposed to clear the matrix object somehow >>>>> before calling MatSetType? (I didn't do any sort of clear operation.) >>>>> >>>>> If the type doesn't change then MatSetType() won't do anything. You >>>>> can try setting the type to BAIJ and then setting the type back to AIJ. >>>>> This may/should clear out the matrix. >>>>> >>>> Ah, yes. If the type is the same as before it does quit early, but >>>> changing the type and then back will clear out and rebuild the matrix. We >>>> need >>>> something like MatReset() to do the equivalent thing. >>>> >>>>> >>>>> > >>>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully that >>>>> will help shed some light on what's going wrong for me. >>>>> >>>> I think it's always a good idea to have a dbg build of PETSc when you >>>> doing things like these. >>>> >>>> Dmitry. >>>> >>>>> >>>>> Don't bother, what I suggested won't work. >>>>> >>>>> Barry >>>>> >>>>> >>>>> > >>>>> > Thanks, >>>>> > David >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev >>>>> wrote: >>>>> > David, >>>>> > It might be easier to just rebuild the whole matrix from scratch: >>>>> you would in effect be doing all that with disassembling and resetting the >>>>> preallocation. >>>>> > MatSetType(mat,MATMPIAIJ) >>>>> > or >>>>> > PetscObjectGetType((PetscObject)mat,&type); >>>>> > MatSetType(mat,type); >>>>> > followed by >>>>> > MatXAIJSetPreallocation(...); >>>>> > should do. >>>>> > Dmitry. >>>>> > >>>>> > >>>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>>>> wrote: >>>>> > >>>>> > Do not call for SeqAIJ matrix. Do not call before the first time >>>>> you have preallocated and put entries in the matrix and done the >>>>> MatAssemblyBegin/End() >>>>> > >>>>> > If it still crashes you'll need to try the debugger >>>>> > >>>>> > Barry >>>>> > >>>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>>> david.knezevic at akselos.com> wrote: >>>>> > > >>>>> > > Hi Barry, >>>>> > > >>>>> > > Thanks for your help, much appreciated. >>>>> > > >>>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>>> > > >>>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. 
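For reference, the private-prototype pattern described in the quoted message looks roughly like the sketch below; the thread reports a segfault with this approach and the suggestion was later withdrawn, so it records what was tried rather than a recommended fix (the wrapper name and the d_nnz/o_nnz arrays are illustrative assumptions):

    #include <petscmat.h>

    /* MatDisAssemble_MPIAIJ is a private PETSc function with no public
       header, so a local prototype is supplied, as described above. */
    PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat);

    PetscErrorCode ReallocateMPIAIJ(Mat A, PetscInt d_nz, const PetscInt d_nnz[],
                                    PetscInt o_nz, const PetscInt o_nnz[])
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MatDisAssemble_MPIAIJ(A);CHKERRQ(ierr);  /* reported to segfault here */
      ierr = MatMPIAIJSetPreallocation(A, d_nz, d_nnz, o_nz, o_nnz);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }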
>>>>> > > >>>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >>>>> (though I could rebuild PETSc in debug mode if you think that would help >>>>> figure out what's happening here). >>>>> > > >>>>> > > Thanks, >>>>> > > David >>>>> > > >>>>> > > >>>>> > > >>>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >>>>> wrote: >>>>> > > >>>>> > > David, >>>>> > > >>>>> > > This is an obscure little feature of MatMPIAIJ, each time you >>>>> change the sparsity pattern before you call the MatMPIAIJSetPreallocation >>>>> you need to call MatDisAssemble_MPIAIJ(Mat mat). This is a private >>>>> PETSc function so you need to provide your own prototype for it above the >>>>> function you use it in. >>>>> > > >>>>> > > Let us know if this resolves the problem. >>>>> > > >>>>> > > Barry >>>>> > > >>>>> > > We never really intended that people would call >>>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >>>>> > > >>>>> > > >>>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>>> david.knezevic at akselos.com> wrote: >>>>> > > > >>>>> > > > Hi all, >>>>> > > > >>>>> > > > I've implemented a solver for a contact problem using SNES. The >>>>> sparsity pattern of the jacobian matrix needs to change at each nonlinear >>>>> iteration (because the elements which are in contact can change), so I >>>>> tried to deal with this by calling MatSeqAIJSetPreallocation and >>>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>>> preallocation. >>>>> > > > >>>>> > > > This seems to work fine in serial, but with two or more MPI >>>>> processes I run into the error "nnz cannot be greater than row length", >>>>> e.g.: >>>>> > > > nnz cannot be greater than row length: local row 528 value 12 >>>>> rowlength 0 >>>>> > > > >>>>> > > > This error is from the call to >>>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>>> > > > >>>>> > > > Any guidance on what the problem might be would be most >>>>> appreciated. For example, I was wondering if there is a problem with >>>>> calling SetPreallocation on a matrix that has already been preallocated? >>>>> > > > >>>>> > > > Some notes: >>>>> > > > - I'm using PETSc via libMesh >>>>> > > > - The code that triggers this issue is available as a PR on the >>>>> libMesh github repo, in case anyone is interested: >>>>> https://github.com/libMesh/libmesh/pull/460/ >>>>> > > > - I can try to make a minimal pure-PETSc example that reproduces >>>>> this error, if that would be helpful. >>>>> > > > >>>>> > > > Many thanks, >>>>> > > > David >>>>> > > > >>>>> > > >>>>> > > >>>>> > >>>>> > >>>>> >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Mon Feb 23 10:03:25 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Mon, 23 Feb 2015 11:03:25 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: Message-ID: Hi Dmitry, OK, good to hear we're seeing the same behavior for the example. Regarding this comment: libMesh needs to change the way configure extracts PETSc information -- > configuration data were moved: > conf --> lib/petsc-conf > ${PETSC_ARCH}/conf --> ${PETSC_ARCH}/lib/petsc-conf > > At one point I started looking at m4/petsc.m4, but that got put on the > back burner. For now making the relevant symlinks by hand lets you > configure and build libMesh with petsc/master. 
> So do you suggest that the next step here is to build libmesh against petsc/master so that we can try your PETSc pull request that implements MatReset() to see if that gets this example working? David > On Mon Feb 23 2015 at 9:15:44 AM David Knezevic < > david.knezevic at akselos.com> wrote: > >> Hi Dmitry, >> >> Thanks very much for testing out the example. >> >> examples/systems_of_equations/ex8 works fine for me in serial, but it >> fails for me if I run with more than 1 MPI process. Can you try it with, >> say, 2 or 4 MPI processes? >> >> If we need access to MatReset in libMesh to get this to work, I'll be >> happy to work on a libMesh pull request for that. >> >> David >> >> >> -- >> >> David J. Knezevic | CTO >> Akselos | 17 Bay State Road | Boston, MA | 02215 >> Phone (office): +1-857-265-2238 >> Phone (mobile): +1-617-599-4755 >> Web: http://www.akselos.com >> >> >> On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev >> wrote: >> >>> David, >>> >>> What code are you running when you encounter this error? I'm trying to >>> reproduce it and >>> I tried examples/systems_of_equations/ex8, but it ran for me, >>> ostensibly to completion. >>> >>> I have a small PETSc pull request that implements MatReset(), which >>> passes a small PETSc test, >>> but libMesh needs some work to be able to build against petsc/master >>> because of some recent >>> changes to PETSc. >>> >>> Dmitry. >>> >>> On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> >>>> Hi Barry, hi Dmitry, >>>> >>>> I set the matrix to BAIJ and back to AIJ, and the code got a bit >>>> further. But I now run into the error pasted below (Note that I'm now using >>>> "--with-debugging=1"): >>>> >>>> PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> PETSC ERROR: Petsc has generated inconsistent data >>>> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >>>> PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>> for trouble shooting. >>>> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >>>> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named david-Lenovo >>>> by dknez Mon Feb 23 08:05:44 2015 >>>> PETSC ERROR: Configure options --with-shared-libraries=1 >>>> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >>>> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >>>> --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >>>> --download-hypre >>>> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>> PETSC ERROR: #3 MatSetValues() line 1136 in >>>> /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c >>>> PETSC ERROR: #4 add_matrix() line 765 in /home/dknez/software/libmesh- >>>> src/src/numerics/petsc_matrix.C >>>> ------------------------------------------------------------ >>>> -------------- >>>> >>>> This occurs when I try to set some entries of the matrix. Do you have >>>> any suggestions on how I can resolve this? >>>> >>>> Thanks! 
>>>> David >>>> >>>> >>>> >>>> >>>> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev >>>> wrote: >>>> >>>>> >>>>> >>>>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith >>>>> wrote: >>>>> >>>>>> >>>>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>>>> david.knezevic at akselos.com> wrote: >>>>>> > >>>>>> > Hi Dmitry, >>>>>> > >>>>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) >>>>>> followed by MatXAIJSetPreallocation(...), but unfortunately this still >>>>>> gives me the same error as before: "nnz cannot be greater than row length: >>>>>> local row 168 value 24 rowlength 0". >>>>>> > >>>>>> > I gather that the idea here is that MatSetType builds a new matrix >>>>>> object, and then I should be able to pre-allocate for that new matrix >>>>>> however I like, right? Was I supposed to clear the matrix object somehow >>>>>> before calling MatSetType? (I didn't do any sort of clear operation.) >>>>>> >>>>>> If the type doesn't change then MatSetType() won't do anything. You >>>>>> can try setting the type to BAIJ and then setting the type back to AIJ. >>>>>> This may/should clear out the matrix. >>>>>> >>>>> Ah, yes. If the type is the same as before it does quit early, but >>>>> changing the type and then back will clear out and rebuild the matrix. We >>>>> need >>>>> something like MatReset() to do the equivalent thing. >>>>> >>>>>> >>>>>> > >>>>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully that >>>>>> will help shed some light on what's going wrong for me. >>>>>> >>>>> I think it's always a good idea to have a dbg build of PETSc when you >>>>> doing things like these. >>>>> >>>>> Dmitry. >>>>> >>>>>> >>>>>> Don't bother, what I suggested won't work. >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>> > >>>>>> > Thanks, >>>>>> > David >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev < >>>>>> dkarpeev at gmail.com> wrote: >>>>>> > David, >>>>>> > It might be easier to just rebuild the whole matrix from scratch: >>>>>> you would in effect be doing all that with disassembling and resetting the >>>>>> preallocation. >>>>>> > MatSetType(mat,MATMPIAIJ) >>>>>> > or >>>>>> > PetscObjectGetType((PetscObject)mat,&type); >>>>>> > MatSetType(mat,type); >>>>>> > followed by >>>>>> > MatXAIJSetPreallocation(...); >>>>>> > should do. >>>>>> > Dmitry. >>>>>> > >>>>>> > >>>>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>>>>> wrote: >>>>>> > >>>>>> > Do not call for SeqAIJ matrix. Do not call before the first time >>>>>> you have preallocated and put entries in the matrix and done the >>>>>> MatAssemblyBegin/End() >>>>>> > >>>>>> > If it still crashes you'll need to try the debugger >>>>>> > >>>>>> > Barry >>>>>> > >>>>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>>>> david.knezevic at akselos.com> wrote: >>>>>> > > >>>>>> > > Hi Barry, >>>>>> > > >>>>>> > > Thanks for your help, much appreciated. >>>>>> > > >>>>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>>>> > > >>>>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>>>>> > > >>>>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >>>>>> (though I could rebuild PETSc in debug mode if you think that would help >>>>>> figure out what's happening here). 
>>>>>> > > >>>>>> > > Thanks, >>>>>> > > David >>>>>> > > >>>>>> > > >>>>>> > > >>>>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >>>>>> wrote: >>>>>> > > >>>>>> > > David, >>>>>> > > >>>>>> > > This is an obscure little feature of MatMPIAIJ, each time >>>>>> you change the sparsity pattern before you call the >>>>>> MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat >>>>>> mat). This is a private PETSc function so you need to provide your own >>>>>> prototype for it above the function you use it in. >>>>>> > > >>>>>> > > Let us know if this resolves the problem. >>>>>> > > >>>>>> > > Barry >>>>>> > > >>>>>> > > We never really intended that people would call >>>>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >>>>>> > > >>>>>> > > >>>>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>>>> david.knezevic at akselos.com> wrote: >>>>>> > > > >>>>>> > > > Hi all, >>>>>> > > > >>>>>> > > > I've implemented a solver for a contact problem using SNES. The >>>>>> sparsity pattern of the jacobian matrix needs to change at each nonlinear >>>>>> iteration (because the elements which are in contact can change), so I >>>>>> tried to deal with this by calling MatSeqAIJSetPreallocation and >>>>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>>>> preallocation. >>>>>> > > > >>>>>> > > > This seems to work fine in serial, but with two or more MPI >>>>>> processes I run into the error "nnz cannot be greater than row length", >>>>>> e.g.: >>>>>> > > > nnz cannot be greater than row length: local row 528 value 12 >>>>>> rowlength 0 >>>>>> > > > >>>>>> > > > This error is from the call to >>>>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>>>> > > > >>>>>> > > > Any guidance on what the problem might be would be most >>>>>> appreciated. For example, I was wondering if there is a problem with >>>>>> calling SetPreallocation on a matrix that has already been preallocated? >>>>>> > > > >>>>>> > > > Some notes: >>>>>> > > > - I'm using PETSc via libMesh >>>>>> > > > - The code that triggers this issue is available as a PR on the >>>>>> libMesh github repo, in case anyone is interested: >>>>>> https://github.com/libMesh/libmesh/pull/460/ >>>>>> > > > - I can try to make a minimal pure-PETSc example that >>>>>> reproduces this error, if that would be helpful. >>>>>> > > > >>>>>> > > > Many thanks, >>>>>> > > > David >>>>>> > > > >>>>>> > > >>>>>> > > >>>>>> > >>>>>> > >>>>>> >>>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Mon Feb 23 10:10:38 2015 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Mon, 23 Feb 2015 16:10:38 +0000 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" References: Message-ID: I just tried building against petsc/master, but there needs to be more work on libMesh before it can work with petsc/master: the new VecLockPush()/Pop() stuff isn't respected by vector manipulation in libMesh. I put a hack equivalent to MatReset() into your branch (patch attached, in case you want to take a look at it), but it generates the same error in MatCreateColmap that you reported earlier. It's odd that it occurs on the second nonlinear iteration. I'll have to dig a bit deeper to see what's going on. Dmitry. On Mon Feb 23 2015 at 10:03:33 AM David Knezevic wrote: > Hi Dmitry, > > OK, good to hear we're seeing the same behavior for the example. 
> > Regarding this comment: > > > libMesh needs to change the way configure extracts PETSc information -- >> configuration data were moved: >> conf --> lib/petsc-conf >> ${PETSC_ARCH}/conf --> ${PETSC_ARCH}/lib/petsc-conf >> >> At one point I started looking at m4/petsc.m4, but that got put on the >> back burner. For now making the relevant symlinks by hand lets you >> configure and build libMesh with petsc/master. >> > > > So do you suggest that the next step here is to build libmesh against > petsc/master so that we can try your PETSc pull request that implements > MatReset() to see if that gets this example working? > > David > > > > > >> On Mon Feb 23 2015 at 9:15:44 AM David Knezevic < >> david.knezevic at akselos.com> wrote: >> >>> Hi Dmitry, >>> >>> Thanks very much for testing out the example. >>> >>> examples/systems_of_equations/ex8 works fine for me in serial, but it >>> fails for me if I run with more than 1 MPI process. Can you try it with, >>> say, 2 or 4 MPI processes? >>> >>> If we need access to MatReset in libMesh to get this to work, I'll be >>> happy to work on a libMesh pull request for that. >>> >>> David >>> >>> >>> -- >>> >>> David J. Knezevic | CTO >>> Akselos | 17 Bay State Road | Boston, MA | 02215 >>> Phone (office): +1-857-265-2238 >>> Phone (mobile): +1-617-599-4755 >>> Web: http://www.akselos.com >>> >>> >>> On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev >>> wrote: >>> >>>> David, >>>> >>>> What code are you running when you encounter this error? I'm trying to >>>> reproduce it and >>>> I tried examples/systems_of_equations/ex8, but it ran for me, >>>> ostensibly to completion. >>>> >>>> I have a small PETSc pull request that implements MatReset(), which >>>> passes a small PETSc test, >>>> but libMesh needs some work to be able to build against petsc/master >>>> because of some recent >>>> changes to PETSc. >>>> >>>> Dmitry. >>>> >>>> On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < >>>> david.knezevic at akselos.com> wrote: >>>> >>>>> Hi Barry, hi Dmitry, >>>>> >>>>> I set the matrix to BAIJ and back to AIJ, and the code got a bit >>>>> further. But I now run into the error pasted below (Note that I'm now using >>>>> "--with-debugging=1"): >>>>> >>>>> PETSC ERROR: --------------------- Error Message >>>>> -------------------------------------------------------------- >>>>> PETSC ERROR: Petsc has generated inconsistent data >>>>> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >>>>> PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>>> for trouble shooting. 
>>>>> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >>>>> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named david-Lenovo >>>>> by dknez Mon Feb 23 08:05:44 2015 >>>>> PETSC ERROR: Configure options --with-shared-libraries=1 >>>>> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >>>>> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >>>>> --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >>>>> --download-hypre >>>>> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>> PETSC ERROR: #3 MatSetValues() line 1136 in >>>>> /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c >>>>> PETSC ERROR: #4 add_matrix() line 765 in /home/dknez/software/libmesh- >>>>> src/src/numerics/petsc_matrix.C >>>>> ------------------------------------------------------------ >>>>> -------------- >>>>> >>>>> This occurs when I try to set some entries of the matrix. Do you have >>>>> any suggestions on how I can resolve this? >>>>> >>>>> Thanks! >>>>> David >>>>> >>>>> >>>>> >>>>> >>>>> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>>>>> david.knezevic at akselos.com> wrote: >>>>>>> > >>>>>>> > Hi Dmitry, >>>>>>> > >>>>>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) >>>>>>> followed by MatXAIJSetPreallocation(...), but unfortunately this still >>>>>>> gives me the same error as before: "nnz cannot be greater than row length: >>>>>>> local row 168 value 24 rowlength 0". >>>>>>> > >>>>>>> > I gather that the idea here is that MatSetType builds a new matrix >>>>>>> object, and then I should be able to pre-allocate for that new matrix >>>>>>> however I like, right? Was I supposed to clear the matrix object somehow >>>>>>> before calling MatSetType? (I didn't do any sort of clear operation.) >>>>>>> >>>>>>> If the type doesn't change then MatSetType() won't do anything. >>>>>>> You can try setting the type to BAIJ and then setting the type back to AIJ. >>>>>>> This may/should clear out the matrix. >>>>>>> >>>>>> Ah, yes. If the type is the same as before it does quit early, but >>>>>> changing the type and then back will clear out and rebuild the matrix. We >>>>>> need >>>>>> something like MatReset() to do the equivalent thing. >>>>>> >>>>>>> >>>>>>> > >>>>>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully that >>>>>>> will help shed some light on what's going wrong for me. >>>>>>> >>>>>> I think it's always a good idea to have a dbg build of PETSc when you >>>>>> doing things like these. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>>> >>>>>>> Don't bother, what I suggested won't work. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> >>>>>>> > >>>>>>> > Thanks, >>>>>>> > David >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev < >>>>>>> dkarpeev at gmail.com> wrote: >>>>>>> > David, >>>>>>> > It might be easier to just rebuild the whole matrix from scratch: >>>>>>> you would in effect be doing all that with disassembling and resetting the >>>>>>> preallocation. 
>>>>>>> > MatSetType(mat,MATMPIAIJ) >>>>>>> > or >>>>>>> > PetscObjectGetType((PetscObject)mat,&type); >>>>>>> > MatSetType(mat,type); >>>>>>> > followed by >>>>>>> > MatXAIJSetPreallocation(...); >>>>>>> > should do. >>>>>>> > Dmitry. >>>>>>> > >>>>>>> > >>>>>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>>>>>> wrote: >>>>>>> > >>>>>>> > Do not call for SeqAIJ matrix. Do not call before the first time >>>>>>> you have preallocated and put entries in the matrix and done the >>>>>>> MatAssemblyBegin/End() >>>>>>> > >>>>>>> > If it still crashes you'll need to try the debugger >>>>>>> > >>>>>>> > Barry >>>>>>> > >>>>>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>>>>> david.knezevic at akselos.com> wrote: >>>>>>> > > >>>>>>> > > Hi Barry, >>>>>>> > > >>>>>>> > > Thanks for your help, much appreciated. >>>>>>> > > >>>>>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>>>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>>>>> > > >>>>>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>>>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>>>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>>>>>> > > >>>>>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >>>>>>> (though I could rebuild PETSc in debug mode if you think that would help >>>>>>> figure out what's happening here). >>>>>>> > > >>>>>>> > > Thanks, >>>>>>> > > David >>>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith >>>>>>> wrote: >>>>>>> > > >>>>>>> > > David, >>>>>>> > > >>>>>>> > > This is an obscure little feature of MatMPIAIJ, each time >>>>>>> you change the sparsity pattern before you call the >>>>>>> MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat >>>>>>> mat). This is a private PETSc function so you need to provide your own >>>>>>> prototype for it above the function you use it in. >>>>>>> > > >>>>>>> > > Let us know if this resolves the problem. >>>>>>> > > >>>>>>> > > Barry >>>>>>> > > >>>>>>> > > We never really intended that people would call >>>>>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >>>>>>> > > >>>>>>> > > >>>>>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>>>>> david.knezevic at akselos.com> wrote: >>>>>>> > > > >>>>>>> > > > Hi all, >>>>>>> > > > >>>>>>> > > > I've implemented a solver for a contact problem using SNES. >>>>>>> The sparsity pattern of the jacobian matrix needs to change at each >>>>>>> nonlinear iteration (because the elements which are in contact can change), >>>>>>> so I tried to deal with this by calling MatSeqAIJSetPreallocation and >>>>>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>>>>> preallocation. >>>>>>> > > > >>>>>>> > > > This seems to work fine in serial, but with two or more MPI >>>>>>> processes I run into the error "nnz cannot be greater than row length", >>>>>>> e.g.: >>>>>>> > > > nnz cannot be greater than row length: local row 528 value 12 >>>>>>> rowlength 0 >>>>>>> > > > >>>>>>> > > > This error is from the call to >>>>>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>>>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>>>>> > > > >>>>>>> > > > Any guidance on what the problem might be would be most >>>>>>> appreciated. For example, I was wondering if there is a problem with >>>>>>> calling SetPreallocation on a matrix that has already been preallocated? 
>>>>>>> > > > >>>>>>> > > > Some notes: >>>>>>> > > > - I'm using PETSc via libMesh >>>>>>> > > > - The code that triggers this issue is available as a PR on >>>>>>> the libMesh github repo, in case anyone is interested: >>>>>>> https://github.com/libMesh/libmesh/pull/460/ >>>>>>> > > > - I can try to make a minimal pure-PETSc example that >>>>>>> reproduces this error, if that would be helpful. >>>>>>> > > > >>>>>>> > > > Many thanks, >>>>>>> > > > David >>>>>>> > > > >>>>>>> > > >>>>>>> > > >>>>>>> > >>>>>>> > >>>>>>> >>>>>>> >>>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mat-reset-hack.patch Type: application/octet-stream Size: 947 bytes Desc: not available URL: From david.knezevic at akselos.com Mon Feb 23 10:17:10 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Mon, 23 Feb 2015 11:17:10 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: Message-ID: OK, sounds good. Let me know if I can help with the digging. David On Mon, Feb 23, 2015 at 11:10 AM, Dmitry Karpeyev wrote: > I just tried building against petsc/master, but there needs to be more > work on libMesh before it can work with petsc/master: > the new VecLockPush()/Pop() stuff isn't respected by vector manipulation > in libMesh. > I put a hack equivalent to MatReset() into your branch (patch attached, in > case you want to take a look at it), > but it generates the same error in MatCreateColmap that you reported > earlier. It's odd that it occurs > on the second nonlinear iteration. I'll have to dig a bit deeper to > see what's going on. > > Dmitry. > > > On Mon Feb 23 2015 at 10:03:33 AM David Knezevic < > david.knezevic at akselos.com> wrote: > >> Hi Dmitry, >> >> OK, good to hear we're seeing the same behavior for the example. >> >> Regarding this comment: >> >> >> libMesh needs to change the way configure extracts PETSc information -- >>> configuration data were moved: >>> conf --> lib/petsc-conf >>> ${PETSC_ARCH}/conf --> ${PETSC_ARCH}/lib/petsc-conf >>> >>> At one point I started looking at m4/petsc.m4, but that got put on the >>> back burner. For now making the relevant symlinks by hand lets you >>> configure and build libMesh with petsc/master. >>> >> >> >> So do you suggest that the next step here is to build libmesh against >> petsc/master so that we can try your PETSc pull request that implements >> MatReset() to see if that gets this example working? >> >> David >> >> >> >> >> >>> On Mon Feb 23 2015 at 9:15:44 AM David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> >>>> Hi Dmitry, >>>> >>>> Thanks very much for testing out the example. >>>> >>>> examples/systems_of_equations/ex8 works fine for me in serial, but it >>>> fails for me if I run with more than 1 MPI process. Can you try it with, >>>> say, 2 or 4 MPI processes? >>>> >>>> If we need access to MatReset in libMesh to get this to work, I'll be >>>> happy to work on a libMesh pull request for that. >>>> >>>> David >>>> >>>> >>>> -- >>>> >>>> David J. Knezevic | CTO >>>> Akselos | 17 Bay State Road | Boston, MA | 02215 >>>> Phone (office): +1-857-265-2238 >>>> Phone (mobile): +1-617-599-4755 >>>> Web: http://www.akselos.com >>>> >>>> >>>> On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev >>>> wrote: >>>> >>>>> David, >>>>> >>>>> What code are you running when you encounter this error? 
I'm trying >>>>> to reproduce it and >>>>> I tried examples/systems_of_equations/ex8, but it ran for me, >>>>> ostensibly to completion. >>>>> >>>>> I have a small PETSc pull request that implements MatReset(), which >>>>> passes a small PETSc test, >>>>> but libMesh needs some work to be able to build against petsc/master >>>>> because of some recent >>>>> changes to PETSc. >>>>> >>>>> Dmitry. >>>>> >>>>> On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < >>>>> david.knezevic at akselos.com> wrote: >>>>> >>>>>> Hi Barry, hi Dmitry, >>>>>> >>>>>> I set the matrix to BAIJ and back to AIJ, and the code got a bit >>>>>> further. But I now run into the error pasted below (Note that I'm now using >>>>>> "--with-debugging=1"): >>>>>> >>>>>> PETSC ERROR: --------------------- Error Message >>>>>> -------------------------------------------------------------- >>>>>> PETSC ERROR: Petsc has generated inconsistent data >>>>>> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >>>>>> PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>>>> for trouble shooting. >>>>>> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >>>>>> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named >>>>>> david-Lenovo by dknez Mon Feb 23 08:05:44 2015 >>>>>> PETSC ERROR: Configure options --with-shared-libraries=1 >>>>>> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >>>>>> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >>>>>> --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >>>>>> --download-hypre >>>>>> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >>>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>>> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >>>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>>> PETSC ERROR: #3 MatSetValues() line 1136 in >>>>>> /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c >>>>>> PETSC ERROR: #4 add_matrix() line 765 in /home/dknez/software/libmesh- >>>>>> src/src/numerics/petsc_matrix.C >>>>>> ------------------------------------------------------------ >>>>>> -------------- >>>>>> >>>>>> This occurs when I try to set some entries of the matrix. Do you have >>>>>> any suggestions on how I can resolve this? >>>>>> >>>>>> Thanks! >>>>>> David >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev >>>>> > wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>> > >>>>>>>> > Hi Dmitry, >>>>>>>> > >>>>>>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) >>>>>>>> followed by MatXAIJSetPreallocation(...), but unfortunately this still >>>>>>>> gives me the same error as before: "nnz cannot be greater than row length: >>>>>>>> local row 168 value 24 rowlength 0". >>>>>>>> > >>>>>>>> > I gather that the idea here is that MatSetType builds a new >>>>>>>> matrix object, and then I should be able to pre-allocate for that new >>>>>>>> matrix however I like, right? Was I supposed to clear the matrix object >>>>>>>> somehow before calling MatSetType? (I didn't do any sort of clear >>>>>>>> operation.) >>>>>>>> >>>>>>>> If the type doesn't change then MatSetType() won't do anything. >>>>>>>> You can try setting the type to BAIJ and then setting the type back to AIJ. 
>>>>>>>> This may/should clear out the matrix. >>>>>>>> >>>>>>> Ah, yes. If the type is the same as before it does quit early, but >>>>>>> changing the type and then back will clear out and rebuild the matrix. We >>>>>>> need >>>>>>> something like MatReset() to do the equivalent thing. >>>>>>> >>>>>>>> >>>>>>>> > >>>>>>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully that >>>>>>>> will help shed some light on what's going wrong for me. >>>>>>>> >>>>>>> I think it's always a good idea to have a dbg build of PETSc when >>>>>>> you doing things like these. >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>>> >>>>>>>> Don't bother, what I suggested won't work. >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> >>>>>>>> > >>>>>>>> > Thanks, >>>>>>>> > David >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev < >>>>>>>> dkarpeev at gmail.com> wrote: >>>>>>>> > David, >>>>>>>> > It might be easier to just rebuild the whole matrix from scratch: >>>>>>>> you would in effect be doing all that with disassembling and resetting the >>>>>>>> preallocation. >>>>>>>> > MatSetType(mat,MATMPIAIJ) >>>>>>>> > or >>>>>>>> > PetscObjectGetType((PetscObject)mat,&type); >>>>>>>> > MatSetType(mat,type); >>>>>>>> > followed by >>>>>>>> > MatXAIJSetPreallocation(...); >>>>>>>> > should do. >>>>>>>> > Dmitry. >>>>>>>> > >>>>>>>> > >>>>>>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>>>>>>> wrote: >>>>>>>> > >>>>>>>> > Do not call for SeqAIJ matrix. Do not call before the first time >>>>>>>> you have preallocated and put entries in the matrix and done the >>>>>>>> MatAssemblyBegin/End() >>>>>>>> > >>>>>>>> > If it still crashes you'll need to try the debugger >>>>>>>> > >>>>>>>> > Barry >>>>>>>> > >>>>>>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>> > > >>>>>>>> > > Hi Barry, >>>>>>>> > > >>>>>>>> > > Thanks for your help, much appreciated. >>>>>>>> > > >>>>>>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>>>>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>>>>>> > > >>>>>>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>>>>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>>>>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>>>>>>> > > >>>>>>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug build >>>>>>>> (though I could rebuild PETSc in debug mode if you think that would help >>>>>>>> figure out what's happening here). >>>>>>>> > > >>>>>>>> > > Thanks, >>>>>>>> > > David >>>>>>>> > > >>>>>>>> > > >>>>>>>> > > >>>>>>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith < >>>>>>>> bsmith at mcs.anl.gov> wrote: >>>>>>>> > > >>>>>>>> > > David, >>>>>>>> > > >>>>>>>> > > This is an obscure little feature of MatMPIAIJ, each time >>>>>>>> you change the sparsity pattern before you call the >>>>>>>> MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat >>>>>>>> mat). This is a private PETSc function so you need to provide your own >>>>>>>> prototype for it above the function you use it in. >>>>>>>> > > >>>>>>>> > > Let us know if this resolves the problem. >>>>>>>> > > >>>>>>>> > > Barry >>>>>>>> > > >>>>>>>> > > We never really intended that people would call >>>>>>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. 
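Putting the suggestions quoted above together, a minimal sketch of the "rebuild the matrix from scratch" workaround: switch the type away from AIJ and back (which tears down and rebuilds the matrix), then re-preallocate with MatXAIJSetPreallocation. The dnnz/onnz arrays are illustrative assumptions, and the thread reports a follow-up "missing garray" error even after this step, so treat it as a sketch of the suggestion rather than a verified fix:

    #include <petscmat.h>

    /* Clear an already-assembled AIJ matrix and set a new preallocation. */
    PetscErrorCode ResetAndPreallocate(Mat A, const PetscInt dnnz[], const PetscInt onnz[])
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MatSetType(A, MATBAIJ);CHKERRQ(ierr);  /* type actually changes, so the old data is discarded */
      ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);   /* back to AIJ with a fresh, empty structure */
      ierr = MatXAIJSetPreallocation(A, 1, dnnz, onnz, NULL, NULL);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

A MatReset() that does this in one call is what the pull request mentioned elsewhere in the thread is about.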
>>>>>>>> > > >>>>>>>> > > >>>>>>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>> > > > >>>>>>>> > > > Hi all, >>>>>>>> > > > >>>>>>>> > > > I've implemented a solver for a contact problem using SNES. >>>>>>>> The sparsity pattern of the jacobian matrix needs to change at each >>>>>>>> nonlinear iteration (because the elements which are in contact can change), >>>>>>>> so I tried to deal with this by calling MatSeqAIJSetPreallocation and >>>>>>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>>>>>> preallocation. >>>>>>>> > > > >>>>>>>> > > > This seems to work fine in serial, but with two or more MPI >>>>>>>> processes I run into the error "nnz cannot be greater than row length", >>>>>>>> e.g.: >>>>>>>> > > > nnz cannot be greater than row length: local row 528 value 12 >>>>>>>> rowlength 0 >>>>>>>> > > > >>>>>>>> > > > This error is from the call to >>>>>>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>>>>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>>>>>> > > > >>>>>>>> > > > Any guidance on what the problem might be would be most >>>>>>>> appreciated. For example, I was wondering if there is a problem with >>>>>>>> calling SetPreallocation on a matrix that has already been preallocated? >>>>>>>> > > > >>>>>>>> > > > Some notes: >>>>>>>> > > > - I'm using PETSc via libMesh >>>>>>>> > > > - The code that triggers this issue is available as a PR on >>>>>>>> the libMesh github repo, in case anyone is interested: >>>>>>>> https://github.com/libMesh/libmesh/pull/460/ >>>>>>>> > > > - I can try to make a minimal pure-PETSc example that >>>>>>>> reproduces this error, if that would be helpful. >>>>>>>> > > > >>>>>>>> > > > Many thanks, >>>>>>>> > > > David >>>>>>>> > > > >>>>>>>> > > >>>>>>>> > > >>>>>>>> > >>>>>>>> > >>>>>>>> >>>>>>>> >>>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Mon Feb 23 10:33:31 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Mon, 23 Feb 2015 16:33:31 +0000 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: Message-ID: Miguel, It's not entirely clear what you are trying to do and what solver you intend to use eventually. The way DMNetwork is set up currently (following from DMPlex) is that you can assign degrees of freedom for each vertex and edge using DMNetworkAddNumVariables. Once the DM is setup and/or distributed, one create global vector(s) of the appropriate size using DMCreateGlobalVector. During a residual evaluation, one first gets the local vectors from the DM and then does a DMGlobalToLocalBegin/End to copy the contents of the global vector to the local vector. You can then use a VecGetArray() on this local vector to access the elements of the vector. While iterating over the local edges/vertices, DMNetworkGetVariableOffset gives you the location of the first element in the local vector for that particular edge/vertex point. This is how it is done in the DMNetwork example pf.c. Now back to your question, are you creating your "global petsc vector" using DMCreateGlobalVector()? Do you wish to have vectors of different sizes associated with a DMNetwork? Shri From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 09:27:51 -0600 To: Matthew Knepley > Cc: Shri >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel I'm iterating through local edges given in DMNetworkGetEdgeRange(). 
For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. Miguel On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya > wrote: Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? I think it would help to know what you want to accomplish. This is how you are proposing to do it.' If you just want to put data on edges, DMNetwork has a facility for that already. Thanks, Matt Miguel On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." > wrote: Miguel, One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. Shri From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya > wrote: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? I do not completely understand the question. If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? Matt Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya > wrote: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? One of the points of DMPlex is we do not require a global numbering. Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. 
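[Editorial sketch] A hedged sketch of the point-numbering suggestion above. It assumes the DMPlex call can be applied to the distributed network DM (or its underlying plex) and that unowned points follow the usual DMPlex convention of being stored as -(global+1); both details should be checked against your PETSc version. Error checking omitted.

    IS              globalNum;
    const PetscInt *gidx;
    PetscInt        pStart, pEnd, eStart, eEnd, e;

    DMPlexGetChart(dm, &pStart, &pEnd);
    DMNetworkGetEdgeRange(dm, &eStart, &eEnd);
    DMPlexCreatePointNumbering(dm, &globalNum);
    ISGetIndices(globalNum, &gidx);
    for (e = eStart; e < eEnd; ++e) {
      PetscInt g = gidx[e - pStart];
      if (g < 0) g = -(g + 1);             /* point owned by another process */
      /* g is the global number of local edge e */
    }
    ISRestoreIndices(globalNum, &gidx);
    ISDestroy(&globalNum);
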
Thanks, Matt Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Mon Feb 23 10:46:02 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 23 Feb 2015 10:46:02 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: Yes, that's what I need. If I added a variable to the edges with DMNetworkAddNumVariables (), my global vector that DMCreateGlobalVector () creates would have the edge variables that I don't want to have mixed with the variables I added in the vertices. I want them to be in a separate vectors. Therefore, I create a vector with DMCreateGlobalVector () with the variables I added in the vertices. Now I want another vector with "other" variables in the edges, and this vector has to be partitioned the same way the edges are. Thanks Miguel On Mon, Feb 23, 2015 at 10:33 AM, Abhyankar, Shrirang G. < abhyshr at mcs.anl.gov> wrote: > Miguel, > It's not entirely clear what you are trying to do and what solver you > intend to use eventually. The way DMNetwork is set up currently (following > from DMPlex) is that you can assign degrees of freedom for each vertex and > edge using DMNetworkAddNumVariables > . > Once the DM is setup and/or distributed, one create global vector(s) of the > appropriate size using DMCreateGlobalVector > . > During a residual evaluation, one first gets the local vectors from the DM > and then does a DMGlobalToLocalBegin/End > to > copy the contents of the global vector to the local vector. You can then > use a VecGetArray() on this local vector to access the elements of the > vector. While iterating over the local edges/vertices, > DMNetworkGetVariableOffset > gives > you the location of the first element in the local vector for that > particular edge/vertex point. This is how it is done in the DMNetwork > example pf.c. > > Now back to your question, are you creating your "global petsc vector" > using DMCreateGlobalVector()? Do you wish to have vectors of different > sizes associated with a DMNetwork? > > Shri > > > From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 09:27:51 -0600 > To: Matthew Knepley > Cc: Shri , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel > > I'm iterating through local edges given in DMNetworkGetEdgeRange(). 
For > each edge, I extract or modify its corresponding value in a global petsc > vector. Therefore that vector must have as many components as edges there > are in the network. To extract the value in the vector, I use VecGetArray() > and a variable counter that is incremented in each iteration. The array > that I obtain in VecGetArray() has to be the same size than the edge > range. That variable counter starts as 0, so if the array that I obtained > in VecGetArray() is x_array, x_array[0] must be the component in the > global vector that corresponds with the start edge given in > DMNetworkGetEdgeRange() > > I need that global petsc vector because I will use it in other > operations, it's not just data. Sorry for the confusion. Thanks in advance. > > Miguel > > > On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: > >> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Thanks, that will help me. Now what I would like to have is the >>> following: if I have two processors and ten edges, the partitioning results >>> in the first processor having the edges 0-4 and the second processor, the >>> edges 5-9. I also have a global vector with as many components as edges, >>> 10. How can I partition it so the first processor also has the 0-4 >>> components and the second, the 5-9 components of the vector? >>> >> I think it would help to know what you want to accomplish. This is how >> you are proposing to do it.' >> >> If you just want to put data on edges, DMNetwork has a facility for >> that already. >> >> Thanks, >> >> Matt >> >> >>> Miguel >>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." >>> wrote: >>> >>>> Miguel, >>>> One possible way is to store the global numbering of any edge/vertex >>>> in the "component" attached to it. Once the mesh gets partitioned, the >>>> components are also distributed so you can easily retrieve the global >>>> number of any edge/vertex by accessing its component. This is what is done >>>> in the DMNetwork example pf.c although the global numbering is not used for >>>> anything. >>>> >>>> Shri >>>> From: Matthew Knepley >>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>> To: Miguel Angel Salazar de Troya >>>> Cc: "petsc-users at mcs.anl.gov" >>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>> >>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>> use it to partition a vector with as many components as edges I have in my >>>>> network? >>>>> >>>> >>>> I do not completely understand the question. >>>> >>>> If you want a partition of the edges, you can use >>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>> are you trying to do? >>>> >>>> Matt >>>> >>>> >>>>> Thanks >>>>> Miguel >>>>> >>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Hi >>>>>>> >>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? 
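[Editorial sketch] The layout asked about here, one vector entry per edge with each rank owning exactly its local edge range, can be set up directly. A small sketch, with dm standing for the distributed DMNetwork and error checking omitted:

    PetscInt     eStart, eEnd;
    Vec          edgeVec;
    PetscScalar *x_array;

    DMNetworkGetEdgeRange(dm, &eStart, &eEnd);
    VecCreateMPI(PETSC_COMM_WORLD, eEnd - eStart, PETSC_DETERMINE, &edgeVec);
    VecGetArray(edgeVec, &x_array);
    /* x_array has exactly eEnd - eStart entries, so x_array[e - eStart]
       holds the value associated with local edge e */
    VecRestoreArray(edgeVec, &x_array);
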
>>>>>>> >>>>>> >>>>>> One of the points of DMPlex is we do not require a global >>>>>> numbering. Everything is numbered >>>>>> locally, and the PetscSF maps local numbers to local numbers in order >>>>>> to determine ownership. >>>>>> >>>>>> If you want to create a global numbering for some reason, you can >>>>>> using DMPlexCreatePointNumbering(). >>>>>> There are also cell and vertex versions that we use for output, so >>>>>> you could do it just for edges as well. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks >>>>>>> Miguel >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Mon Feb 23 11:20:14 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Mon, 23 Feb 2015 17:20:14 +0000 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: Message-ID: What's the issue with having a single global vector? The global/local vector will have all the variables for the edges followed by all the variables for the vertices. In any case, DMNetwork does not support having separate global vectors for edges and vertices. Shri From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 10:46:02 -0600 To: Shri > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel Yes, that's what I need. If I added a variable to the edges with DMNetworkAddNumVariables(), my global vector that DMCreateGlobalVector() creates would have the edge variables that I don't want to have mixed with the variables I added in the vertices. I want them to be in a separate vectors. Therefore, I create a vector with DMCreateGlobalVector() with the variables I added in the vertices. Now I want another vector with "other" variables in the edges, and this vector has to be partitioned the same way the edges are. Thanks Miguel On Mon, Feb 23, 2015 at 10:33 AM, Abhyankar, Shrirang G. 
> wrote: Miguel, It's not entirely clear what you are trying to do and what solver you intend to use eventually. The way DMNetwork is set up currently (following from DMPlex) is that you can assign degrees of freedom for each vertex and edge using DMNetworkAddNumVariables. Once the DM is setup and/or distributed, one create global vector(s) of the appropriate size using DMCreateGlobalVector. During a residual evaluation, one first gets the local vectors from the DM and then does a DMGlobalToLocalBegin/End to copy the contents of the global vector to the local vector. You can then use a VecGetArray() on this local vector to access the elements of the vector. While iterating over the local edges/vertices, DMNetworkGetVariableOffset gives you the location of the first element in the local vector for that particular edge/vertex point. This is how it is done in the DMNetwork example pf.c. Now back to your question, are you creating your "global petsc vector" using DMCreateGlobalVector()? Do you wish to have vectors of different sizes associated with a DMNetwork? Shri From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 09:27:51 -0600 To: Matthew Knepley > Cc: Shri >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel I'm iterating through local edges given in DMNetworkGetEdgeRange(). For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. Miguel On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya > wrote: Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? I think it would help to know what you want to accomplish. This is how you are proposing to do it.' If you just want to put data on edges, DMNetwork has a facility for that already. Thanks, Matt Miguel On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." > wrote: Miguel, One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. 
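[Editorial sketch] A hedged sketch of the component-based bookkeeping described above, loosely following the pf.c pattern. The DMNetwork component routines have changed signatures across releases, so treat this as an illustration rather than exact code; the EdgeTag struct and all names are invented for the example. Error checking omitted.

    typedef struct { PetscInt gid; } EdgeTag;

    PetscInt  tagKey, eStart, eEnd, e;
    EdgeTag  *tags;

    DMNetworkRegisterComponent(dm, "edgetag", sizeof(EdgeTag), &tagKey);
    DMNetworkGetEdgeRange(dm, &eStart, &eEnd);       /* on the network before distribution */
    PetscMalloc1(eEnd - eStart, &tags);
    for (e = eStart; e < eEnd; ++e) {
      tags[e - eStart].gid = e;                      /* record the original edge number */
      DMNetworkAddComponent(dm, e, tagKey, &tags[e - eStart]);
    }
    /* after DMNetworkDistribute() the tag travels with its edge and can be read
       back through the DMNetwork component query routines */
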
Shri From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya > wrote: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? I do not completely understand the question. If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? Matt Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya > wrote: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? One of the points of DMPlex is we do not require a global numbering. Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. Thanks, Matt Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Feb 23 11:34:00 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Feb 2015 11:34:00 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: > On Feb 23, 2015, at 11:20 AM, Abhyankar, Shrirang G. wrote: > > What's the issue with having a single global vector? The global/local vector will have all the variables for the edges followed by all the variables for the vertices. > > In any case, DMNetwork does not support having separate global vectors for edges and vertices. 
If you really want them in a separate vector global you can simply create a new global vector and pull out the edge variables (or vertex variables) from the original big global vector. The "pulling out" is completely local. You can also "push back" values from the smaller "edge" global vector to the original big global vector. If you also want separate local vectors you can do the same thing on the original bigger local vectors pulling out just the edges or vertices. Barry > > Shri > > From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 10:46:02 -0600 > To: Shri > Cc: Matthew Knepley , "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel > > Yes, that's what I need. If I added a variable to the edges with DMNetworkAddNumVariables(), my global vector that DMCreateGlobalVector() creates would have the edge variables that I don't want to have mixed with the variables I added in the vertices. I want them to be in a separate vectors. Therefore, I create a vector with DMCreateGlobalVector() with the variables I added in the vertices. Now I want another vector with "other" variables in the edges, and this vector has to be partitioned the same way the edges are. > > Thanks > Miguel > > On Mon, Feb 23, 2015 at 10:33 AM, Abhyankar, Shrirang G. wrote: > Miguel, > It's not entirely clear what you are trying to do and what solver you intend to use eventually. The way DMNetwork is set up currently (following from DMPlex) is that you can assign degrees of freedom for each vertex and edge using DMNetworkAddNumVariables. Once the DM is setup and/or distributed, one create global vector(s) of the appropriate size using DMCreateGlobalVector. During a residual evaluation, one first gets the local vectors from the DM and then does a DMGlobalToLocalBegin/End to copy the contents of the global vector to the local vector. You can then use a VecGetArray() on this local vector to access the elements of the vector. While iterating over the local edges/vertices, DMNetworkGetVariableOffset gives you the location of the first element in the local vector for that particular edge/vertex point. This is how it is done in the DMNetwork example pf.c. > > Now back to your question, are you creating your "global petsc vector" using DMCreateGlobalVector()? Do you wish to have vectors of different sizes associated with a DMNetwork? > > Shri > > > From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 09:27:51 -0600 > To: Matthew Knepley > Cc: Shri , "petsc-users at mcs.anl.gov" > > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel > > I'm iterating through local edges given in DMNetworkGetEdgeRange(). For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() > > I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. 
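[Editorial sketch] A sketch of the pull-out/push-back Barry describes, using an explicit scatter. isEdgeDofs is an assumed IS that lists, on each rank, the global positions in the big DMNetwork vector belonging to its locally owned edge variables; Xbig is the big global vector. Error checking omitted.

    Vec         edgeVec;
    VecScatter  scat;
    PetscInt    nloc;

    ISGetLocalSize(isEdgeDofs, &nloc);
    VecCreateMPI(PETSC_COMM_WORLD, nloc, PETSC_DETERMINE, &edgeVec);
    VecScatterCreate(Xbig, isEdgeDofs, edgeVec, NULL, &scat);
    /* pull out: each rank lists only indices it owns, so this is a local copy */
    VecScatterBegin(scat, Xbig, edgeVec, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(scat, Xbig, edgeVec, INSERT_VALUES, SCATTER_FORWARD);
    /* ... work with edgeVec ... */
    /* push back into the big vector */
    VecScatterBegin(scat, edgeVec, Xbig, INSERT_VALUES, SCATTER_REVERSE);
    VecScatterEnd(scat, edgeVec, Xbig, INSERT_VALUES, SCATTER_REVERSE);
    VecScatterDestroy(&scat);
    VecDestroy(&edgeVec);
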
> > Miguel > > > On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley wrote: > On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya wrote: > Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? > > I think it would help to know what you want to accomplish. This is how you are proposing to do it.' > > If you just want to put data on edges, DMNetwork has a facility for that already. > > Thanks, > > Matt > > Miguel > > On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." wrote: > Miguel, > One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. > > Shri > From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 > To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel > > On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya wrote: > Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? > > I do not completely understand the question. > > If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What > are you trying to do? > > Matt > > Thanks > Miguel > > On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley wrote: > On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya wrote: > Hi > > I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? > > One of the points of DMPlex is we do not require a global numbering. Everything is numbered > locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. > > If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). > There are also cell and vertex versions that we use for output, so you could do it just for edges as well. > > Thanks, > > Matt > > Thanks > Miguel > > -- > Miguel Angel Salazar de Troya > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener > > > > -- > Miguel Angel Salazar de Troya > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > -- > Miguel Angel Salazar de Troya > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > > > > -- > Miguel Angel Salazar de Troya > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > From knepley at gmail.com Mon Feb 23 11:37:30 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Feb 2015 11:37:30 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > I'm iterating through local edges given in DMNetworkGetEdgeRange(). For > each edge, I extract or modify its corresponding value in a global petsc > vector. Therefore that vector must have as many components as edges there > are in the network. To extract the value in the vector, I use VecGetArray() > and a variable counter that is incremented in each iteration. The array > that I obtain in VecGetArray() has to be the same size than the edge > range. That variable counter starts as 0, so if the array that I obtained > in VecGetArray() is x_array, x_array[0] must be the component in the > global vector that corresponds with the start edge given in > DMNetworkGetEdgeRange() > > I need that global petsc vector because I will use it in other operations, > it's not just data. Sorry for the confusion. Thanks in advance. > This sounds like an assembly operation. The usual paradigm is to compute in the local space, and then communicate to get to the global space. So you would make a PetscSection that had 1 (or some) unknowns on each cell (edge) and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to do this. Does that make sense? Thanks, Matt > Miguel > > > On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: > >> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Thanks, that will help me. Now what I would like to have is the >>> following: if I have two processors and ten edges, the partitioning results >>> in the first processor having the edges 0-4 and the second processor, the >>> edges 5-9. I also have a global vector with as many components as edges, >>> 10. How can I partition it so the first processor also has the 0-4 >>> components and the second, the 5-9 components of the vector? >>> >> I think it would help to know what you want to accomplish. This is how >> you are proposing to do it.' >> >> If you just want to put data on edges, DMNetwork has a facility for that >> already. >> >> Thanks, >> >> Matt >> >> >>> Miguel >>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." 
>>> wrote: >>> >>>> Miguel, >>>> One possible way is to store the global numbering of any edge/vertex >>>> in the "component" attached to it. Once the mesh gets partitioned, the >>>> components are also distributed so you can easily retrieve the global >>>> number of any edge/vertex by accessing its component. This is what is done >>>> in the DMNetwork example pf.c although the global numbering is not used for >>>> anything. >>>> >>>> Shri >>>> From: Matthew Knepley >>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>> To: Miguel Angel Salazar de Troya >>>> Cc: "petsc-users at mcs.anl.gov" >>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>> >>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>> use it to partition a vector with as many components as edges I have in my >>>>> network? >>>>> >>>> >>>> I do not completely understand the question. >>>> >>>> If you want a partition of the edges, you can use >>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>> are you trying to do? >>>> >>>> Matt >>>> >>>> >>>>> Thanks >>>>> Miguel >>>>> >>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Hi >>>>>>> >>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>> >>>>>> >>>>>> One of the points of DMPlex is we do not require a global >>>>>> numbering. Everything is numbered >>>>>> locally, and the PetscSF maps local numbers to local numbers in order >>>>>> to determine ownership. >>>>>> >>>>>> If you want to create a global numbering for some reason, you can >>>>>> using DMPlexCreatePointNumbering(). >>>>>> There are also cell and vertex versions that we use for output, so >>>>>> you could do it just for edges as well. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks >>>>>>> Miguel >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. 
>>>> -- Norbert Wiener >>>> >>>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Mon Feb 23 13:40:35 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 23 Feb 2015 13:40:35 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: Wouldn't including the edge variables in the global vector make the code slower? I'm using the global vector in a TS, using one of the explicit RK schemes. The edge variables would not be updated in the RHSFunction evaluation. I only change the edge variables in the TSUpdate. If the global vector had the edge variables, it would be a much larger vector, and all the vector operations performed by the TS would be slower. Although the vector F returned by the RHSFunction would be zero in the edge variable components. I guess that being the vector sparse that would not be a problem. I think I'm more interested in the PetscSection approach because it might require less modifications in my code. However, I don't know how I could do this. Maybe something like this? PetscSectionCreate(PETSC_COMM_WORLD, &s); PetscSectionSetNumFields(s, 1); PetscSectionSetFieldComponents(s, 0, 1); // Now to set the chart, I pick the edge range DMNetworkGetEdgeRange(dm, & eStart, & eEnd PetscSectionSetChart(s, eStart, eEnd); for(PetscInt e = eStart; c < eEnd; ++e) { PetscSectionSetDof(s, e, 1); PetscSectionSetFieldDof(s, e, 1, 1); } PetscSectionSetUp(s); Now in the manual I see this: DMSetDefaultSection(dm, s); DMGetLocalVector(dm, &localVec); DMGetGlobalVector(dm, &globalVec); Setting up the default section in the DM would interfere with the section already set up with the variables in the vertices? Thanks a lot for your responses. On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley wrote: > On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> I'm iterating through local edges given in DMNetworkGetEdgeRange(). For >> each edge, I extract or modify its corresponding value in a global petsc >> vector. Therefore that vector must have as many components as edges there >> are in the network. To extract the value in the vector, I use VecGetArray() >> and a variable counter that is incremented in each iteration. The array >> that I obtain in VecGetArray() has to be the same size than the edge >> range. That variable counter starts as 0, so if the array that I obtained >> in VecGetArray() is x_array, x_array[0] must be the component in the >> global vector that corresponds with the start edge given in >> DMNetworkGetEdgeRange() >> >> I need that global petsc vector because I will use it in other >> operations, it's not just data. Sorry for the confusion. Thanks in advance. >> > > This sounds like an assembly operation. 
The usual paradigm is to compute > in the local space, and then communicate to get to the global space. So you > would make a PetscSection that had 1 (or some) unknowns on each cell (edge) > and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to > do this. > > Does that make sense? > > Thanks, > > Matt > > >> Miguel >> >> >> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >> wrote: >> >>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Thanks, that will help me. Now what I would like to have is the >>>> following: if I have two processors and ten edges, the partitioning results >>>> in the first processor having the edges 0-4 and the second processor, the >>>> edges 5-9. I also have a global vector with as many components as edges, >>>> 10. How can I partition it so the first processor also has the 0-4 >>>> components and the second, the 5-9 components of the vector? >>>> >>> I think it would help to know what you want to accomplish. This is how >>> you are proposing to do it.' >>> >>> If you just want to put data on edges, DMNetwork has a facility for that >>> already. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Miguel >>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." >>>> wrote: >>>> >>>>> Miguel, >>>>> One possible way is to store the global numbering of any >>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>> partitioned, the components are also distributed so you can easily retrieve >>>>> the global number of any edge/vertex by accessing its component. This is >>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>> not used for anything. >>>>> >>>>> Shri >>>>> From: Matthew Knepley >>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>> To: Miguel Angel Salazar de Troya >>>>> Cc: "petsc-users at mcs.anl.gov" >>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>> >>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>> use it to partition a vector with as many components as edges I have in my >>>>>> network? >>>>>> >>>>> >>>>> I do not completely understand the question. >>>>> >>>>> If you want a partition of the edges, you can use >>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>> are you trying to do? >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks >>>>>> Miguel >>>>>> >>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> Hi >>>>>>>> >>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>> >>>>>>> >>>>>>> One of the points of DMPlex is we do not require a global >>>>>>> numbering. Everything is numbered >>>>>>> locally, and the PetscSF maps local numbers to local numbers in >>>>>>> order to determine ownership. >>>>>>> >>>>>>> If you want to create a global numbering for some reason, you can >>>>>>> using DMPlexCreatePointNumbering(). 
>>>>>>> There are also cell and vertex versions that we use for output, so >>>>>>> you could do it just for edges as well. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Thanks >>>>>>>> Miguel >>>>>>>> >>>>>>>> -- >>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>> Graduate Research Assistant >>>>>>>> Department of Mechanical Science and Engineering >>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>> (217) 550-2360 >>>>>>>> salaza11 at illinois.edu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Feb 23 14:05:49 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Feb 2015 14:05:49 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Wouldn't including the edge variables in the global vector make the code > slower? I'm using the global vector in a TS, using one of the explicit RK > schemes. The edge variables would not be updated in the RHSFunction > evaluation. I only change the edge variables in the TSUpdate. If the global > vector had the edge variables, it would be a much larger vector, and all > the vector operations performed by the TS would be slower. Although the > vector F returned by the RHSFunction would be zero in the edge variable > components. I guess that being the vector sparse that would not be a > problem. > > I think I'm more interested in the PetscSection approach because it might > require less modifications in my code. However, I don't know how I could do > this. Maybe something like this? 
> > PetscSectionCreate(PETSC_COMM_WORLD, &s); > PetscSectionSetNumFields(s, 1); > PetscSectionSetFieldComponents(s, 0, 1); > > // Now to set the chart, I pick the edge range > > DMNetworkGetEdgeRange(dm, & eStart, & eEnd > > PetscSectionSetChart(s, eStart, eEnd); > > for(PetscInt e = eStart; c < eEnd; ++e) { > PetscSectionSetDof(s, e, 1); > PetscSectionSetFieldDof(s, e, 1, 1); > It should be PetscSectionSetFieldDof(s, e, 0, 1); > } > PetscSectionSetUp(s); > > Now in the manual I see this: > First you want to do: DMClone(dm, &dmEdge); and then use dmEdge below. > DMSetDefaultSection(dm, s); > DMGetLocalVector(dm, &localVec); > DMGetGlobalVector(dm, &globalVec); > > Setting up the default section in the DM would interfere with the section > already set up with the variables in the vertices? > Yep, thats why you would use a clone. Thanks, Matt > Thanks a lot for your responses. > > > > On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley > wrote: > >> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). For >>> each edge, I extract or modify its corresponding value in a global petsc >>> vector. Therefore that vector must have as many components as edges there >>> are in the network. To extract the value in the vector, I use VecGetArray() >>> and a variable counter that is incremented in each iteration. The array >>> that I obtain in VecGetArray() has to be the same size than the edge >>> range. That variable counter starts as 0, so if the array that I obtained >>> in VecGetArray() is x_array, x_array[0] must be the component in the >>> global vector that corresponds with the start edge given in >>> DMNetworkGetEdgeRange() >>> >>> I need that global petsc vector because I will use it in other >>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>> >> >> This sounds like an assembly operation. The usual paradigm is to compute >> in the local space, and then communicate to get to the global space. So you >> would make a PetscSection that had 1 (or some) unknowns on each cell (edge) >> and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to >> do this. >> >> Does that make sense? >> >> Thanks, >> >> Matt >> >> >>> Miguel >>> >>> >>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >>> wrote: >>> >>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Thanks, that will help me. Now what I would like to have is the >>>>> following: if I have two processors and ten edges, the partitioning results >>>>> in the first processor having the edges 0-4 and the second processor, the >>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>> components and the second, the 5-9 components of the vector? >>>>> >>>> I think it would help to know what you want to accomplish. This is how >>>> you are proposing to do it.' >>>> >>>> If you just want to put data on edges, DMNetwork has a facility for >>>> that already. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Miguel >>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." >>>>> wrote: >>>>> >>>>>> Miguel, >>>>>> One possible way is to store the global numbering of any >>>>>> edge/vertex in the "component" attached to it. 
Once the mesh gets >>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>> not used for anything. >>>>>> >>>>>> Shri >>>>>> From: Matthew Knepley >>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>> To: Miguel Angel Salazar de Troya >>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>> >>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>> network? >>>>>>> >>>>>> >>>>>> I do not completely understand the question. >>>>>> >>>>>> If you want a partition of the edges, you can use >>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>> are you trying to do? >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks >>>>>>> Miguel >>>>>>> >>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley >>>>>> > wrote: >>>>>>> >>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi >>>>>>>>> >>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>> >>>>>>>> >>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>> numbering. Everything is numbered >>>>>>>> locally, and the PetscSF maps local numbers to local numbers in >>>>>>>> order to determine ownership. >>>>>>>> >>>>>>>> If you want to create a global numbering for some reason, you can >>>>>>>> using DMPlexCreatePointNumbering(). >>>>>>>> There are also cell and vertex versions that we use for output, so >>>>>>>> you could do it just for edges as well. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> Thanks >>>>>>>>> Miguel >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>> Graduate Research Assistant >>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>> (217) 550-2360 >>>>>>>>> salaza11 at illinois.edu >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. 
>>>>>> -- Norbert Wiener >>>>>> >>>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Mon Feb 23 14:15:00 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Mon, 23 Feb 2015 14:15:00 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: Thanks a lot, the partition should be done before setting up the section, right? Miguel On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley wrote: > On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Wouldn't including the edge variables in the global vector make the code >> slower? I'm using the global vector in a TS, using one of the explicit RK >> schemes. The edge variables would not be updated in the RHSFunction >> evaluation. I only change the edge variables in the TSUpdate. If the global >> vector had the edge variables, it would be a much larger vector, and all >> the vector operations performed by the TS would be slower. Although the >> vector F returned by the RHSFunction would be zero in the edge variable >> components. I guess that being the vector sparse that would not be a >> problem. >> >> I think I'm more interested in the PetscSection approach because it might >> require less modifications in my code. However, I don't know how I could do >> this. Maybe something like this? >> >> PetscSectionCreate(PETSC_COMM_WORLD, &s); >> PetscSectionSetNumFields(s, 1); >> PetscSectionSetFieldComponents(s, 0, 1); >> >> // Now to set the chart, I pick the edge range >> >> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >> >> PetscSectionSetChart(s, eStart, eEnd); >> >> for(PetscInt e = eStart; c < eEnd; ++e) { >> PetscSectionSetDof(s, e, 1); >> PetscSectionSetFieldDof(s, e, 1, 1); >> > > It should be PetscSectionSetFieldDof(s, e, 0, 1); > > >> } >> PetscSectionSetUp(s); >> >> Now in the manual I see this: >> > > First you want to do: > > DMClone(dm, &dmEdge); > > and then use dmEdge below. > > >> DMSetDefaultSection(dm, s); >> DMGetLocalVector(dm, &localVec); >> DMGetGlobalVector(dm, &globalVec); >> >> Setting up the default section in the DM would interfere with the section >> already set up with the variables in the vertices? >> > > Yep, thats why you would use a clone. > > Thanks, > > Matt > > >> Thanks a lot for your responses. 
>> >> >> >> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley >> wrote: >> >>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>> For each edge, I extract or modify its corresponding value in a global >>>> petsc vector. Therefore that vector must have as many components as edges >>>> there are in the network. To extract the value in the vector, I use >>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>> The array that I obtain in VecGetArray() has to be the same size than >>>> the edge range. That variable counter starts as 0, so if the array that I >>>> obtained in VecGetArray() is x_array, x_array[0] must be the component >>>> in the global vector that corresponds with the start edge given in >>>> DMNetworkGetEdgeRange() >>>> >>>> I need that global petsc vector because I will use it in other >>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>> >>> >>> This sounds like an assembly operation. The usual paradigm is to compute >>> in the local space, and then communicate to get to the global space. So you >>> would make a PetscSection that had 1 (or some) unknowns on each cell (edge) >>> and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to >>> do this. >>> >>> Does that make sense? >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Miguel >>>> >>>> >>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>> components and the second, the 5-9 components of the vector? >>>>>> >>>>> I think it would help to know what you want to accomplish. This is how >>>>> you are proposing to do it.' >>>>> >>>>> If you just want to put data on edges, DMNetwork has a facility for >>>>> that already. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Miguel >>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>> >>>>>>> Miguel, >>>>>>> One possible way is to store the global numbering of any >>>>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>> not used for anything. >>>>>>> >>>>>>> Shri >>>>>>> From: Matthew Knepley >>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>> To: Miguel Angel Salazar de Troya >>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>> >>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> Thanks. 
Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>> network? >>>>>>>> >>>>>>> >>>>>>> I do not completely understand the question. >>>>>>> >>>>>>> If you want a partition of the edges, you can use >>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>> are you trying to do? >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Thanks >>>>>>>> Miguel >>>>>>>> >>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>> knepley at gmail.com> wrote: >>>>>>>> >>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya < >>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi >>>>>>>>>> >>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>>> >>>>>>>>> >>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>> numbering. Everything is numbered >>>>>>>>> locally, and the PetscSF maps local numbers to local numbers in >>>>>>>>> order to determine ownership. >>>>>>>>> >>>>>>>>> If you want to create a global numbering for some reason, you >>>>>>>>> can using DMPlexCreatePointNumbering(). >>>>>>>>> There are also cell and vertex versions that we use for output, so >>>>>>>>> you could do it just for edges as well. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> Matt >>>>>>>>> >>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> Miguel >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>> Graduate Research Assistant >>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>> (217) 550-2360 >>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>> experiments lead. >>>>>>>>> -- Norbert Wiener >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>> Graduate Research Assistant >>>>>>>> Department of Mechanical Science and Engineering >>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>> (217) 550-2360 >>>>>>>> salaza11 at illinois.edu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. 
>>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Mon Feb 23 15:11:06 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Mon, 23 Feb 2015 21:11:06 +0000 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: Message-ID: I think you should call DMClone after partitioning (DMDistribute). Shri From: Miguel Angel Salazar de Troya > Date: Mon, 23 Feb 2015 14:15:00 -0600 To: Matthew Knepley > Cc: Shri >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel Thanks a lot, the partition should be done before setting up the section, right? Miguel On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya > wrote: Wouldn't including the edge variables in the global vector make the code slower? I'm using the global vector in a TS, using one of the explicit RK schemes. The edge variables would not be updated in the RHSFunction evaluation. I only change the edge variables in the TSUpdate. If the global vector had the edge variables, it would be a much larger vector, and all the vector operations performed by the TS would be slower. Although the vector F returned by the RHSFunction would be zero in the edge variable components. I guess that being the vector sparse that would not be a problem. I think I'm more interested in the PetscSection approach because it might require less modifications in my code. However, I don't know how I could do this. Maybe something like this? PetscSectionCreate(PETSC_COMM_WORLD, &s); PetscSectionSetNumFields(s, 1); PetscSectionSetFieldComponents(s, 0, 1); // Now to set the chart, I pick the edge range DMNetworkGetEdgeRange(dm, & eStart, & eEnd PetscSectionSetChart(s, eStart, eEnd); for(PetscInt e = eStart; c < eEnd; ++e) { PetscSectionSetDof(s, e, 1); PetscSectionSetFieldDof(s, e, 1, 1); It should be PetscSectionSetFieldDof(s, e, 0, 1); } PetscSectionSetUp(s); Now in the manual I see this: First you want to do: DMClone(dm, &dmEdge); and then use dmEdge below. DMSetDefaultSection(dm, s); DMGetLocalVector(dm, &localVec); DMGetGlobalVector(dm, &globalVec); Setting up the default section in the DM would interfere with the section already set up with the variables in the vertices? Yep, thats why you would use a clone. Thanks, Matt Thanks a lot for your responses. 
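The ordering Shri points out matters: the clone has to be taken from the network as it exists after distribution, so that the original DM and its clone share the same partition. A rough sketch of the sequence (the distribution step is only indicated in a comment, and X/edgeData are illustrative names):

    /* 1. build the DMNetwork dm, register components, DMSetUp(dm)           */
    /* 2. distribute it, e.g. with DMNetworkDistribute(), so each rank holds
          its local edges and vertices                                        */
    /* 3. only then clone it and attach the per-edge section to the clone     */
    Vec X, edgeData;
    ierr = DMClone(dm, &dmEdge);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(dm,     &X);CHKERRQ(ierr);        /* vertex unknowns used by the TS     */
    ierr = DMCreateGlobalVector(dmEdge, &edgeData);CHKERRQ(ierr); /* one value per edge, same partition */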
On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya > wrote: I'm iterating through local edges given in DMNetworkGetEdgeRange(). For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. This sounds like an assembly operation. The usual paradigm is to compute in the local space, and then communicate to get to the global space. So you would make a PetscSection that had 1 (or some) unknowns on each cell (edge) and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to do this. Does that make sense? Thanks, Matt Miguel On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya > wrote: Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? I think it would help to know what you want to accomplish. This is how you are proposing to do it.' If you just want to put data on edges, DMNetwork has a facility for that already. Thanks, Matt Miguel On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." > wrote: Miguel, One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. Shri From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya > wrote: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? I do not completely understand the question. If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? Matt Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya > wrote: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? 
So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? One of the points of DMPlex is we do not require a global numbering. Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. Thanks, Matt Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Feb 23 15:24:54 2015 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Feb 2015 15:24:54 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Thanks a lot, the partition should be done before setting up the section, > right? > The partition will be automatic. All you have to do is make the local section. The DM is already partitioned, and the Section will inherit that. Matt > Miguel > > On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley > wrote: > >> On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Wouldn't including the edge variables in the global vector make the code >>> slower? I'm using the global vector in a TS, using one of the explicit RK >>> schemes. The edge variables would not be updated in the RHSFunction >>> evaluation. I only change the edge variables in the TSUpdate. 
If the global >>> vector had the edge variables, it would be a much larger vector, and all >>> the vector operations performed by the TS would be slower. Although the >>> vector F returned by the RHSFunction would be zero in the edge variable >>> components. I guess that being the vector sparse that would not be a >>> problem. >>> >>> I think I'm more interested in the PetscSection approach because it >>> might require less modifications in my code. However, I don't know how I >>> could do this. Maybe something like this? >>> >>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>> PetscSectionSetNumFields(s, 1); >>> PetscSectionSetFieldComponents(s, 0, 1); >>> >>> // Now to set the chart, I pick the edge range >>> >>> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >>> >>> PetscSectionSetChart(s, eStart, eEnd); >>> >>> for(PetscInt e = eStart; c < eEnd; ++e) { >>> PetscSectionSetDof(s, e, 1); >>> PetscSectionSetFieldDof(s, e, 1, 1); >>> >> >> It should be PetscSectionSetFieldDof(s, e, 0, 1); >> >> >>> } >>> PetscSectionSetUp(s); >>> >>> Now in the manual I see this: >>> >> >> First you want to do: >> >> DMClone(dm, &dmEdge); >> >> and then use dmEdge below. >> >> >>> DMSetDefaultSection(dm, s); >>> DMGetLocalVector(dm, &localVec); >>> DMGetGlobalVector(dm, &globalVec); >>> >>> Setting up the default section in the DM would interfere with the >>> section already set up with the variables in the vertices? >>> >> >> Yep, thats why you would use a clone. >> >> Thanks, >> >> Matt >> >> >>> Thanks a lot for your responses. >>> >>> >>> >>> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley >>> wrote: >>> >>>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>>> For each edge, I extract or modify its corresponding value in a global >>>>> petsc vector. Therefore that vector must have as many components as edges >>>>> there are in the network. To extract the value in the vector, I use >>>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>>> The array that I obtain in VecGetArray() has to be the same size than >>>>> the edge range. That variable counter starts as 0, so if the array that I >>>>> obtained in VecGetArray() is x_array, x_array[0] must be the >>>>> component in the global vector that corresponds with the start edge given >>>>> in DMNetworkGetEdgeRange() >>>>> >>>>> I need that global petsc vector because I will use it in other >>>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>>> >>>> >>>> This sounds like an assembly operation. The usual paradigm is to >>>> compute in the local space, and then communicate to get to the global >>>> space. So you would make a PetscSection that had 1 (or some) unknowns on >>>> each cell (edge) and then you can use DMCreateGlobal/LocalVector() and >>>> DMLocalToGlobal() to do this. >>>> >>>> Does that make sense? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Miguel >>>>> >>>>> >>>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>>> edges 5-9. 
I also have a global vector with as many components as edges, >>>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>>> components and the second, the 5-9 components of the vector? >>>>>>> >>>>>> I think it would help to know what you want to accomplish. This is >>>>>> how you are proposing to do it.' >>>>>> >>>>>> If you just want to put data on edges, DMNetwork has a facility for >>>>>> that already. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Miguel >>>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>>> >>>>>>>> Miguel, >>>>>>>> One possible way is to store the global numbering of any >>>>>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>>> not used for anything. >>>>>>>> >>>>>>>> Shri >>>>>>>> From: Matthew Knepley >>>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>>> To: Miguel Angel Salazar de Troya >>>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>>> >>>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>> >>>>>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>>> network? >>>>>>>>> >>>>>>>> >>>>>>>> I do not completely understand the question. >>>>>>>> >>>>>>>> If you want a partition of the edges, you can use >>>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>>> are you trying to do? >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> Thanks >>>>>>>>> Miguel >>>>>>>>> >>>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Hi >>>>>>>>>>> >>>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns the >>>>>>>>>>> local indices for the edge range. Is there any way to obtain the global >>>>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>>> numbering. Everything is numbered >>>>>>>>>> locally, and the PetscSF maps local numbers to local numbers in >>>>>>>>>> order to determine ownership. >>>>>>>>>> >>>>>>>>>> If you want to create a global numbering for some reason, you >>>>>>>>>> can using DMPlexCreatePointNumbering(). >>>>>>>>>> There are also cell and vertex versions that we use for output, >>>>>>>>>> so you could do it just for edges as well. 
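For reference, a small sketch of what that looks like for the edges, assuming the usual DMPlex conventions that the numbering IS is indexed by local point number (with the chart starting at 0) and that points owned by another process are encoded as -(global+1):

    IS              globalNum;
    const PetscInt *gidx;
    PetscInt        e, eStart, eEnd;

    ierr = DMPlexCreatePointNumbering(dm, &globalNum);CHKERRQ(ierr);
    ierr = ISGetIndices(globalNum, &gidx);CHKERRQ(ierr);
    ierr = DMNetworkGetEdgeRange(dm, &eStart, &eEnd);CHKERRQ(ierr);
    for (e = eStart; e < eEnd; ++e) {
      PetscInt g = (gidx[e] < 0) ? -(gidx[e] + 1) : gidx[e];  /* global number of local edge e */
      /* use g as needed */
    }
    ierr = ISRestoreIndices(globalNum, &gidx);CHKERRQ(ierr);
    ierr = ISDestroy(&globalNum);CHKERRQ(ierr);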
>>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> Matt >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> Miguel >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>> (217) 550-2360 >>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>> experiments lead. >>>>>>>>>> -- Norbert Wiener >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>> Graduate Research Assistant >>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>> (217) 550-2360 >>>>>>>>> salaza11 at illinois.edu >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ansp6066 at colorado.edu Mon Feb 23 15:45:20 2015 From: ansp6066 at colorado.edu (Andrew Spott) Date: Mon, 23 Feb 2015 13:45:20 -0800 (PST) Subject: [petsc-users] HermitianTranspose version of MatCreateTranspose. Message-ID: <1424727919956.b5849e8f@Nodemailer> It looks like this function doesn?t exist, but it should be pretty easy to write. A few questions: - Does a new MatType need to be created? or will MATTRANSPOSEMAT work? - Is there any way to write this from outside of the PETSc library proper? 
?Initial examination makes it seem like it isn?t possible, because _p_Mat is an incomplete type after the library has been built. - Is this a good idea? ?Or should a MatShell just be made? -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Feb 23 15:59:14 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Feb 2015 15:59:14 -0600 Subject: [petsc-users] HermitianTranspose version of MatCreateTranspose. In-Reply-To: <1424727919956.b5849e8f@Nodemailer> References: <1424727919956.b5849e8f@Nodemailer> Message-ID: <0B6B4519-D3CC-4305-A4C2-D39F720075B5@mcs.anl.gov> > On Feb 23, 2015, at 3:45 PM, Andrew Spott wrote: > > It looks like this function doesn?t exist, but it should be pretty easy to write. > > A few questions: > > - Does a new MatType need to be created? or will MATTRANSPOSEMAT work? > - Is there any way to write this from outside of the PETSc library proper? > Initial examination makes it seem like it isn?t possible, because _p_Mat is an incomplete type after the library has been built. The directory include/petsc-private/matimp.h is always provided with a PETSc installation so though _p_Mat is "private" to PETSc you can extend PETSc "out side of the library proper". Barry > - Is this a good idea? Or should a MatShell just be made? > > -Andrew > From jed at jedbrown.org Mon Feb 23 16:05:46 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 23 Feb 2015 15:05:46 -0700 Subject: [petsc-users] HermitianTranspose version of MatCreateTranspose. In-Reply-To: <0B6B4519-D3CC-4305-A4C2-D39F720075B5@mcs.anl.gov> References: <1424727919956.b5849e8f@Nodemailer> <0B6B4519-D3CC-4305-A4C2-D39F720075B5@mcs.anl.gov> Message-ID: <87d250i8qt.fsf@jedbrown.org> Barry Smith writes: > The directory include/petsc-private/matimp.h is always provided > with a PETSc installation so though _p_Mat is "private" to PETSc > you can extend PETSc "out side of the library proper". What you don't get when accessing the private headers is a guarantee of backward compatibility (even across subminor releases) or documentation of changes in the release notes. If people are using private headers frequently, we should strive to make a public interface to fulfill their needs. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Feb 23 16:43:27 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Feb 2015 16:43:27 -0600 Subject: [petsc-users] HermitianTranspose version of MatCreateTranspose. In-Reply-To: <87d250i8qt.fsf@jedbrown.org> References: <1424727919956.b5849e8f@Nodemailer> <0B6B4519-D3CC-4305-A4C2-D39F720075B5@mcs.anl.gov> <87d250i8qt.fsf@jedbrown.org> Message-ID: > On Feb 23, 2015, at 4:05 PM, Jed Brown wrote: > > Barry Smith writes: >> The directory include/petsc-private/matimp.h is always provided >> with a PETSc installation so though _p_Mat is "private" to PETSc >> you can extend PETSc "out side of the library proper". > > What you don't get when accessing the private headers is a guarantee of > backward compatibility (even across subminor releases) or documentation > of changes in the release notes. If people are using private headers > frequently, we should strive to make a public interface to fulfill their > needs. If they are writing code that they intend to share with the PETSc community (i.e. 
eventually it becomes a pull request into petsc so the user will not maintain it themselves for a long time)*; like I assumed in this case, then it is fine to use the private headers. Barry * For example providing a piece of functionality that we frankly just forgot about. From ansp6066 at colorado.edu Mon Feb 23 18:50:40 2015 From: ansp6066 at colorado.edu (Andrew Spott) Date: Mon, 23 Feb 2015 16:50:40 -0800 (PST) Subject: [petsc-users] HermitianTranspose version of MatCreateTranspose. In-Reply-To: <87d250i8qt.fsf@jedbrown.org> References: <87d250i8qt.fsf@jedbrown.org> Message-ID: <1424739040415.5cf0ea56@Nodemailer> I?m definitely willing to submit it as a pull request. Also, while I?m at it, I?m going to write a ?duplicate? function for transpose and hermitian_transpose. ?Just because this seems 1) easy ( MatHermitianTranspose can return a new copy, as well as MatTranspose), and 2) necessary to use these for EPS. Also, is ?transpose? a good enough MatType? ?Or does a new one need to be written? -Andrew On Mon, Feb 23, 2015 at 3:12 PM, Jed Brown wrote: > Barry Smith writes: >> The directory include/petsc-private/matimp.h is always provided >> with a PETSc installation so though _p_Mat is "private" to PETSc >> you can extend PETSc "out side of the library proper". > What you don't get when accessing the private headers is a guarantee of > backward compatibility (even across subminor releases) or documentation > of changes in the release notes. If people are using private headers > frequently, we should strive to make a public interface to fulfill their > needs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Feb 23 21:40:07 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 23 Feb 2015 20:40:07 -0700 Subject: [petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously) In-Reply-To: References: <98085B4A-166C-49E4-89F6-DDF53B6FFD4D@mcs.anl.gov> <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <87zj85idga.fsf@jedbrown.org> Message-ID: <87r3tggep4.fsf@jedbrown.org> Daniel Goldberg writes: > Thanks Jed. > > So the nullspace would be initialized with an array of 3 petsc vectors: > (u,v)= (0,1), (1,0), and (-y,x), correct? > > And also to be sure -- this is usefuil only for multigrid preconditioners, > yes? Yes, and perhaps also for some exotic domain decomposition methods. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From jed at jedbrown.org Mon Feb 23 21:41:04 2015 From: jed at jedbrown.org (Jed Brown) Date: Mon, 23 Feb 2015 20:41:04 -0700 Subject: [petsc-users] Solving multiple linear systems with similar matrix sequentially In-Reply-To: <27A5BEE5-8058-4177-AB68-CC70ACEDB3A4@mcs.anl.gov> References: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> <27A5BEE5-8058-4177-AB68-CC70ACEDB3A4@mcs.anl.gov> Message-ID: <87oaokgenj.fsf@jedbrown.org> Barry Smith writes: > Freeze the hierarchy and coarse grid interpolations of GAMG but > compute the new coarse grid operators RAP for each new linear > system (this is a much cheaper operation). Use > PCGAMGSetReuseInterpolation() to freeze/unfreeze the hierarchy. The RAP is often more than 50% of PCSetUp, so this might not save much. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Feb 23 21:47:31 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Feb 2015 21:47:31 -0600 Subject: [petsc-users] Solving multiple linear systems with similar matrix sequentially In-Reply-To: <87oaokgenj.fsf@jedbrown.org> References: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> <27A5BEE5-8058-4177-AB68-CC70ACEDB3A4@mcs.anl.gov> <87oaokgenj.fsf@jedbrown.org> Message-ID: <67908502-6832-463B-A4FD-64AF0AC99AE6@mcs.anl.gov> > On Feb 23, 2015, at 9:41 PM, Jed Brown wrote: > > Barry Smith writes: >> Freeze the hierarchy and coarse grid interpolations of GAMG but >> compute the new coarse grid operators RAP for each new linear >> system (this is a much cheaper operation). Use >> PCGAMGSetReuseInterpolation() to freeze/unfreeze the hierarchy. > > The RAP is often more than 50% of PCSetUp, so this might not save much. Hee,hee in some of my recent runs of GAMG the RAP was less than 25% of the time. Thus skipping the other portions could really pay off.* Barry * this could just be because the RAP portion has been optimized much more than the other parts but ... From bsmith at mcs.anl.gov Mon Feb 23 22:02:50 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Feb 2015 22:02:50 -0600 Subject: [petsc-users] HermitianTranspose version of MatCreateTranspose. In-Reply-To: <1424739040415.5cf0ea56@Nodemailer> References: <87d250i8qt.fsf@jedbrown.org> <1424739040415.5cf0ea56@Nodemailer> Message-ID: We've had a small amount of debate over the years on how to handle the Hermitian transpose and non-Hermitian transpose that never got fully resolved. Approach 1) Each (complex) matrix has a full set of transpose and Hermitian transpose operations (MatTranspose(), MatHermitianTranspose(), MatMultTranspose()), MatMultHermitianTranspose(), MatSolveTranspose(), MatSolveHermitianTranspose(), MatMatMultTranspose(), MatMatMultHermitianTranspose(), MatTranposeMatMult(), MatHermitianTransposeMatMult().......) plus there are two vector "inner" products; VecDot() and VecTDot(). Approach 2) Consider a (complex) vector (and hence the associated matrix operators on it) to live in the usual Hermitian inner product space or the non-Hermitian "inner product space". Then one only needs a single VecDot() and MatTranspose(), MatMultTranspose() ... that just "does the right thing" based on what space the user has declared the vectors/matrices to be in. Approach 2) seems nicer since it only requires 1/2 the functions :-) and so long as the two vector "spaces" never interact directly (for example what would be the meaning of the "inner" product of a vector in the usual Hermitian inner product space with a vector from the non-Hermitian "inner product space"?) certain seems simpler. Approach 1) might be simpler for some people who like to always see exactly what they are doing. I personally wish I had started with Approach 2 (but I did not), but there could be some flaw with it I am not seeing. Barry > On Feb 23, 2015, at 6:50 PM, Andrew Spott wrote: > > I?m definitely willing to submit it as a pull request. > > Also, while I?m at it, I?m going to write a ?duplicate? function for transpose and hermitian_transpose. Just because this seems 1) easy ( MatHermitianTranspose can return a new copy, as well as MatTranspose), and 2) necessary to use these for EPS. > > Also, is ?transpose? a good enough MatType? 
Or does a new one need to be written? > > -Andrew > > > > On Mon, Feb 23, 2015 at 3:12 PM, Jed Brown wrote: > > > From hong at aspiritech.org Mon Feb 23 23:16:38 2015 From: hong at aspiritech.org (hong at aspiritech.org) Date: Mon, 23 Feb 2015 23:16:38 -0600 Subject: [petsc-users] Solving multiple linear systems with similar matrix sequentially In-Reply-To: <67908502-6832-463B-A4FD-64AF0AC99AE6@mcs.anl.gov> References: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> <27A5BEE5-8058-4177-AB68-CC70ACEDB3A4@mcs.anl.gov> <87oaokgenj.fsf@jedbrown.org> <67908502-6832-463B-A4FD-64AF0AC99AE6@mcs.anl.gov> Message-ID: ML and Hypre both use RAP, as I recall. PtAP uses outer product which cannot be as fast as RAP, based on an analysis of memory access. Sequential RARt might be fast if ARt has small number of colors. We do not have parallel RARt. Hong On Mon, Feb 23, 2015 at 9:47 PM, Barry Smith wrote: > > > On Feb 23, 2015, at 9:41 PM, Jed Brown wrote: > > > > Barry Smith writes: > >> Freeze the hierarchy and coarse grid interpolations of GAMG but > >> compute the new coarse grid operators RAP for each new linear > >> system (this is a much cheaper operation). Use > >> PCGAMGSetReuseInterpolation() to freeze/unfreeze the hierarchy. > > > > The RAP is often more than 50% of PCSetUp, so this might not save much. > > Hee,hee in some of my recent runs of GAMG the RAP was less than 25% of > the time. Thus skipping the other portions could really pay off.* > > Barry > > * this could just be because the RAP portion has been optimized much more > than the other parts but ... > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Tue Feb 24 00:08:12 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Tue, 24 Feb 2015 06:08:12 +0000 Subject: [petsc-users] On user defined PC of schur complement Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E998D@XMAIL-MBX-BH1.AD.UCSD.EDU> I have a block matrix [A, B; C, D], and I want to use Schur complement to solve the linear system [A, B; C, D] * [x; y] = [c; d]. I want to apply a preconditioner for solving ( D - C*A^(-1)*B ) y = rhs. And it is easy for me to find an A' which is approximate to A, and A'^(-1) is easy to get. So for this part, I can easily define a preconditioner matrix P = ( D - C*A'^(-1)*B ) using PCFieldSplitSchurPrecondition. The next step is that I want to do incomplete LU factorization for P as my left and right preconditioner for solving ( D - C*A^(-1)*B ) y = rhs using gmres or bcgs. My question is since part of the PC is user defined, and part of it is PETSC defined, how can I combine them? Can I do something like: ierr = PCFieldSplitSchurPrecondition(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, P);CHKERRQ(ierr); ierr = PCSetType(pc, PCILU);CHKERRQ(ierr); Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Tue Feb 24 03:50:38 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Tue, 24 Feb 2015 09:50:38 +0000 Subject: [petsc-users] Poor FAS convergence for linear problems when not monitoring on levels Message-ID: <54EC496E.8060700@imperial.ac.uk> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi all, I'm observing poor convergence of FAS on linear problems when I use a snes_type of ksponly on each level, unless I also monitor the snes on those levels. 
This is reproducible with snes ex35.c: Using one iteration of newton on each level: $ ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short -fas_coarse_snes_type newtonls -fas_coarse_ksp_type preonly -fas_coarse_pc_type lu -fas_levels_snes_type newtonls -fas_levels_snes_max_it 1 -fas_levels_ksp_max_it 2 -fas_levels_ksp_convergence_test skip 0 SNES Function norm 7.46324 1 SNES Function norm 0.0783234 2 SNES Function norm 0.000381979 3 SNES Function norm 2.81934e-06 4 SNES Function norm 3.33505e-08 Using ksponly as the level snes type: $ ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short -fas_coarse_snes_type newtonls -fas_coarse_ksp_type preonly -fas_coarse_pc_type lu -fas_levels_snes_type ksponly -fas_levels_snes_max_it 1 -fas_levels_ksp_max_it 2 -fas_levels_ksp_convergence_test skip 0 SNES Function norm 7.46324 1 SNES Function norm 11.0718 2 SNES Function norm 2.31708 3 SNES Function norm 0.354363 4 SNES Function norm 0.0966945 5 SNES Function norm 0.0184193 6 SNES Function norm 0.00553901 7 SNES Function norm 0.000916548 8 SNES Function norm 0.000235095 9 SNES Function norm 5.78095e-05 10 SNES Function norm 1.68048e-05 11 SNES Function norm 3.03099e-06 12 SNES Function norm 7.5299e-07 13 SNES Function norm 2.18443e-07 14 SNES Function norm 5.8899e-08 Same options, but turning on the monitors on each snes level: $ ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short -fas_coarse_snes_type newtonls -fas_coarse_ksp_type preonly -fas_coarse_pc_type lu -fas_levels_snes_type ksponly -fas_levels_snes_max_it 1 -fas_levels_ksp_max_it 2 -fas_levels_ksp_convergence_test skip -fas_levels_snes_monitor /dev/null 0 SNES Function norm 7.46324 1 SNES Function norm 0.0783234 2 SNES Function norm 0.000381979 3 SNES Function norm 2.81934e-06 4 SNES Function norm 3.33505e-08 Any ideas? Cheers, Lawrence -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU7EllAAoJECOc1kQ8PEYv/AgIAM/mziZZjhZrZlwIskAuPEm7 fSFf5i4z2RQyg6Q6m9VP9/7k6kb9g40QVWs03p/MIjB4ywdGnH1ZYWwGE+FpsvTJ DgRv8yQqHJDkxK0n5K31NzhSN7XalExjE/dZpcxnKhoD92sqkij+p0XGQtW35haL jbK906y+Ag+PhCNTX5T2imgTy3RbfHIycomwNdgcsqWrtxrTckvvM5oeBi4xBq+7 74lbf2eMUpcS2XNF3vn6kSKG5VGHbftQJFuBTQ1xPh6IsdeHKF3XjKJQ9TZj204Z rNfMlI3zSzdfN4Pf0v3nIbAWvAsjEKT2+fN6gSYXfcB9qXFabpOHS0UiWpFX8SQ= =vVA4 -----END PGP SIGNATURE----- From knepley at gmail.com Tue Feb 24 04:03:18 2015 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 24 Feb 2015 04:03:18 -0600 Subject: [petsc-users] On user defined PC of schur complement In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E998D@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E998D@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Tue, Feb 24, 2015 at 12:08 AM, Sun, Hui wrote: > I have a block matrix [A, B; C, D], and I want to use Schur complement > to solve the linear system [A, B; C, D] * [x; y] = [c; d]. > > I want to apply a preconditioner for solving ( D - C*A^(-1)*B ) y = rhs. > And it is easy for me to find an A' which is approximate to A, and A'^(-1) > is easy to get. So for this part, I can easily define a preconditioner > matrix P = ( D - C*A'^(-1)*B ) using PCFieldSplitSchurPrecondition. The > next step is that I want to do incomplete LU factorization for P as my > left and right preconditioner for solving ( D - C*A^(-1)*B ) y = rhs > using gmres or bcgs. > > My question is since part of the PC is user defined, and part of it is > PETSC defined, how can I combine them? 
Can I do something like: > > ierr = PCFieldSplitSchurPrecondition(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, > P);CHKERRQ(ierr); > Once you specify the preconditioner matrix, you can use a factorization preconditioner: -fieldsplit_1_pc_type ilu Matt > ierr = PCSetType(pc, PCILU);CHKERRQ(ierr); > > > Hui > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Feb 24 04:06:24 2015 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 24 Feb 2015 04:06:24 -0600 Subject: [petsc-users] Poor FAS convergence for linear problems when not monitoring on levels In-Reply-To: <54EC496E.8060700@imperial.ac.uk> References: <54EC496E.8060700@imperial.ac.uk> Message-ID: On Tue, Feb 24, 2015 at 3:50 AM, Lawrence Mitchell < lawrence.mitchell at imperial.ac.uk> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi all, > > I'm observing poor convergence of FAS on linear problems when I use a > snes_type of ksponly on each level, unless I also monitor the snes on > those levels. This is reproducible with snes ex35.c: > Thanks, I will look at it. I am guessing that it is something with the update, probably if there is an initial guess, but its definitely a bug if the monitor changes the convergence behavior. Matt > Using one iteration of newton on each level: > > $ ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short > -fas_coarse_snes_type newtonls > -fas_coarse_ksp_type preonly > -fas_coarse_pc_type lu > -fas_levels_snes_type newtonls > -fas_levels_snes_max_it 1 > -fas_levels_ksp_max_it 2 > -fas_levels_ksp_convergence_test skip > > 0 SNES Function norm 7.46324 > 1 SNES Function norm 0.0783234 > 2 SNES Function norm 0.000381979 > 3 SNES Function norm 2.81934e-06 > 4 SNES Function norm 3.33505e-08 > > Using ksponly as the level snes type: > > $ ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short > -fas_coarse_snes_type newtonls > -fas_coarse_ksp_type preonly > -fas_coarse_pc_type lu > -fas_levels_snes_type ksponly > -fas_levels_snes_max_it 1 > -fas_levels_ksp_max_it 2 > -fas_levels_ksp_convergence_test skip > > 0 SNES Function norm 7.46324 > 1 SNES Function norm 11.0718 > 2 SNES Function norm 2.31708 > 3 SNES Function norm 0.354363 > 4 SNES Function norm 0.0966945 > 5 SNES Function norm 0.0184193 > 6 SNES Function norm 0.00553901 > 7 SNES Function norm 0.000916548 > 8 SNES Function norm 0.000235095 > 9 SNES Function norm 5.78095e-05 > 10 SNES Function norm 1.68048e-05 > 11 SNES Function norm 3.03099e-06 > 12 SNES Function norm 7.5299e-07 > 13 SNES Function norm 2.18443e-07 > 14 SNES Function norm 5.8899e-08 > > Same options, but turning on the monitors on each snes level: > > $ ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short > -fas_coarse_snes_type newtonls > -fas_coarse_ksp_type preonly > -fas_coarse_pc_type lu > -fas_levels_snes_type ksponly > -fas_levels_snes_max_it 1 > -fas_levels_ksp_max_it 2 > -fas_levels_ksp_convergence_test skip > -fas_levels_snes_monitor /dev/null > > 0 SNES Function norm 7.46324 > 1 SNES Function norm 0.0783234 > 2 SNES Function norm 0.000381979 > 3 SNES Function norm 2.81934e-06 > 4 SNES Function norm 3.33505e-08 > > > Any ideas? 
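For anyone reproducing this, the three runs above use the same level options; only the last run adds the per-level monitor. The slow case is, in one command,

    ./ex35 -da_refine 2 -snes_type fas -snes_monitor_short \
      -fas_coarse_snes_type newtonls -fas_coarse_ksp_type preonly -fas_coarse_pc_type lu \
      -fas_levels_snes_type ksponly -fas_levels_snes_max_it 1 \
      -fas_levels_ksp_max_it 2 -fas_levels_ksp_convergence_test skip

(14 outer iterations), and appending only -fas_levels_snes_monitor /dev/null to the otherwise identical command line recovers the 4-iteration convergence reported above.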
> > Cheers, > > Lawrence > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEcBAEBAgAGBQJU7EllAAoJECOc1kQ8PEYv/AgIAM/mziZZjhZrZlwIskAuPEm7 > fSFf5i4z2RQyg6Q6m9VP9/7k6kb9g40QVWs03p/MIjB4ywdGnH1ZYWwGE+FpsvTJ > DgRv8yQqHJDkxK0n5K31NzhSN7XalExjE/dZpcxnKhoD92sqkij+p0XGQtW35haL > jbK906y+Ag+PhCNTX5T2imgTy3RbfHIycomwNdgcsqWrtxrTckvvM5oeBi4xBq+7 > 74lbf2eMUpcS2XNF3vn6kSKG5VGHbftQJFuBTQ1xPh6IsdeHKF3XjKJQ9TZj204Z > rNfMlI3zSzdfN4Pf0v3nIbAWvAsjEKT2+fN6gSYXfcB9qXFabpOHS0UiWpFX8SQ= > =vVA4 > -----END PGP SIGNATURE----- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Tue Feb 24 09:48:19 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Tue, 24 Feb 2015 15:48:19 +0000 Subject: [petsc-users] Multiple in-flight communications with PetscSFs In-Reply-To: References: <5C9B061D-6834-45E0-9C48-574B1F6B50B8@imperial.ac.uk> <871tm27vz1.fsf@jedbrown.org> <87r3u13ju0.fsf@jedbrown.org> Message-ID: <54EC9D43.9050905@imperial.ac.uk> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/02/15 17:04, Barry Smith wrote: > > Lawrence, > > In general we do not want XXXSetFromOptions() to be randomly > called deep within constructors or other places. Ideally we want > them called either when I user calls them directly or from another > YYYSetFromOptions() that the user called. We do violate this rule > occasionally but we don't want new XXXSetFromOptions() put into > the code randomly. > > Barry > >> On Feb 7, 2015, at 7:57 AM, Jed Brown wrote: >> >> Lawrence Mitchell writes: >>> >>> + ierr = PetscSFSetFromOptions(v->sf); >>> CHKERRQ(ierr); + ierr = >>> PetscSFSetFromOptions(v->defaultSF); CHKERRQ(ierr); >> >> Please use PetscObjectSetOptionsPrefix on the SFs and call >> PetscSFSetFromOptions in DMSetFromOptions. Now done here: https://bitbucket.org/petsc/petsc/pull-request/270 Cheers, Lawrence -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU7J0/AAoJECOc1kQ8PEYv7SwIAKUwFy8anEw80H+KJ5W01dCb 3koLxqu+NxBM8krpv4POYtl8cEmGK4SFmFVVc7gw08FdP9ckscHznooAvGQiwsYN R0sgtpPdwvfzg2546K2rJVGfoUpYluwvFN3bKc1uC+vFgqvZA1Em4VzslgYnMlc1 em+J2Ue7poeWMfTA/omvGMiPDpnHuY9Kk+tEQ3ca5db+ZvLOYVDvcwXeu/ITq7kE kwf7K968TkS5My46Jge09orG1QKJaHM/1fOZ5ld0//Gagowht7HKvU0bbisDFIDz LdQl7APAeXlpatzrAX0ombkpOG+Mrz45DNOwFpLwdZi12icFhMlm/7HJejITzrs= =MSCw -----END PGP SIGNATURE----- From hus003 at ucsd.edu Tue Feb 24 10:12:54 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Tue, 24 Feb 2015 16:12:54 +0000 Subject: [petsc-users] On user defined PC of schur complement In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E998D@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E99B5@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Matt. Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Tuesday, February 24, 2015 2:03 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] On user defined PC of schur complement On Tue, Feb 24, 2015 at 12:08 AM, Sun, Hui > wrote: I have a block matrix [A, B; C, D], and I want to use Schur complement to solve the linear system [A, B; C, D] * [x; y] = [c; d]. I want to apply a preconditioner for solving ( D - C*A^(-1)*B ) y = rhs. And it is easy for me to find an A' which is approximate to A, and A'^(-1) is easy to get. 
So for this part, I can easily define a preconditioner matrix P = ( D - C*A'^(-1)*B ) using PCFieldSplitSchurPrecondition. The next step is that I want to do incomplete LU factorization for P as my left and right preconditioner for solving ( D - C*A^(-1)*B ) y = rhs using gmres or bcgs. My question is since part of the PC is user defined, and part of it is PETSC defined, how can I combine them? Can I do something like: ierr = PCFieldSplitSchurPrecondition(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, P);CHKERRQ(ierr); Once you specify the preconditioner matrix, you can use a factorization preconditioner: -fieldsplit_1_pc_type ilu Matt ierr = PCSetType(pc, PCILU);CHKERRQ(ierr); Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhatiamanav at gmail.com Tue Feb 24 11:33:33 2015 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Tue, 24 Feb 2015 11:33:33 -0600 Subject: [petsc-users] access suitesparse from petsc Message-ID: <066E6FBF-5E30-4AE3-B218-4BDA21312FDF@gmail.com> Greetings! What are the command line options to access suitesparse from petsc? I was not able to find any specification in the manual. Thanks, Manav From jed at jedbrown.org Tue Feb 24 11:40:00 2015 From: jed at jedbrown.org (Jed Brown) Date: Tue, 24 Feb 2015 10:40:00 -0700 Subject: [petsc-users] access suitesparse from petsc In-Reply-To: <066E6FBF-5E30-4AE3-B218-4BDA21312FDF@gmail.com> References: <066E6FBF-5E30-4AE3-B218-4BDA21312FDF@gmail.com> Message-ID: <87y4nndx8v.fsf@jedbrown.org> Manav Bhatia writes: > Greetings! > > What are the command line options to access suitesparse from petsc? I was not able to find any specification in the manual. Presumably you mean Umfpack or Cholmod. -pc_type lu -pc_factor_mat_solver_package umfpack -pc_type cholesky -pc_factor_mat_solver_package cholmod -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bhatiamanav at gmail.com Tue Feb 24 11:43:19 2015 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Tue, 24 Feb 2015 11:43:19 -0600 Subject: [petsc-users] access suitesparse from petsc In-Reply-To: <87y4nndx8v.fsf@jedbrown.org> References: <066E6FBF-5E30-4AE3-B218-4BDA21312FDF@gmail.com> <87y4nndx8v.fsf@jedbrown.org> Message-ID: > On Feb 24, 2015, at 11:40 AM, Jed Brown wrote: > > Manav Bhatia writes: > >> Greetings! >> >> What are the command line options to access suitesparse from petsc? I was not able to find any specification in the manual. > > Presumably you mean Umfpack or Cholmod. > > -pc_type lu -pc_factor_mat_solver_package umfpack > > -pc_type cholesky -pc_factor_mat_solver_package cholmod Aha! I was thinking that there would be something that would read like suitesparse. Thanks for the info. Also, the suitesparse website talks of different orderings available (AMD, CAMD, COLAMD, and CCOLAMD). Are any of these accessible by command line options as well? 
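They are exposed through the UMFPACK options database keys listed in the reply below; a typical invocation, combining the solver-package selection above with an explicit ordering choice (the executable name here is just a placeholder), would be something like

    ./myapp -ksp_type preonly -pc_type lu \
      -pc_factor_mat_solver_package umfpack \
      -mat_umfpack_ordering AMD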
Thanks, Manav From bsmith at mcs.anl.gov Tue Feb 24 13:32:52 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Feb 2015 13:32:52 -0600 Subject: [petsc-users] access suitesparse from petsc In-Reply-To: References: <066E6FBF-5E30-4AE3-B218-4BDA21312FDF@gmail.com> <87y4nndx8v.fsf@jedbrown.org> Message-ID: <881C092D-B760-4011-9413-9D3F929E89F6@mcs.anl.gov> For UMFPack you can control a great deal of the solver behavior: MATSOLVERUMFPACK = "umfpack" - A matrix type providing direct solvers (LU) for sequential matrices via the external package UMFPACK. ./configure --download-suitesparse to install PETSc to use UMFPACK Consult UMFPACK documentation for more information about the Control parameters which correspond to the options database keys below. Options Database Keys: + -mat_umfpack_ordering - CHOLMOD, AMD, GIVEN, METIS, BEST, NONE . -mat_umfpack_prl - UMFPACK print level: Control[UMFPACK_PRL] . -mat_umfpack_strategy - (choose one of) AUTO UNSYMMETRIC SYMMETRIC 2BY2 . -mat_umfpack_dense_col - UMFPACK dense column threshold: Control[UMFPACK_DENSE_COL] . -mat_umfpack_dense_row <0.2> - Control[UMFPACK_DENSE_ROW] . -mat_umfpack_amd_dense <10> - Control[UMFPACK_AMD_DENSE] . -mat_umfpack_block_size - UMFPACK block size for BLAS-Level 3 calls: Control[UMFPACK_BLOCK_SIZE] . -mat_umfpack_2by2_tolerance <0.01> - Control[UMFPACK_2BY2_TOLERANCE] . -mat_umfpack_fixq <0> - Control[UMFPACK_FIXQ] . -mat_umfpack_aggressive <1> - Control[UMFPACK_AGGRESSIVE] . -mat_umfpack_pivot_tolerance - UMFPACK partial pivot tolerance: Control[UMFPACK_PIVOT_TOLERANCE] . -mat_umfpack_sym_pivot_tolerance <0.001> - Control[UMFPACK_SYM_PIVOT_TOLERANCE] . -mat_umfpack_scale - (choose one of) NONE SUM MAX . -mat_umfpack_alloc_init - UMFPACK factorized matrix allocation modifier: Control[UMFPACK_ALLOC_INIT] . -mat_umfpack_droptol <0> - Control[UMFPACK_DROPTOL] - -mat_umfpack_irstep - UMFPACK maximum number of iterative refinement steps: Control[UMFPACK_IRSTEP] For Cholmod we don't seem to have currently the ability. If there are some you would like added you can try yourself or let us know what you need. Barry > On Feb 24, 2015, at 11:43 AM, Manav Bhatia wrote: > > >> On Feb 24, 2015, at 11:40 AM, Jed Brown wrote: >> >> Manav Bhatia writes: >> >>> Greetings! >>> >>> What are the command line options to access suitesparse from petsc? I was not able to find any specification in the manual. >> >> Presumably you mean Umfpack or Cholmod. >> >> -pc_type lu -pc_factor_mat_solver_package umfpack >> >> -pc_type cholesky -pc_factor_mat_solver_package cholmod > > Aha! > > I was thinking that there would be something that would read like suitesparse. Thanks for the info. > > Also, the suitesparse website talks of different orderings available (AMD, CAMD, COLAMD, and CCOLAMD). Are any of these accessible by command line options as well? > > Thanks, > Manav From salazardetroya at gmail.com Tue Feb 24 18:42:08 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Tue, 24 Feb 2015 18:42:08 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: I implemented the code as agreed, but I don't get the results I expected. When I create the vector with DMCreateGlobalVector(), I obtain a vector with a layout similar to the original DMNetwork, instead of the cloned network with the new PetscSection. 
The code is as follows:

    DMClone(dm, &dmEdge);
    PetscSectionCreate(PETSC_COMM_WORLD, &s);
    PetscSectionSetNumFields(s, 1);
    PetscSectionSetFieldComponents(s, 0, 1);
    // Now to set the chart, I pick the edge range
    DMNetworkGetEdgeRange(dmEdge, &eStart, &eEnd);
    PetscSectionSetChart(s, eStart, eEnd);
    for (PetscInt e = eStart; e < eEnd; ++e) {
      PetscSectionSetDof(s, e, 1);
      PetscSectionSetFieldDof(s, e, 0, 1);
    }
    PetscSectionSetUp(s);
    DMSetDefaultSection(dmEdge, s);
    DMCreateGlobalVector(dmEdge, &globalVec);

When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the debugger, in the function DMCreateSubDM_Section_Private() I call PetscSectionView() on the section obtained by DMGetDefaultGlobalSection(dm, &sectionGlobal), and I obtain a PetscSection nothing like the one I see when I call PetscSectionView() on the PetscSection I created above. Does this have anything to do with it? I tried to compare this strange PetscSection with the one from the original DMNetwork: I call DMGetDefaultGlobalSection(dm, &sectionGlobal) before the first line of the snippet above and I get this error message.

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: DM must have a default PetscSection in order to create a global PetscSection

Thanks in advance
Miguel

On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley wrote: > On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> Thanks a lot, the partition should be done before setting up the section, >> right? >> > > The partition will be automatic. All you have to do is make the local > section. The DM is already partitioned, > and the Section will inherit that. > > Matt > > >> Miguel >> >> On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley >> wrote: >> >>> On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Wouldn't including the edge variables in the global vector make the >>>> code slower? I'm using the global vector in a TS, using one of the explicit >>>> RK schemes. The edge variables would not be updated in the RHSFunction >>>> evaluation. I only change the edge variables in the TSUpdate. If the global >>>> vector had the edge variables, it would be a much larger vector, and all >>>> the vector operations performed by the TS would be slower. Although the >>>> vector F returned by the RHSFunction would be zero in the edge variable >>>> components. I guess that being the vector sparse that would not be a >>>> problem. >>>> >>>> I think I'm more interested in the PetscSection approach because it >>>> might require less modifications in my code. However, I don't know how I >>>> could do this. Maybe something like this? >>>> >>>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>>> PetscSectionSetNumFields(s, 1); >>>> PetscSectionSetFieldComponents(s, 0, 1); >>>> >>>> // Now to set the chart, I pick the edge range >>>> >>>> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >>>> >>>> PetscSectionSetChart(s, eStart, eEnd); >>>> >>>> for(PetscInt e = eStart; c < eEnd; ++e) { >>>> PetscSectionSetDof(s, e, 1); >>>> PetscSectionSetFieldDof(s, e, 1, 1); >>>> >>> >>> It should be PetscSectionSetFieldDof(s, e, 0, 1); >>> >>> >>>> } >>>> PetscSectionSetUp(s); >>>> >>>> Now in the manual I see this: >>>> >>> >>> First you want to do: >>> >>> DMClone(dm, &dmEdge); >>> >>> and then use dmEdge below. 
>>> >>> >>>> DMSetDefaultSection(dm, s); >>>> DMGetLocalVector(dm, &localVec); >>>> DMGetGlobalVector(dm, &globalVec); >>>> >>>> Setting up the default section in the DM would interfere with the >>>> section already set up with the variables in the vertices? >>>> >>> >>> Yep, thats why you would use a clone. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks a lot for your responses. >>>> >>>> >>>> >>>> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>>>> For each edge, I extract or modify its corresponding value in a global >>>>>> petsc vector. Therefore that vector must have as many components as edges >>>>>> there are in the network. To extract the value in the vector, I use >>>>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>>>> The array that I obtain in VecGetArray() has to be the same size >>>>>> than the edge range. That variable counter starts as 0, so if the array >>>>>> that I obtained in VecGetArray() is x_array, x_array[0] must be the >>>>>> component in the global vector that corresponds with the start edge given >>>>>> in DMNetworkGetEdgeRange() >>>>>> >>>>>> I need that global petsc vector because I will use it in other >>>>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>>>> >>>>> >>>>> This sounds like an assembly operation. The usual paradigm is to >>>>> compute in the local space, and then communicate to get to the global >>>>> space. So you would make a PetscSection that had 1 (or some) unknowns on >>>>> each cell (edge) and then you can use DMCreateGlobal/LocalVector() and >>>>> DMLocalToGlobal() to do this. >>>>> >>>>> Does that make sense? >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Miguel >>>>>> >>>>>> >>>>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>>>> components and the second, the 5-9 components of the vector? >>>>>>>> >>>>>>> I think it would help to know what you want to accomplish. This is >>>>>>> how you are proposing to do it.' >>>>>>> >>>>>>> If you just want to put data on edges, DMNetwork has a facility for >>>>>>> that already. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Miguel >>>>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>>>> >>>>>>>>> Miguel, >>>>>>>>> One possible way is to store the global numbering of any >>>>>>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>>>> not used for anything. 
>>>>>>>>> >>>>>>>>> Shri >>>>>>>>> From: Matthew Knepley >>>>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>>>> To: Miguel Angel Salazar de Troya >>>>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>>>> >>>>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya < >>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>>>> network? >>>>>>>>>> >>>>>>>>> >>>>>>>>> I do not completely understand the question. >>>>>>>>> >>>>>>>>> If you want a partition of the edges, you can use >>>>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>>>> are you trying to do? >>>>>>>>> >>>>>>>>> Matt >>>>>>>>> >>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> Miguel >>>>>>>>>> >>>>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de >>>>>>>>>>> Troya wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi >>>>>>>>>>>> >>>>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns >>>>>>>>>>>> the local indices for the edge range. Is there any way to obtain the global >>>>>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>>>> numbering. Everything is numbered >>>>>>>>>>> locally, and the PetscSF maps local numbers to local numbers in >>>>>>>>>>> order to determine ownership. >>>>>>>>>>> >>>>>>>>>>> If you want to create a global numbering for some reason, you >>>>>>>>>>> can using DMPlexCreatePointNumbering(). >>>>>>>>>>> There are also cell and vertex versions that we use for output, >>>>>>>>>>> so you could do it just for edges as well. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Thanks >>>>>>>>>>>> Miguel >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>>> experiments lead. >>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>> Graduate Research Assistant >>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>> (217) 550-2360 >>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>> experiments lead. 
>>>>>>>>> -- Norbert Wiener >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Feb 24 18:49:32 2015 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 24 Feb 2015 18:49:32 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Tue, Feb 24, 2015 at 6:42 PM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > I implemented the code as agreed, but I don't get the results I expected. > When I create the vector with DMCreateGlobalVector(), I obtain a vector > with a layout similar to the original DMNetwork, instead of the cloned > network with the new PetscSection. The code is as follows: > > DMClone(dm, &dmEdge); > > PetscSectionCreate(PETSC_COMM_WORLD, &s); > PetscSectionSetNumFields(s, 1); > PetscSectionSetFieldComponents(s, 0, 1); > > // Now to set the chart, I pick the edge range > > DMNetworkGetEdgeRange(dmEdge, & eStart, & eEnd) > > PetscSectionSetChart(s, eStart, eEnd); > > for(PetscInt e = eStart; c < eEnd; ++e) { > PetscSectionSetDof(s, e, 1); > PetscSectionSetFieldDof(s, e, 0, 1); > } > PetscSectionSetUp(s); > > DMSetDefaultSection(dmEdge s); > DMCreateGlobalVector(dmEdge, &globalVec); > > When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the debugger, > in the function DMCreateSubDM_Section_Private() I call PetscSectionView() > on the section > I have no idea why you would be in DMCreateSubDM(). Just view globalVec. If the code is as above, it will give you a vector with that layout. If not it should be trivial to make a small code and send it. I do this everywhere is PETSc, so the basic mechanism certainly works. 
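For example, the small code I have in mind would look roughly like this. It is only a sketch: it assumes you already have a distributed DMNetwork in dm, puts one scalar unknown on each local edge, omits error checking, and the variable names are just placeholders.

  DM           dmEdge;
  PetscSection s;
  Vec          gVec;
  PetscInt     eStart, eEnd, e;

  DMClone(dm, &dmEdge);
  DMNetworkGetEdgeRange(dmEdge, &eStart, &eEnd);
  PetscSectionCreate(PetscObjectComm((PetscObject) dmEdge), &s);
  /* chart over the local edge points, one dof on each */
  PetscSectionSetChart(s, eStart, eEnd);
  for (e = eStart; e < eEnd; ++e) {
    PetscSectionSetDof(s, e, 1);
  }
  PetscSectionSetUp(s);
  /* the clone carries the edge-only layout; the original dm keeps its own section */
  DMSetDefaultSection(dmEdge, s);
  DMCreateGlobalVector(dmEdge, &gVec);
  VecView(gVec, PETSC_VIEWER_STDOUT_WORLD);

The viewed vector should have one entry per local edge, distributed the same way the edges are. If code equivalent to the above does not give you that, send it and we will take a look.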
Thanks, Matt > obtained by DMGetDefaultGlobalSection(dm, §ionGlobal), and I obtain a > PetscSection nothing like the one I see when I call PetscSectionView() on > the PetscSection I created above. Does this have anything to do? I tried > to compare this strange PetscSection with the one from the original > DMNetwork, I call DMGetDefaultGlobalSection(dm, §ionGlobal) before > the first line of the snippet above and I get this error message. > > 0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: DM must have a default PetscSection in order to create a > global PetscSection > > Thanks in advance > Miguel > > > On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley > wrote: > >> On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> Thanks a lot, the partition should be done before setting up the >>> section, right? >>> >> >> The partition will be automatic. All you have to do is make the local >> section. The DM is already partitioned, >> and the Section will inherit that. >> >> Matt >> >> >>> Miguel >>> >>> On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley >>> wrote: >>> >>>> On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Wouldn't including the edge variables in the global vector make the >>>>> code slower? I'm using the global vector in a TS, using one of the explicit >>>>> RK schemes. The edge variables would not be updated in the RHSFunction >>>>> evaluation. I only change the edge variables in the TSUpdate. If the global >>>>> vector had the edge variables, it would be a much larger vector, and all >>>>> the vector operations performed by the TS would be slower. Although the >>>>> vector F returned by the RHSFunction would be zero in the edge variable >>>>> components. I guess that being the vector sparse that would not be a >>>>> problem. >>>>> >>>>> I think I'm more interested in the PetscSection approach because it >>>>> might require less modifications in my code. However, I don't know how I >>>>> could do this. Maybe something like this? >>>>> >>>>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>>>> PetscSectionSetNumFields(s, 1); >>>>> PetscSectionSetFieldComponents(s, 0, 1); >>>>> >>>>> // Now to set the chart, I pick the edge range >>>>> >>>>> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >>>>> >>>>> PetscSectionSetChart(s, eStart, eEnd); >>>>> >>>>> for(PetscInt e = eStart; c < eEnd; ++e) { >>>>> PetscSectionSetDof(s, e, 1); >>>>> PetscSectionSetFieldDof(s, e, 1, 1); >>>>> >>>> >>>> It should be PetscSectionSetFieldDof(s, e, 0, 1); >>>> >>>> >>>>> } >>>>> PetscSectionSetUp(s); >>>>> >>>>> Now in the manual I see this: >>>>> >>>> >>>> First you want to do: >>>> >>>> DMClone(dm, &dmEdge); >>>> >>>> and then use dmEdge below. >>>> >>>> >>>>> DMSetDefaultSection(dm, s); >>>>> DMGetLocalVector(dm, &localVec); >>>>> DMGetGlobalVector(dm, &globalVec); >>>>> >>>>> Setting up the default section in the DM would interfere with the >>>>> section already set up with the variables in the vertices? >>>>> >>>> >>>> Yep, thats why you would use a clone. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks a lot for your responses. 
>>>>> >>>>> >>>>> >>>>> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>>>>> For each edge, I extract or modify its corresponding value in a global >>>>>>> petsc vector. Therefore that vector must have as many components as edges >>>>>>> there are in the network. To extract the value in the vector, I use >>>>>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>>>>> The array that I obtain in VecGetArray() has to be the same size >>>>>>> than the edge range. That variable counter starts as 0, so if the array >>>>>>> that I obtained in VecGetArray() is x_array, x_array[0] must be the >>>>>>> component in the global vector that corresponds with the start edge given >>>>>>> in DMNetworkGetEdgeRange() >>>>>>> >>>>>>> I need that global petsc vector because I will use it in other >>>>>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>>>>> >>>>>> >>>>>> This sounds like an assembly operation. The usual paradigm is to >>>>>> compute in the local space, and then communicate to get to the global >>>>>> space. So you would make a PetscSection that had 1 (or some) unknowns on >>>>>> each cell (edge) and then you can use DMCreateGlobal/LocalVector() and >>>>>> DMLocalToGlobal() to do this. >>>>>> >>>>>> Does that make sense? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Miguel >>>>>>> >>>>>>> >>>>>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >>>>>>> wrote: >>>>>>> >>>>>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>> >>>>>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>>>>> components and the second, the 5-9 components of the vector? >>>>>>>>> >>>>>>>> I think it would help to know what you want to accomplish. This is >>>>>>>> how you are proposing to do it.' >>>>>>>> >>>>>>>> If you just want to put data on edges, DMNetwork has a facility for >>>>>>>> that already. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> Miguel >>>>>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>>>>> >>>>>>>>>> Miguel, >>>>>>>>>> One possible way is to store the global numbering of any >>>>>>>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>>>>> not used for anything. 
>>>>>>>>>> >>>>>>>>>> Shri >>>>>>>>>> From: Matthew Knepley >>>>>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>>>>> To: Miguel Angel Salazar de Troya >>>>>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>>>>> >>>>>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>>>>> network? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I do not completely understand the question. >>>>>>>>>> >>>>>>>>>> If you want a partition of the edges, you can use >>>>>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>>>>> are you trying to do? >>>>>>>>>> >>>>>>>>>> Matt >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> Miguel >>>>>>>>>>> >>>>>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de >>>>>>>>>>>> Troya wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi >>>>>>>>>>>>> >>>>>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns >>>>>>>>>>>>> the local indices for the edge range. Is there any way to obtain the global >>>>>>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>>>>> numbering. Everything is numbered >>>>>>>>>>>> locally, and the PetscSF maps local numbers to local numbers in >>>>>>>>>>>> order to determine ownership. >>>>>>>>>>>> >>>>>>>>>>>> If you want to create a global numbering for some reason, you >>>>>>>>>>>> can using DMPlexCreatePointNumbering(). >>>>>>>>>>>> There are also cell and vertex versions that we use for output, >>>>>>>>>>>> so you could do it just for edges as well. >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> >>>>>>>>>>>> Matt >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Thanks >>>>>>>>>>>>> Miguel >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> What most experimenters take for granted before they begin >>>>>>>>>>>> their experiments is infinitely more interesting than any results to which >>>>>>>>>>>> their experiments lead. >>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>> (217) 550-2360 >>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>> experiments lead. 
>>>>>>>>>> -- Norbert Wiener >>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Wed Feb 25 06:02:16 2015 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Wed, 25 Feb 2015 12:02:16 +0000 Subject: [petsc-users] Unable to extract submatrix from DMComposite Mat if block size > 1 Message-ID: <54EDB9C8.6000909@imperial.ac.uk> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi all, I have multi-field operators that have the same layout as given by the matrices created by DMComposite and would like to be able to switch between a MatNest representation (if I'm using fieldsplit PCs) or AIJ (if not). This works fine if the block sizes of all the sub fields are 1, but fails if not (even if they are all the same). The below code demonstrates the problem using a DMComposite. 
$ ./dm-test -ndof_v 1 -ndof_p V Mat block size (1, 1) P Mat block size (1, 1) Composite Global Mat block size (1, 1) Mat Object: 1 MPI processes type: seqaij rows=20, cols=20 total: nonzeros=56, allocated nonzeros=56 total number of mallocs used during MatSetValues calls =0 not using I-node routines Local (0, 0) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=10, cols=10 Local (0, 1) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=10, cols=10 Local (1, 0) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=10, cols=10 Local (1, 1) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=10, cols=10 $ ./dm-test -ndof_v 2 -ndof_p -pack_dm_mat_type nest V Mat block size (2, 2) P Mat block size (1, 1) Composite Global Mat block size (1, 1) Mat Object: 1 MPI processes type: nest rows=30, cols=30 Matrix object: type=nest, rows=2, cols=2 MatNest structure: (0,0) : type=seqaij, rows=20, cols=20 (0,1) : NULL (1,0) : NULL (1,1) : type=seqaij, rows=10, cols=10 Local (0, 0) block has block size (2, 2) Mat Object: 1 MPI processes type: seqaij rows=20, cols=20, bs=2 total: nonzeros=112, allocated nonzeros=120 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 10 nodes, limit used is 5 Local (1, 1) block has block size (1, 1) Mat Object: 1 MPI processes type: seqaij rows=10, cols=10 total: nonzeros=28, allocated nonzeros=30 total number of mallocs used during MatSetValues calls =0 not using I-node routines $ ./dm-test -ndof_v 2 -ndof_p -pack_dm_mat_type aij V Mat block size (2, 2) P Mat block size (1, 1) Composite Global Mat block size (1, 1) Mat Object: 1 MPI processes type: seqaij rows=30, cols=30 total: nonzeros=140, allocated nonzeros=140 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 20 nodes, limit used is 5 [0]PETSC ERROR: --------------------- Error Message - -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: Blocksize of localtoglobalmapping 1 must match that of layout 2 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1615-g4631b1d GIT Date: 2015-01-27 21:52:20 -0600 [0]PETSC ERROR: ./dm-test on a arch-linux2-c-dbg named yam.doc.ic.ac.uk by lmitche1 Wed Feb 25 11:43:35 2015 [0]PETSC ERROR: Configure options --download-chaco=1 - --download-ctetgen=1 --download-exodusii=1 --download-hdf5=1 - --download-hypre=1 --download-metis=1 --download-ml=1 - --download-mumps=1 --download-netcdf=1 --download-parmetis=1 - --download-ptscotch=1 --download-scalapack=1 --download-superlu=1 - --download-superlu_dist=1 --download-triangle=1 --with-c2html=0 - --with-debugging=1 --with-make-np=24 --with-openmp=0 - --with-pthreadclasses=0 --with-shared-libraries=1 --with-threadcomm=0 PETSC_ARCH=arch-linux2-c-dbg [0]PETSC ERROR: #1 PetscLayoutSetBlockSize() line 438 in /data/lmitche1/src/deps/petsc/src/vec/is/utils/pmap.c [0]PETSC ERROR: #2 MatCreateLocalRef() line 259 in /data/lmitche1/src/deps/petsc/src/mat/impls/localref/mlocalref.c [0]PETSC ERROR: #3 MatGetLocalSubMatrix() line 9523 in /data/lmitche1/src/deps/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: #4 main() line 66 in /data/lmitche1/src/petsc-doodles/dm-test.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -ndof_p 1 [0]PETSC ERROR: -ndof_v 2 [0]PETSC ERROR: -pack_dm_mat_type aij [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- This looks like it no longer works after the changes to unify ISLocalToGlobalMappingApply and ISLocalToGlobalMappingApplyBlock. At least, on 98c3331e5 (before the raft of changes in isltog.c) I get: $ ./dm-test -ndof_v 2 -ndof_p 1 -pack_dm_mat_type aij V Mat block size (2, 2) P Mat block size (1, 1) Composite Global Mat block size (1, 1) Mat Object: 1 MPI processes type: seqaij rows=30, cols=30 total: nonzeros=140, allocated nonzeros=140 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 20 nodes, limit used is 5 Local (0, 0) block has block size (2, 2) Mat Object: 1 MPI processes type: localref rows=20, cols=20, bs=2 Local (0, 1) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=20, cols=10 Local (1, 0) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=10, cols=20 Local (1, 1) block has block size (1, 1) Mat Object: 1 MPI processes type: localref rows=10, cols=10 Although ideally the off-diagonal blocks would also support MatSetValuesBlocked (with block sizes (1, 2) and (2, 1) respectively). 
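For reference, the sort of blocked insertion I mean for the bs=2 (0,0) block would be roughly the following. This is only a sketch: the indices and values are made up, and mat/ises are the composite matrix and local ISes from the program attached below.

  Mat         A00;
  PetscInt    row = 0, col = 0;                 /* block-local indices, placeholders */
  PetscScalar vals[4] = {1.0, 0.0, 0.0, 1.0};   /* one 2x2 block */

  MatGetLocalSubMatrix(mat, ises[0], ises[0], &A00);
  MatSetValuesBlockedLocal(A00, 1, &row, 1, &col, vals, ADD_VALUES);
  MatRestoreLocalSubMatrix(mat, ises[0], ises[0], &A00);
  MatAssemblyBegin(mat, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(mat, MAT_FINAL_ASSEMBLY);

Having the analogous calls work on the (0,1) and (1,0) blocks with rectangular block sizes is what I would ideally like.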
Cheers,

Lawrence

#include <petsc.h>   /* header name was lost in the archive; petsc.h (which pulls in the DMDA, DMComposite and Mat interfaces used below) is assumed */

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  DM v;
  DM p;
  DM pack;
  PetscInt rbs, cbs;
  PetscInt srbs, scbs;
  PetscInt i, j;
  PetscInt ndof_v = 1;
  PetscInt ndof_p = 1;
  Mat mat;
  Mat submat;
  IS *ises;
  MPI_Comm c;
  PetscViewer vwr;

  PetscInitialize(&argc, &argv, NULL, NULL);
  c = PETSC_COMM_WORLD;

  ierr = PetscOptionsGetInt(NULL, "-ndof_v", &ndof_v, NULL); CHKERRQ(ierr);
  ierr = PetscOptionsGetInt(NULL, "-ndof_p", &ndof_p, NULL); CHKERRQ(ierr);

  ierr = DMDACreate1d(c, DM_BOUNDARY_NONE, 10, ndof_v, 1, NULL, &v); CHKERRQ(ierr);
  ierr = DMDACreate1d(c, DM_BOUNDARY_NONE, 10, ndof_p, 1, NULL, &p); CHKERRQ(ierr);
  ierr = DMSetFromOptions(v); CHKERRQ(ierr);
  ierr = DMSetFromOptions(p); CHKERRQ(ierr);

  ierr = DMCreateMatrix(v, &mat); CHKERRQ(ierr);
  ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr);
  ierr = PetscPrintf(c, "V Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr);
  ierr = MatDestroy(&mat); CHKERRQ(ierr);

  ierr = DMCreateMatrix(p, &mat); CHKERRQ(ierr);
  ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr);
  ierr = PetscPrintf(c, "P Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr);
  ierr = MatDestroy(&mat); CHKERRQ(ierr);

  ierr = DMCompositeCreate(c, &pack); CHKERRQ(ierr);
  ierr = PetscObjectSetOptionsPrefix((PetscObject)pack, "pack_"); CHKERRQ(ierr);
  ierr = DMCompositeAddDM(pack, v); CHKERRQ(ierr);
  ierr = DMCompositeAddDM(pack, p); CHKERRQ(ierr);
  ierr = DMSetFromOptions(pack); CHKERRQ(ierr);

  ierr = DMCompositeGetLocalISs(pack, &ises); CHKERRQ(ierr);
  ierr = DMCreateMatrix(pack, &mat); CHKERRQ(ierr);

  ierr = PetscViewerCreate(c, &vwr); CHKERRQ(ierr);
  ierr = PetscViewerSetType(vwr, PETSCVIEWERASCII); CHKERRQ(ierr);
  ierr = PetscViewerSetFormat(vwr, PETSC_VIEWER_ASCII_INFO); CHKERRQ(ierr);
  ierr = PetscViewerSetUp(vwr); CHKERRQ(ierr);

  ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr);
  ierr = PetscPrintf(c, "Composite Global Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr);
  ierr = MatView(mat, vwr); CHKERRQ(ierr);
  ierr = PetscPrintf(c, "\n"); CHKERRQ(ierr);

  for (i = 0; i < 2; i++) {
    for (j = 0; j < 2; j++) {
      ierr = MatGetLocalSubMatrix(mat, ises[i], ises[j], &submat); CHKERRQ(ierr);
      if (submat) {
        ierr = MatGetBlockSizes(submat, &srbs, &scbs); CHKERRQ(ierr);
        ierr = PetscPrintf(c, "Local (%d, %d) block has block size (%d, %d)\n", i, j, srbs, scbs); CHKERRQ(ierr);
        ierr = MatAssemblyBegin(submat, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
        ierr = MatAssemblyEnd(submat, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
        ierr = MatView(submat, vwr); CHKERRQ(ierr);
        ierr = MatDestroy(&submat); CHKERRQ(ierr);
        ierr = PetscPrintf(c, "\n"); CHKERRQ(ierr);
      }
    }
  }

  ierr = PetscViewerDestroy(&vwr); CHKERRQ(ierr);
  ierr = MatDestroy(&mat); CHKERRQ(ierr);
  ierr = DMDestroy(&pack); CHKERRQ(ierr);
  ierr = DMDestroy(&v); CHKERRQ(ierr);
  ierr = DMDestroy(&p); CHKERRQ(ierr);
  ierr = ISDestroy(&(ises[0])); CHKERRQ(ierr);
  ierr = ISDestroy(&(ises[1])); CHKERRQ(ierr);
  ierr = PetscFree(ises); CHKERRQ(ierr);
  PetscFinalize();
  return 0;
}

From salazardetroya at gmail.com Wed Feb 25 08:48:10 2015
From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya)
Date: Wed, 25 Feb 2015 08:48:10 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: I modified the DMNetwork example to include the new DM with the modified section. It has the same problems. Please find attached the code to this email. Thanks On Tue, Feb 24, 2015 at 6:49 PM, Matthew Knepley wrote: > On Tue, Feb 24, 2015 at 6:42 PM, Miguel Angel Salazar de Troya < > salazardetroya at gmail.com> wrote: > >> I implemented the code as agreed, but I don't get the results I expected. >> When I create the vector with DMCreateGlobalVector(), I obtain a vector >> with a layout similar to the original DMNetwork, instead of the cloned >> network with the new PetscSection. The code is as follows: >> >> DMClone(dm, &dmEdge); >> >> PetscSectionCreate(PETSC_COMM_WORLD, &s); >> PetscSectionSetNumFields(s, 1); >> PetscSectionSetFieldComponents(s, 0, 1); >> >> // Now to set the chart, I pick the edge range >> >> DMNetworkGetEdgeRange(dmEdge, & eStart, & eEnd) >> >> PetscSectionSetChart(s, eStart, eEnd); >> >> for(PetscInt e = eStart; c < eEnd; ++e) { >> PetscSectionSetDof(s, e, 1); >> PetscSectionSetFieldDof(s, e, 0, 1); >> } >> PetscSectionSetUp(s); >> >> DMSetDefaultSection(dmEdge s); >> DMCreateGlobalVector(dmEdge, &globalVec); >> >> When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the debugger, >> in the function DMCreateSubDM_Section_Private() I call >> PetscSectionView() on the section >> > > I have no idea why you would be in DMCreateSubDM(). > > Just view globalVec. If the code is as above, it will give you a vector > with that layout. If not > it should be trivial to make a small code and send it. I do this > everywhere is PETSc, so the > basic mechanism certainly works. > > Thanks, > > Matt > > >> obtained by DMGetDefaultGlobalSection(dm, §ionGlobal), and I obtain >> a PetscSection nothing like the one I see when I call PetscSectionView() >> on the PetscSection I created above. Does this have anything to do? I >> tried to compare this strange PetscSection with the one from the original >> DMNetwork, I call DMGetDefaultGlobalSection(dm, §ionGlobal) before >> the first line of the snippet above and I get this error message. >> >> 0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Object is in wrong state >> [0]PETSC ERROR: DM must have a default PetscSection in order to create a >> global PetscSection >> >> Thanks in advance >> Miguel >> >> >> On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley >> wrote: >> >>> On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> Thanks a lot, the partition should be done before setting up the >>>> section, right? >>>> >>> >>> The partition will be automatic. All you have to do is make the local >>> section. The DM is already partitioned, >>> and the Section will inherit that. >>> >>> Matt >>> >>> >>>> Miguel >>>> >>>> On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Wouldn't including the edge variables in the global vector make the >>>>>> code slower? I'm using the global vector in a TS, using one of the explicit >>>>>> RK schemes. The edge variables would not be updated in the RHSFunction >>>>>> evaluation. I only change the edge variables in the TSUpdate. 
If the global >>>>>> vector had the edge variables, it would be a much larger vector, and all >>>>>> the vector operations performed by the TS would be slower. Although the >>>>>> vector F returned by the RHSFunction would be zero in the edge variable >>>>>> components. I guess that being the vector sparse that would not be a >>>>>> problem. >>>>>> >>>>>> I think I'm more interested in the PetscSection approach because it >>>>>> might require less modifications in my code. However, I don't know how I >>>>>> could do this. Maybe something like this? >>>>>> >>>>>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>>>>> PetscSectionSetNumFields(s, 1); >>>>>> PetscSectionSetFieldComponents(s, 0, 1); >>>>>> >>>>>> // Now to set the chart, I pick the edge range >>>>>> >>>>>> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >>>>>> >>>>>> PetscSectionSetChart(s, eStart, eEnd); >>>>>> >>>>>> for(PetscInt e = eStart; c < eEnd; ++e) { >>>>>> PetscSectionSetDof(s, e, 1); >>>>>> PetscSectionSetFieldDof(s, e, 1, 1); >>>>>> >>>>> >>>>> It should be PetscSectionSetFieldDof(s, e, 0, 1); >>>>> >>>>> >>>>>> } >>>>>> PetscSectionSetUp(s); >>>>>> >>>>>> Now in the manual I see this: >>>>>> >>>>> >>>>> First you want to do: >>>>> >>>>> DMClone(dm, &dmEdge); >>>>> >>>>> and then use dmEdge below. >>>>> >>>>> >>>>>> DMSetDefaultSection(dm, s); >>>>>> DMGetLocalVector(dm, &localVec); >>>>>> DMGetGlobalVector(dm, &globalVec); >>>>>> >>>>>> Setting up the default section in the DM would interfere with the >>>>>> section already set up with the variables in the vertices? >>>>>> >>>>> >>>>> Yep, thats why you would use a clone. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks a lot for your responses. >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>>>>>> For each edge, I extract or modify its corresponding value in a global >>>>>>>> petsc vector. Therefore that vector must have as many components as edges >>>>>>>> there are in the network. To extract the value in the vector, I use >>>>>>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>>>>>> The array that I obtain in VecGetArray() has to be the same size >>>>>>>> than the edge range. That variable counter starts as 0, so if the array >>>>>>>> that I obtained in VecGetArray() is x_array, x_array[0] must be >>>>>>>> the component in the global vector that corresponds with the start edge >>>>>>>> given in DMNetworkGetEdgeRange() >>>>>>>> >>>>>>>> I need that global petsc vector because I will use it in other >>>>>>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>>>>>> >>>>>>> >>>>>>> This sounds like an assembly operation. The usual paradigm is to >>>>>>> compute in the local space, and then communicate to get to the global >>>>>>> space. So you would make a PetscSection that had 1 (or some) unknowns on >>>>>>> each cell (edge) and then you can use DMCreateGlobal/LocalVector() and >>>>>>> DMLocalToGlobal() to do this. >>>>>>> >>>>>>> Does that make sense? 
>>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Miguel >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley >>>>>>> > wrote: >>>>>>>> >>>>>>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>>>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>>>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>>>>>> components and the second, the 5-9 components of the vector? >>>>>>>>>> >>>>>>>>> I think it would help to know what you want to accomplish. This is >>>>>>>>> how you are proposing to do it.' >>>>>>>>> >>>>>>>>> If you just want to put data on edges, DMNetwork has a facility >>>>>>>>> for that already. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> Matt >>>>>>>>> >>>>>>>>> >>>>>>>>>> Miguel >>>>>>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>>>>>> >>>>>>>>>>> Miguel, >>>>>>>>>>> One possible way is to store the global numbering of any >>>>>>>>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>>>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>>>>>> not used for anything. >>>>>>>>>>> >>>>>>>>>>> Shri >>>>>>>>>>> From: Matthew Knepley >>>>>>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>>>>>> To: Miguel Angel Salazar de Troya >>>>>>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>>>>>> >>>>>>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de >>>>>>>>>>> Troya wrote: >>>>>>>>>>> >>>>>>>>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>>>>>> network? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I do not completely understand the question. >>>>>>>>>>> >>>>>>>>>>> If you want a partition of the edges, you can use >>>>>>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>>>>>> are you trying to do? >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Thanks >>>>>>>>>>>> Miguel >>>>>>>>>>>> >>>>>>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de >>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi >>>>>>>>>>>>>> >>>>>>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns >>>>>>>>>>>>>> the local indices for the edge range. Is there any way to obtain the global >>>>>>>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>>>>>> numbering. 
Everything is numbered >>>>>>>>>>>>> locally, and the PetscSF maps local numbers to local numbers >>>>>>>>>>>>> in order to determine ownership. >>>>>>>>>>>>> >>>>>>>>>>>>> If you want to create a global numbering for some reason, >>>>>>>>>>>>> you can using DMPlexCreatePointNumbering(). >>>>>>>>>>>>> There are also cell and vertex versions that we use for >>>>>>>>>>>>> output, so you could do it just for edges as well. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> >>>>>>>>>>>>> Matt >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>> Miguel >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> What most experimenters take for granted before they begin >>>>>>>>>>>>> their experiments is infinitely more interesting than any results to which >>>>>>>>>>>>> their experiments lead. >>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>>> experiments lead. >>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>> experiments lead. >>>>>>>>> -- Norbert Wiener >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>> Graduate Research Assistant >>>>>>>> Department of Mechanical Science and Engineering >>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>> (217) 550-2360 >>>>>>>> salaza11 at illinois.edu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. 
>>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: case9.m Type: text/x-objcsrc Size: 2000 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 430 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pf.c Type: text/x-csrc Size: 23659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pf.h Type: text/x-chdr Size: 5267 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pfoptions Type: application/octet-stream Size: 262 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PFReadData.c Type: text/x-csrc Size: 6870 bytes Desc: not available URL: From karpeev at mcs.anl.gov Wed Feb 25 08:56:12 2015 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Wed, 25 Feb 2015 14:56:12 +0000 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" References: Message-ID: The previous patch didn't clean out the Mat data structure thoroughly enough. Please try the attached patch. It runs for me, but there is no convergence. In particular, KSP doesn't seem to converge, although that's with the default GMRES+BJACOBI/ILU(0) options. So I don't think resetting the Mat is the cause. Let me know if this works for you. MatReset() will be in the next release as well as, hopefully, an overhauled VI solver that should be able to handle these contact problems. Dmitry. On Mon Feb 23 2015 at 10:17:19 AM David Knezevic wrote: > OK, sounds good. Let me know if I can help with the digging. > > David > > > > On Mon, Feb 23, 2015 at 11:10 AM, Dmitry Karpeyev > wrote: > >> I just tried building against petsc/master, but there needs to be more >> work on libMesh before it can work with petsc/master: >> the new VecLockPush()/Pop() stuff isn't respected by vector manipulation >> in libMesh. >> I put a hack equivalent to MatReset() into your branch (patch attached, >> in case you want to take a look at it), >> but it generates the same error in MatCreateColmap that you reported >> earlier. 
It's odd that it occurs >> on the second nonlinear iteration. I'll have to dig a bit deeper to >> see what's going on. >> >> Dmitry. >> >> >> On Mon Feb 23 2015 at 10:03:33 AM David Knezevic < >> david.knezevic at akselos.com> wrote: >> >>> Hi Dmitry, >>> >>> OK, good to hear we're seeing the same behavior for the example. >>> >>> Regarding this comment: >>> >>> >>> libMesh needs to change the way configure extracts PETSc information -- >>>> configuration data were moved: >>>> conf --> lib/petsc-conf >>>> ${PETSC_ARCH}/conf --> ${PETSC_ARCH}/lib/petsc-conf >>>> >>>> At one point I started looking at m4/petsc.m4, but that got put on the >>>> back burner. For now making the relevant symlinks by hand lets you >>>> configure and build libMesh with petsc/master. >>>> >>> >>> >>> So do you suggest that the next step here is to build libmesh against >>> petsc/master so that we can try your PETSc pull request that implements >>> MatReset() to see if that gets this example working? >>> >>> David >>> >>> >>> >>> >>> >>>> On Mon Feb 23 2015 at 9:15:44 AM David Knezevic < >>>> david.knezevic at akselos.com> wrote: >>>> >>>>> Hi Dmitry, >>>>> >>>>> Thanks very much for testing out the example. >>>>> >>>>> examples/systems_of_equations/ex8 works fine for me in serial, but it >>>>> fails for me if I run with more than 1 MPI process. Can you try it with, >>>>> say, 2 or 4 MPI processes? >>>>> >>>>> If we need access to MatReset in libMesh to get this to work, I'll be >>>>> happy to work on a libMesh pull request for that. >>>>> >>>>> David >>>>> >>>>> >>>>> -- >>>>> >>>>> David J. Knezevic | CTO >>>>> Akselos | 17 Bay State Road | Boston, MA | 02215 >>>>> Phone (office): +1-857-265-2238 >>>>> Phone (mobile): +1-617-599-4755 >>>>> Web: http://www.akselos.com >>>>> >>>>> >>>>> On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev >>>> > wrote: >>>>> >>>>>> David, >>>>>> >>>>>> What code are you running when you encounter this error? I'm trying >>>>>> to reproduce it and >>>>>> I tried examples/systems_of_equations/ex8, but it ran for me, >>>>>> ostensibly to completion. >>>>>> >>>>>> I have a small PETSc pull request that implements MatReset(), which >>>>>> passes a small PETSc test, >>>>>> but libMesh needs some work to be able to build against petsc/master >>>>>> because of some recent >>>>>> changes to PETSc. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < >>>>>> david.knezevic at akselos.com> wrote: >>>>>> >>>>>>> Hi Barry, hi Dmitry, >>>>>>> >>>>>>> I set the matrix to BAIJ and back to AIJ, and the code got a bit >>>>>>> further. But I now run into the error pasted below (Note that I'm now using >>>>>>> "--with-debugging=1"): >>>>>>> >>>>>>> PETSC ERROR: --------------------- Error Message >>>>>>> -------------------------------------------------------------- >>>>>>> PETSC ERROR: Petsc has generated inconsistent data >>>>>>> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >>>>>>> PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>>>>> for trouble shooting. 
>>>>>>> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >>>>>>> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named >>>>>>> david-Lenovo by dknez Mon Feb 23 08:05:44 2015 >>>>>>> PETSC ERROR: Configure options --with-shared-libraries=1 >>>>>>> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >>>>>>> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >>>>>>> --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >>>>>>> --download-hypre >>>>>>> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >>>>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>>>> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >>>>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>>>> PETSC ERROR: #3 MatSetValues() line 1136 in >>>>>>> /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c >>>>>>> PETSC ERROR: #4 add_matrix() line 765 in >>>>>>> /home/dknez/software/libmesh-src/src/numerics/petsc_matrix.C >>>>>>> ------------------------------------------------------------ >>>>>>> -------------- >>>>>>> >>>>>>> This occurs when I try to set some entries of the matrix. Do you >>>>>>> have any suggestions on how I can resolve this? >>>>>>> >>>>>>> Thanks! >>>>>>> David >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev < >>>>>>> dkarpeev at gmail.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith >>>>>>>> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>>> > >>>>>>>>> > Hi Dmitry, >>>>>>>>> > >>>>>>>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) >>>>>>>>> followed by MatXAIJSetPreallocation(...), but unfortunately this still >>>>>>>>> gives me the same error as before: "nnz cannot be greater than row length: >>>>>>>>> local row 168 value 24 rowlength 0". >>>>>>>>> > >>>>>>>>> > I gather that the idea here is that MatSetType builds a new >>>>>>>>> matrix object, and then I should be able to pre-allocate for that new >>>>>>>>> matrix however I like, right? Was I supposed to clear the matrix object >>>>>>>>> somehow before calling MatSetType? (I didn't do any sort of clear >>>>>>>>> operation.) >>>>>>>>> >>>>>>>>> If the type doesn't change then MatSetType() won't do anything. >>>>>>>>> You can try setting the type to BAIJ and then setting the type back to AIJ. >>>>>>>>> This may/should clear out the matrix. >>>>>>>>> >>>>>>>> Ah, yes. If the type is the same as before it does quit early, but >>>>>>>> changing the type and then back will clear out and rebuild the matrix. We >>>>>>>> need >>>>>>>> something like MatReset() to do the equivalent thing. >>>>>>>> >>>>>>>>> >>>>>>>>> > >>>>>>>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully >>>>>>>>> that will help shed some light on what's going wrong for me. >>>>>>>>> >>>>>>>> I think it's always a good idea to have a dbg build of PETSc when >>>>>>>> you doing things like these. >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>>> >>>>>>>>> Don't bother, what I suggested won't work. 
>>>>>>>>> >>>>>>>>> Barry >>>>>>>>> >>>>>>>>> >>>>>>>>> > >>>>>>>>> > Thanks, >>>>>>>>> > David >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev < >>>>>>>>> dkarpeev at gmail.com> wrote: >>>>>>>>> > David, >>>>>>>>> > It might be easier to just rebuild the whole matrix from >>>>>>>>> scratch: you would in effect be doing all that with disassembling and >>>>>>>>> resetting the preallocation. >>>>>>>>> > MatSetType(mat,MATMPIAIJ) >>>>>>>>> > or >>>>>>>>> > PetscObjectGetType((PetscObject)mat,&type); >>>>>>>>> > MatSetType(mat,type); >>>>>>>>> > followed by >>>>>>>>> > MatXAIJSetPreallocation(...); >>>>>>>>> > should do. >>>>>>>>> > Dmitry. >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith >>>>>>>>> wrote: >>>>>>>>> > >>>>>>>>> > Do not call for SeqAIJ matrix. Do not call before the first >>>>>>>>> time you have preallocated and put entries in the matrix and done the >>>>>>>>> MatAssemblyBegin/End() >>>>>>>>> > >>>>>>>>> > If it still crashes you'll need to try the debugger >>>>>>>>> > >>>>>>>>> > Barry >>>>>>>>> > >>>>>>>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>>> > > >>>>>>>>> > > Hi Barry, >>>>>>>>> > > >>>>>>>>> > > Thanks for your help, much appreciated. >>>>>>>>> > > >>>>>>>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>>>>>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>>>>>>> > > >>>>>>>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>>>>>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>>>>>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>>>>>>>> > > >>>>>>>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug >>>>>>>>> build (though I could rebuild PETSc in debug mode if you think that would >>>>>>>>> help figure out what's happening here). >>>>>>>>> > > >>>>>>>>> > > Thanks, >>>>>>>>> > > David >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith < >>>>>>>>> bsmith at mcs.anl.gov> wrote: >>>>>>>>> > > >>>>>>>>> > > David, >>>>>>>>> > > >>>>>>>>> > > This is an obscure little feature of MatMPIAIJ, each time >>>>>>>>> you change the sparsity pattern before you call the >>>>>>>>> MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat >>>>>>>>> mat). This is a private PETSc function so you need to provide your own >>>>>>>>> prototype for it above the function you use it in. >>>>>>>>> > > >>>>>>>>> > > Let us know if this resolves the problem. >>>>>>>>> > > >>>>>>>>> > > Barry >>>>>>>>> > > >>>>>>>>> > > We never really intended that people would call >>>>>>>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>>> > > > >>>>>>>>> > > > Hi all, >>>>>>>>> > > > >>>>>>>>> > > > I've implemented a solver for a contact problem using SNES. >>>>>>>>> The sparsity pattern of the jacobian matrix needs to change at each >>>>>>>>> nonlinear iteration (because the elements which are in contact can change), >>>>>>>>> so I tried to deal with this by calling MatSeqAIJSetPreallocation and >>>>>>>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>>>>>>> preallocation. 
>>>>>>>>> > > > >>>>>>>>> > > > This seems to work fine in serial, but with two or more MPI >>>>>>>>> processes I run into the error "nnz cannot be greater than row length", >>>>>>>>> e.g.: >>>>>>>>> > > > nnz cannot be greater than row length: local row 528 value >>>>>>>>> 12 rowlength 0 >>>>>>>>> > > > >>>>>>>>> > > > This error is from the call to >>>>>>>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>>>>>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>>>>>>> > > > >>>>>>>>> > > > Any guidance on what the problem might be would be most >>>>>>>>> appreciated. For example, I was wondering if there is a problem with >>>>>>>>> calling SetPreallocation on a matrix that has already been preallocated? >>>>>>>>> > > > >>>>>>>>> > > > Some notes: >>>>>>>>> > > > - I'm using PETSc via libMesh >>>>>>>>> > > > - The code that triggers this issue is available as a PR on >>>>>>>>> the libMesh github repo, in case anyone is interested: >>>>>>>>> https://github.com/libMesh/libmesh/pull/460/ >>>>>>>>> > > > - I can try to make a minimal pure-PETSc example that >>>>>>>>> reproduces this error, if that would be helpful. >>>>>>>>> > > > >>>>>>>>> > > > Many thanks, >>>>>>>>> > > > David >>>>>>>>> > > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> >>>>>>>>> >>>>>>> >>>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mat-reset-hack.patch Type: application/octet-stream Size: 1141 bytes Desc: not available URL: From david.knezevic at akselos.com Wed Feb 25 09:47:18 2015 From: david.knezevic at akselos.com (David Knezevic) Date: Wed, 25 Feb 2015 10:47:18 -0500 Subject: [petsc-users] MatMPIAIJSetPreallocation: "nnz cannot be greater than row length" In-Reply-To: References: Message-ID: Hi Dmitry, Thanks so much, that's great! I've updated the pull request now to include your patch: https://github.com/dknez/libmesh/commit/247cc12f7536a5f848fd5c63765d97af307d9fa7 We can update the code later, once MatReset() is available in PETSc. Regarding the convergence issues: I think the main issue is that here we're using the penalty method for contact, which gives a non-smooth term in the PDE, and hence we don't get nice convergence with the nonlinear solver. In particular, the SNES solver terminates with "DIVERGED_LINE_SEARCH", but I don't think that's very surprising here. The simulation results seem to be good though (e.g. see the screenshots in the PR). I'd be interested in your thoughts on this though? I'm not sure why the KSP isn't converging fully. I didn't put any thought into the choice of default solver, happy to change that (to CG or whatever). But I also used a direct solver for comparison, and SuperLU gives the same overall result that I get with GMRES (though MUMPS crashes with "Numerically singular matrix" for some reason). Regarding the VI solver: That sounds very good, I'll be interested to try the overhauled VI solver out for contact problems, once it's available. Thanks, David On Wed, Feb 25, 2015 at 9:56 AM, Dmitry Karpeyev wrote: > The previous patch didn't clean out the Mat data structure thoroughly > enough. Please try the attached patch. It runs for me, but there is no > convergence. In particular, KSP doesn't seem to converge, although that's > with the default GMRES+BJACOBI/ILU(0) options. So I don't think resetting > the Mat is the cause. > > Let me know if this works for you. 
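An aside on diagnosing the convergence questions raised here: the behavior of the line search and of the linear solve is usually easiest to narrow down with standard PETSc runtime options, none of which are specific to this thread, for example

    -snes_monitor -snes_converged_reason
    -ksp_monitor_true_residual -ksp_converged_reason

and, for a direct-solver comparison with the packages from the configure line quoted earlier,

    -pc_type lu -pc_factor_mat_solver_package superlu_dist

(or mumps in place of superlu_dist).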
MatReset() will be in the next release > as well as, hopefully, an overhauled VI solver that should be able to > handle these contact problems. > > Dmitry. > > On Mon Feb 23 2015 at 10:17:19 AM David Knezevic < > david.knezevic at akselos.com> wrote: > >> OK, sounds good. Let me know if I can help with the digging. >> >> David >> >> >> >> On Mon, Feb 23, 2015 at 11:10 AM, Dmitry Karpeyev >> wrote: >> >>> I just tried building against petsc/master, but there needs to be more >>> work on libMesh before it can work with petsc/master: >>> the new VecLockPush()/Pop() stuff isn't respected by vector manipulation >>> in libMesh. >>> I put a hack equivalent to MatReset() into your branch (patch attached, >>> in case you want to take a look at it), >>> but it generates the same error in MatCreateColmap that you reported >>> earlier. It's odd that it occurs >>> on the second nonlinear iteration. I'll have to dig a bit deeper to >>> see what's going on. >>> >>> Dmitry. >>> >>> >>> On Mon Feb 23 2015 at 10:03:33 AM David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> >>>> Hi Dmitry, >>>> >>>> OK, good to hear we're seeing the same behavior for the example. >>>> >>>> Regarding this comment: >>>> >>>> >>>> libMesh needs to change the way configure extracts PETSc information -- >>>>> configuration data were moved: >>>>> conf --> lib/petsc-conf >>>>> ${PETSC_ARCH}/conf --> ${PETSC_ARCH}/lib/petsc-conf >>>>> >>>>> At one point I started looking at m4/petsc.m4, but that got put on the >>>>> back burner. For now making the relevant symlinks by hand lets you >>>>> configure and build libMesh with petsc/master. >>>>> >>>> >>>> >>>> So do you suggest that the next step here is to build libmesh against >>>> petsc/master so that we can try your PETSc pull request that implements >>>> MatReset() to see if that gets this example working? >>>> >>>> David >>>> >>>> >>>> >>>> >>>> >>>>> On Mon Feb 23 2015 at 9:15:44 AM David Knezevic < >>>>> david.knezevic at akselos.com> wrote: >>>>> >>>>>> Hi Dmitry, >>>>>> >>>>>> Thanks very much for testing out the example. >>>>>> >>>>>> examples/systems_of_equations/ex8 works fine for me in serial, but >>>>>> it fails for me if I run with more than 1 MPI process. Can you try it with, >>>>>> say, 2 or 4 MPI processes? >>>>>> >>>>>> If we need access to MatReset in libMesh to get this to work, I'll be >>>>>> happy to work on a libMesh pull request for that. >>>>>> >>>>>> David >>>>>> >>>>>> >>>>>> -- >>>>>> >>>>>> David J. Knezevic | CTO >>>>>> Akselos | 17 Bay State Road | Boston, MA | 02215 >>>>>> Phone (office): +1-857-265-2238 >>>>>> Phone (mobile): +1-617-599-4755 >>>>>> Web: http://www.akselos.com >>>>>> >>>>>> >>>>>> On Mon, Feb 23, 2015 at 10:08 AM, Dmitry Karpeyev < >>>>>> karpeev at mcs.anl.gov> wrote: >>>>>> >>>>>>> David, >>>>>>> >>>>>>> What code are you running when you encounter this error? I'm trying >>>>>>> to reproduce it and >>>>>>> I tried examples/systems_of_equations/ex8, but it ran for me, >>>>>>> ostensibly to completion. >>>>>>> >>>>>>> I have a small PETSc pull request that implements MatReset(), which >>>>>>> passes a small PETSc test, >>>>>>> but libMesh needs some work to be able to build against petsc/master >>>>>>> because of some recent >>>>>>> changes to PETSc. >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> On Mon Feb 23 2015 at 7:17:06 AM David Knezevic < >>>>>>> david.knezevic at akselos.com> wrote: >>>>>>> >>>>>>>> Hi Barry, hi Dmitry, >>>>>>>> >>>>>>>> I set the matrix to BAIJ and back to AIJ, and the code got a bit >>>>>>>> further. 
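The type round-trip referred to here ("set the matrix to BAIJ and back to AIJ") amounts to two MatSetType() calls followed by a fresh preallocation. A minimal sketch, in which d_nnz and o_nnz stand for the new per-row counts and are not taken from the thread:

    MatSetType(J, MATBAIJ);                        /* force the type to change ...      */
    MatSetType(J, MATAIJ);                         /* ... and back, discarding old data */
    MatXAIJSetPreallocation(J, 1, d_nnz, o_nnz, NULL, NULL);

As the surrounding messages show, this was a workaround suggestion rather than a supported path, so treat it as an experiment, not an API guarantee.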
But I now run into the error pasted below (Note that I'm now using >>>>>>>> "--with-debugging=1"): >>>>>>>> >>>>>>>> PETSC ERROR: --------------------- Error Message >>>>>>>> -------------------------------------------------------------- >>>>>>>> PETSC ERROR: Petsc has generated inconsistent data >>>>>>>> PETSC ERROR: MPIAIJ Matrix was assembled but is missing garray >>>>>>>> PETSC ERROR: See http://www.mcs.anl.gov/petsc/ >>>>>>>> documentation/faq.html for trouble shooting. >>>>>>>> PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 >>>>>>>> PETSC ERROR: ./example-dbg on a arch-linux2-c-debug named >>>>>>>> david-Lenovo by dknez Mon Feb 23 08:05:44 2015 >>>>>>>> PETSC ERROR: Configure options --with-shared-libraries=1 >>>>>>>> --with-debugging=1 --download-suitesparse=1 --download-parmetis=1 >>>>>>>> --download-blacs=1 --download-scalapack=1 --download-mumps=1 >>>>>>>> --download-metis --download-superlu_dist --prefix=/home/dknez/software/libmesh_install/dbg_real/petsc >>>>>>>> --download-hypre >>>>>>>> PETSC ERROR: #1 MatCreateColmap_MPIAIJ_Private() line 361 in >>>>>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>>>>> PETSC ERROR: #2 MatSetValues_MPIAIJ() line 538 in >>>>>>>> /home/dknez/software/petsc-3.5.2/src/mat/impls/aij/mpi/mpiaij.c >>>>>>>> PETSC ERROR: #3 MatSetValues() line 1136 in >>>>>>>> /home/dknez/software/petsc-3.5.2/src/mat/interface/matrix.c >>>>>>>> PETSC ERROR: #4 add_matrix() line 765 in >>>>>>>> /home/dknez/software/libmesh-src/src/numerics/petsc_matrix.C >>>>>>>> ------------------------------------------------------------ >>>>>>>> -------------- >>>>>>>> >>>>>>>> This occurs when I try to set some entries of the matrix. Do you >>>>>>>> have any suggestions on how I can resolve this? >>>>>>>> >>>>>>>> Thanks! >>>>>>>> David >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sun, Feb 22, 2015 at 10:22 PM, Dmitry Karpeyev < >>>>>>>> dkarpeev at gmail.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sun Feb 22 2015 at 9:15:22 PM Barry Smith >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> > On Feb 22, 2015, at 9:09 PM, David Knezevic < >>>>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>>>> > >>>>>>>>>> > Hi Dmitry, >>>>>>>>>> > >>>>>>>>>> > Thanks for the suggestion. I tried MatSetType(mat,MATMPIAIJ) >>>>>>>>>> followed by MatXAIJSetPreallocation(...), but unfortunately this still >>>>>>>>>> gives me the same error as before: "nnz cannot be greater than row length: >>>>>>>>>> local row 168 value 24 rowlength 0". >>>>>>>>>> > >>>>>>>>>> > I gather that the idea here is that MatSetType builds a new >>>>>>>>>> matrix object, and then I should be able to pre-allocate for that new >>>>>>>>>> matrix however I like, right? Was I supposed to clear the matrix object >>>>>>>>>> somehow before calling MatSetType? (I didn't do any sort of clear >>>>>>>>>> operation.) >>>>>>>>>> >>>>>>>>>> If the type doesn't change then MatSetType() won't do anything. >>>>>>>>>> You can try setting the type to BAIJ and then setting the type back to AIJ. >>>>>>>>>> This may/should clear out the matrix. >>>>>>>>>> >>>>>>>>> Ah, yes. If the type is the same as before it does quit early, >>>>>>>>> but changing the type and then back will clear out and rebuild the matrix. >>>>>>>>> We need >>>>>>>>> something like MatReset() to do the equivalent thing. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> > >>>>>>>>>> > As I said earlier, I'll make a dbg PETSc build, so hopefully >>>>>>>>>> that will help shed some light on what's going wrong for me. 
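For completeness, the debug build being discussed is just a reconfigure with --with-debugging=1 (exactly as in the configure line quoted in the error message above), and the failure point can then be caught with PETSc's standard debugger options, e.g.

    ./configure --with-debugging=1 <other options as before>
    mpiexec -n 2 ./example-dbg -start_in_debugger
    mpiexec -n 2 ./example-dbg -on_error_attach_debugger

The executable name example-dbg is taken from the error output above; everything else is generic.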
>>>>>>>>>> >>>>>>>>> I think it's always a good idea to have a dbg build of PETSc when >>>>>>>>> you doing things like these. >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Don't bother, what I suggested won't work. >>>>>>>>>> >>>>>>>>>> Barry >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> > >>>>>>>>>> > Thanks, >>>>>>>>>> > David >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > On Sun, Feb 22, 2015 at 6:02 PM, Dmitry Karpeyev < >>>>>>>>>> dkarpeev at gmail.com> wrote: >>>>>>>>>> > David, >>>>>>>>>> > It might be easier to just rebuild the whole matrix from >>>>>>>>>> scratch: you would in effect be doing all that with disassembling and >>>>>>>>>> resetting the preallocation. >>>>>>>>>> > MatSetType(mat,MATMPIAIJ) >>>>>>>>>> > or >>>>>>>>>> > PetscObjectGetType((PetscObject)mat,&type); >>>>>>>>>> > MatSetType(mat,type); >>>>>>>>>> > followed by >>>>>>>>>> > MatXAIJSetPreallocation(...); >>>>>>>>>> > should do. >>>>>>>>>> > Dmitry. >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > On Sun Feb 22 2015 at 4:45:46 PM Barry Smith < >>>>>>>>>> bsmith at mcs.anl.gov> wrote: >>>>>>>>>> > >>>>>>>>>> > Do not call for SeqAIJ matrix. Do not call before the first >>>>>>>>>> time you have preallocated and put entries in the matrix and done the >>>>>>>>>> MatAssemblyBegin/End() >>>>>>>>>> > >>>>>>>>>> > If it still crashes you'll need to try the debugger >>>>>>>>>> > >>>>>>>>>> > Barry >>>>>>>>>> > >>>>>>>>>> > > On Feb 22, 2015, at 4:09 PM, David Knezevic < >>>>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>>>> > > >>>>>>>>>> > > Hi Barry, >>>>>>>>>> > > >>>>>>>>>> > > Thanks for your help, much appreciated. >>>>>>>>>> > > >>>>>>>>>> > > I added a prototype for MatDisAssemble_MPIAIJ: >>>>>>>>>> > > PETSC_INTERN PetscErrorCode MatDisAssemble_MPIAIJ(Mat); >>>>>>>>>> > > >>>>>>>>>> > > and I added a call to MatDisAssemble_MPIAIJ before >>>>>>>>>> MatMPIAIJSetPreallocation. However, I get a segfault on the call to >>>>>>>>>> MatDisAssemble_MPIAIJ. The segfault occurs in both serial and parallel. >>>>>>>>>> > > >>>>>>>>>> > > FYI, I'm using Petsc 3.5.2, and I'm not using a non-debug >>>>>>>>>> build (though I could rebuild PETSc in debug mode if you think that would >>>>>>>>>> help figure out what's happening here). >>>>>>>>>> > > >>>>>>>>>> > > Thanks, >>>>>>>>>> > > David >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > > On Sun, Feb 22, 2015 at 1:13 PM, Barry Smith < >>>>>>>>>> bsmith at mcs.anl.gov> wrote: >>>>>>>>>> > > >>>>>>>>>> > > David, >>>>>>>>>> > > >>>>>>>>>> > > This is an obscure little feature of MatMPIAIJ, each >>>>>>>>>> time you change the sparsity pattern before you call the >>>>>>>>>> MatMPIAIJSetPreallocation you need to call MatDisAssemble_MPIAIJ(Mat >>>>>>>>>> mat). This is a private PETSc function so you need to provide your own >>>>>>>>>> prototype for it above the function you use it in. >>>>>>>>>> > > >>>>>>>>>> > > Let us know if this resolves the problem. >>>>>>>>>> > > >>>>>>>>>> > > Barry >>>>>>>>>> > > >>>>>>>>>> > > We never really intended that people would call >>>>>>>>>> MatMPIAIJSetPreallocation() AFTER they had already used the matrix. >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > > > On Feb 22, 2015, at 6:50 AM, David Knezevic < >>>>>>>>>> david.knezevic at akselos.com> wrote: >>>>>>>>>> > > > >>>>>>>>>> > > > Hi all, >>>>>>>>>> > > > >>>>>>>>>> > > > I've implemented a solver for a contact problem using SNES. 
>>>>>>>>>> The sparsity pattern of the jacobian matrix needs to change at each >>>>>>>>>> nonlinear iteration (because the elements which are in contact can change), >>>>>>>>>> so I tried to deal with this by calling MatSeqAIJSetPreallocation and >>>>>>>>>> MatMPIAIJSetPreallocation during each iteration in order to update the >>>>>>>>>> preallocation. >>>>>>>>>> > > > >>>>>>>>>> > > > This seems to work fine in serial, but with two or more MPI >>>>>>>>>> processes I run into the error "nnz cannot be greater than row length", >>>>>>>>>> e.g.: >>>>>>>>>> > > > nnz cannot be greater than row length: local row 528 value >>>>>>>>>> 12 rowlength 0 >>>>>>>>>> > > > >>>>>>>>>> > > > This error is from the call to >>>>>>>>>> > > > MatSeqAIJSetPreallocation(b->B,o_nz,o_nnz); in >>>>>>>>>> MatMPIAIJSetPreallocation_MPIAIJ. >>>>>>>>>> > > > >>>>>>>>>> > > > Any guidance on what the problem might be would be most >>>>>>>>>> appreciated. For example, I was wondering if there is a problem with >>>>>>>>>> calling SetPreallocation on a matrix that has already been preallocated? >>>>>>>>>> > > > >>>>>>>>>> > > > Some notes: >>>>>>>>>> > > > - I'm using PETSc via libMesh >>>>>>>>>> > > > - The code that triggers this issue is available as a PR on >>>>>>>>>> the libMesh github repo, in case anyone is interested: >>>>>>>>>> https://github.com/libMesh/libmesh/pull/460/ >>>>>>>>>> > > > - I can try to make a minimal pure-PETSc example that >>>>>>>>>> reproduces this error, if that would be helpful. >>>>>>>>>> > > > >>>>>>>>>> > > > Many thanks, >>>>>>>>>> > > > David >>>>>>>>>> > > > >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Wed Feb 25 09:59:00 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Wed, 25 Feb 2015 15:59:00 +0000 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: Message-ID: Miguel, I'm a bit tied up today. I'll try to debug this issue tomorrow and get back to you. Thanks, Shri From: Miguel Angel Salazar de Troya > Date: Wed, 25 Feb 2015 08:48:10 -0600 To: Matthew Knepley > Cc: Shri >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel I modified the DMNetwork example to include the new DM with the modified section. It has the same problems. Please find attached the code to this email. Thanks On Tue, Feb 24, 2015 at 6:49 PM, Matthew Knepley > wrote: On Tue, Feb 24, 2015 at 6:42 PM, Miguel Angel Salazar de Troya > wrote: I implemented the code as agreed, but I don't get the results I expected. When I create the vector with DMCreateGlobalVector(), I obtain a vector with a layout similar to the original DMNetwork, instead of the cloned network with the new PetscSection. 
The code is as follows: DMClone(dm, &dmEdge); PetscSectionCreate(PETSC_COMM_WORLD, &s); PetscSectionSetNumFields(s, 1); PetscSectionSetFieldComponents(s, 0, 1); // Now to set the chart, I pick the edge range DMNetworkGetEdgeRange(dmEdge, & eStart, & eEnd) PetscSectionSetChart(s, eStart, eEnd); for(PetscInt e = eStart; c < eEnd; ++e) { PetscSectionSetDof(s, e, 1); PetscSectionSetFieldDof(s, e, 0, 1); } PetscSectionSetUp(s); DMSetDefaultSection(dmEdge s); DMCreateGlobalVector(dmEdge, &globalVec); When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the debugger, in the function DMCreateSubDM_Section_Private() I call PetscSectionView() on the section I have no idea why you would be in DMCreateSubDM(). Just view globalVec. If the code is as above, it will give you a vector with that layout. If not it should be trivial to make a small code and send it. I do this everywhere is PETSc, so the basic mechanism certainly works. Thanks, Matt obtained by DMGetDefaultGlobalSection(dm, §ionGlobal), and I obtain a PetscSection nothing like the one I see when I call PetscSectionView() on the PetscSection I created above. Does this have anything to do? I tried to compare this strange PetscSection with the one from the original DMNetwork, I call DMGetDefaultGlobalSection(dm, §ionGlobal) before the first line of the snippet above and I get this error message. 0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: DM must have a default PetscSection in order to create a global PetscSection Thanks in advance Miguel On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya > wrote: Thanks a lot, the partition should be done before setting up the section, right? The partition will be automatic. All you have to do is make the local section. The DM is already partitioned, and the Section will inherit that. Matt Miguel On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya > wrote: Wouldn't including the edge variables in the global vector make the code slower? I'm using the global vector in a TS, using one of the explicit RK schemes. The edge variables would not be updated in the RHSFunction evaluation. I only change the edge variables in the TSUpdate. If the global vector had the edge variables, it would be a much larger vector, and all the vector operations performed by the TS would be slower. Although the vector F returned by the RHSFunction would be zero in the edge variable components. I guess that being the vector sparse that would not be a problem. I think I'm more interested in the PetscSection approach because it might require less modifications in my code. However, I don't know how I could do this. Maybe something like this? PetscSectionCreate(PETSC_COMM_WORLD, &s); PetscSectionSetNumFields(s, 1); PetscSectionSetFieldComponents(s, 0, 1); // Now to set the chart, I pick the edge range DMNetworkGetEdgeRange(dm, & eStart, & eEnd PetscSectionSetChart(s, eStart, eEnd); for(PetscInt e = eStart; c < eEnd; ++e) { PetscSectionSetDof(s, e, 1); PetscSectionSetFieldDof(s, e, 1, 1); It should be PetscSectionSetFieldDof(s, e, 0, 1); } PetscSectionSetUp(s); Now in the manual I see this: First you want to do: DMClone(dm, &dmEdge); and then use dmEdge below. 
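Pulling together the corrections Matt makes above (clone first, loop on e rather than c, field index 0, and set the section on the clone), a consolidated sketch of the intended flow is given below. It is a sketch only: error checking is omitted, dm is the already-built DMNetwork from the thread, and, as later messages in this thread show, DMNetwork in this PETSc version still forwards vector creation to its internal plex, so this reflects the intended usage rather than a guaranteed fix.

    /* headers: petscdmnetwork.h (and petscis.h for PetscSection, depending on version) */
    DM            dmEdge;
    PetscSection  s;
    Vec           localVec, globalVec;
    PetscScalar  *larray;
    PetscInt      eStart, eEnd, e, off;

    DMClone(dm, &dmEdge);                            /* leave the vertex layout on dm intact */
    PetscSectionCreate(PetscObjectComm((PetscObject)dmEdge), &s);
    PetscSectionSetNumFields(s, 1);
    PetscSectionSetFieldComponents(s, 0, 1);
    DMNetworkGetEdgeRange(dmEdge, &eStart, &eEnd);
    PetscSectionSetChart(s, eStart, eEnd);
    for (e = eStart; e < eEnd; ++e) {                /* loop variable is e, not c */
      PetscSectionSetDof(s, e, 1);
      PetscSectionSetFieldDof(s, e, 0, 1);           /* field 0 */
    }
    PetscSectionSetUp(s);
    DMSetDefaultSection(dmEdge, s);                  /* the section goes on the clone */

    /* fill edge values locally, then communicate to the global vector */
    DMCreateLocalVector(dmEdge, &localVec);
    DMCreateGlobalVector(dmEdge, &globalVec);
    VecGetArray(localVec, &larray);
    for (e = eStart; e < eEnd; ++e) {
      PetscSectionGetOffset(s, e, &off);
      larray[off] = (PetscScalar)e;                  /* placeholder value for this edge */
    }
    VecRestoreArray(localVec, &larray);
    DMLocalToGlobalBegin(dmEdge, localVec, INSERT_VALUES, globalVec);
    DMLocalToGlobalEnd(dmEdge, localVec, INSERT_VALUES, globalVec);

The second half of the sketch is the local-compute/global-communicate pattern Matt describes earlier in the thread; the placeholder value is only there to make the loop concrete.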
DMSetDefaultSection(dm, s); DMGetLocalVector(dm, &localVec); DMGetGlobalVector(dm, &globalVec); Setting up the default section in the DM would interfere with the section already set up with the variables in the vertices? Yep, thats why you would use a clone. Thanks, Matt Thanks a lot for your responses. On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya > wrote: I'm iterating through local edges given in DMNetworkGetEdgeRange(). For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. This sounds like an assembly operation. The usual paradigm is to compute in the local space, and then communicate to get to the global space. So you would make a PetscSection that had 1 (or some) unknowns on each cell (edge) and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to do this. Does that make sense? Thanks, Matt Miguel On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya > wrote: Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? I think it would help to know what you want to accomplish. This is how you are proposing to do it.' If you just want to put data on edges, DMNetwork has a facility for that already. Thanks, Matt Miguel On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." > wrote: Miguel, One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. Shri From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya > wrote: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? I do not completely understand the question. If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? 
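(Referring back to Shri's earlier suggestion of carrying the global edge number in a component attached to each edge: a sketch modeled on the pf.c example he mentions is given below. The struct, component name and key are hypothetical, error checking is omitted, and the ordering constraints, registering and adding components after DMNetworkLayoutSetUp() and before DMSetUp(), should be checked against pf.c itself.)

    typedef struct { PetscInt gidx; } EdgeID;        /* hypothetical one-field component */

    PetscInt  key, eStart, eEnd, e;
    EdgeID   *ids;

    DMNetworkRegisterComponent(dm, "edgeid", sizeof(EdgeID), &key);
    DMNetworkGetEdgeRange(dm, &eStart, &eEnd);
    PetscMalloc1(eEnd - eStart, &ids);
    for (e = eStart; e < eEnd; ++e) {
      ids[e - eStart].gidx = e;                      /* numbering assigned before distribution */
      DMNetworkAddComponent(dm, e, key, &ids[e - eStart]);
    }

After the network is distributed, each local edge still carries its component, so the original number can be read back the same way pf.c reads its branch and bus data.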
Matt Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya > wrote: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? One of the points of DMPlex is we do not require a global numbering. Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. Thanks, Matt Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Feb 25 10:12:43 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Feb 2015 10:12:43 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: On Wed, Feb 25, 2015 at 9:59 AM, Abhyankar, Shrirang G. wrote: > Miguel, > I'm a bit tied up today. I'll try to debug this issue tomorrow and get > back to you. > The problem is the way that DMNetwork is using the default section. When DMCreateGlobalVec() is called, it uses the default section for its included network->plex, but DMSetDefaultSection() sets the section associated with network. This is inconsistent. I am sending the code which I hacked to show the correct result by using the private header. I think that 1) The underlying Plex should be exposed by DMNetworkGetPlex() 2) The default section business should be made consistent Thanks, Matt > Thanks, > Shri > > From: Miguel Angel Salazar de Troya > Date: Wed, 25 Feb 2015 08:48:10 -0600 > To: Matthew Knepley > Cc: Shri , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel > > I modified the DMNetwork example to include the new DM with the > modified section. It has the same problems. Please find attached the code > to this email. > > Thanks > > On Tue, Feb 24, 2015 at 6:49 PM, Matthew Knepley > wrote: > >> On Tue, Feb 24, 2015 at 6:42 PM, Miguel Angel Salazar de Troya < >> salazardetroya at gmail.com> wrote: >> >>> I implemented the code as agreed, but I don't get the results I >>> expected. When I create the vector with DMCreateGlobalVector(), I obtain a >>> vector with a layout similar to the original DMNetwork, instead of the >>> cloned network with the new PetscSection. The code is as follows: >>> >>> DMClone(dm, &dmEdge); >>> >>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>> PetscSectionSetNumFields(s, 1); >>> PetscSectionSetFieldComponents(s, 0, 1); >>> >>> // Now to set the chart, I pick the edge range >>> >>> DMNetworkGetEdgeRange(dmEdge, & eStart, & eEnd) >>> >>> PetscSectionSetChart(s, eStart, eEnd); >>> >>> for(PetscInt e = eStart; c < eEnd; ++e) { >>> PetscSectionSetDof(s, e, 1); >>> PetscSectionSetFieldDof(s, e, 0, 1); >>> } >>> PetscSectionSetUp(s); >>> >>> DMSetDefaultSection(dmEdge s); >>> DMCreateGlobalVector(dmEdge, &globalVec); >>> >>> When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the >>> debugger, in the function DMCreateSubDM_Section_Private() I call >>> PetscSectionView() on the section >>> >> >> I have no idea why you would be in DMCreateSubDM(). >> >> Just view globalVec. If the code is as above, it will give you a vector >> with that layout. If not >> it should be trivial to make a small code and send it. I do this >> everywhere is PETSc, so the >> basic mechanism certainly works. >> >> Thanks, >> >> Matt >> >> >>> obtained by DMGetDefaultGlobalSection(dm, §ionGlobal), and I >>> obtain a PetscSection nothing like the one I see when I call PetscSectionView() >>> on the PetscSection I created above. Does this have anything to do? I >>> tried to compare this strange PetscSection with the one from the original >>> DMNetwork, I call DMGetDefaultGlobalSection(dm, §ionGlobal) before >>> the first line of the snippet above and I get this error message. 
>>> >>> 0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Object is in wrong state >>> [0]PETSC ERROR: DM must have a default PetscSection in order to create a >>> global PetscSection >>> >>> Thanks in advance >>> Miguel >>> >>> >>> On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley >>> wrote: >>> >>>> On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya < >>>> salazardetroya at gmail.com> wrote: >>>> >>>>> Thanks a lot, the partition should be done before setting up the >>>>> section, right? >>>>> >>>> >>>> The partition will be automatic. All you have to do is make the local >>>> section. The DM is already partitioned, >>>> and the Section will inherit that. >>>> >>>> Matt >>>> >>>> >>>>> Miguel >>>>> >>>>> On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < >>>>>> salazardetroya at gmail.com> wrote: >>>>>> >>>>>>> Wouldn't including the edge variables in the global vector make the >>>>>>> code slower? I'm using the global vector in a TS, using one of the explicit >>>>>>> RK schemes. The edge variables would not be updated in the RHSFunction >>>>>>> evaluation. I only change the edge variables in the TSUpdate. If the global >>>>>>> vector had the edge variables, it would be a much larger vector, and all >>>>>>> the vector operations performed by the TS would be slower. Although the >>>>>>> vector F returned by the RHSFunction would be zero in the edge variable >>>>>>> components. I guess that being the vector sparse that would not be a >>>>>>> problem. >>>>>>> >>>>>>> I think I'm more interested in the PetscSection approach because >>>>>>> it might require less modifications in my code. However, I don't know how I >>>>>>> could do this. Maybe something like this? >>>>>>> >>>>>>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>>>>>> PetscSectionSetNumFields(s, 1); >>>>>>> PetscSectionSetFieldComponents(s, 0, 1); >>>>>>> >>>>>>> // Now to set the chart, I pick the edge range >>>>>>> >>>>>>> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >>>>>>> >>>>>>> PetscSectionSetChart(s, eStart, eEnd); >>>>>>> >>>>>>> for(PetscInt e = eStart; c < eEnd; ++e) { >>>>>>> PetscSectionSetDof(s, e, 1); >>>>>>> PetscSectionSetFieldDof(s, e, 1, 1); >>>>>>> >>>>>> >>>>>> It should be PetscSectionSetFieldDof(s, e, 0, 1); >>>>>> >>>>>> >>>>>>> } >>>>>>> PetscSectionSetUp(s); >>>>>>> >>>>>>> Now in the manual I see this: >>>>>>> >>>>>> >>>>>> First you want to do: >>>>>> >>>>>> DMClone(dm, &dmEdge); >>>>>> >>>>>> and then use dmEdge below. >>>>>> >>>>>> >>>>>>> DMSetDefaultSection(dm, s); >>>>>>> DMGetLocalVector(dm, &localVec); >>>>>>> DMGetGlobalVector(dm, &globalVec); >>>>>>> >>>>>>> Setting up the default section in the DM would interfere with the >>>>>>> section already set up with the variables in the vertices? >>>>>>> >>>>>> >>>>>> Yep, thats why you would use a clone. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks a lot for your responses. >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley >>>>>> > wrote: >>>>>>> >>>>>>>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>> >>>>>>>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>>>>>>> For each edge, I extract or modify its corresponding value in a global >>>>>>>>> petsc vector. 
Therefore that vector must have as many components as edges >>>>>>>>> there are in the network. To extract the value in the vector, I use >>>>>>>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>>>>>>> The array that I obtain in VecGetArray() has to be the same size >>>>>>>>> than the edge range. That variable counter starts as 0, so if the array >>>>>>>>> that I obtained in VecGetArray() is x_array, x_array[0] must be >>>>>>>>> the component in the global vector that corresponds with the start edge >>>>>>>>> given in DMNetworkGetEdgeRange() >>>>>>>>> >>>>>>>>> I need that global petsc vector because I will use it in other >>>>>>>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>>>>>>> >>>>>>>> >>>>>>>> This sounds like an assembly operation. The usual paradigm is to >>>>>>>> compute in the local space, and then communicate to get to the global >>>>>>>> space. So you would make a PetscSection that had 1 (or some) unknowns on >>>>>>>> each cell (edge) and then you can use DMCreateGlobal/LocalVector() and >>>>>>>> DMLocalToGlobal() to do this. >>>>>>>> >>>>>>>> Does that make sense? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> Miguel >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley < >>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya < >>>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>>>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>>>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>>>>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>>>>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>>>>>>> components and the second, the 5-9 components of the vector? >>>>>>>>>>> >>>>>>>>>> I think it would help to know what you want to accomplish. This >>>>>>>>>> is how you are proposing to do it.' >>>>>>>>>> >>>>>>>>>> If you just want to put data on edges, DMNetwork has a facility >>>>>>>>>> for that already. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> Matt >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Miguel >>>>>>>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>>>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>>>>>>> >>>>>>>>>>>> Miguel, >>>>>>>>>>>> One possible way is to store the global numbering of any >>>>>>>>>>>> edge/vertex in the "component" attached to it. Once the mesh gets >>>>>>>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>>>>>>> not used for anything. >>>>>>>>>>>> >>>>>>>>>>>> Shri >>>>>>>>>>>> From: Matthew Knepley >>>>>>>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>>>>>>> To: Miguel Angel Salazar de Troya >>>>>>>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>>>>>>> >>>>>>>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de >>>>>>>>>>>> Troya wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Thanks. 
Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>>>>>>> network? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I do not completely understand the question. >>>>>>>>>>>> >>>>>>>>>>>> If you want a partition of the edges, you can use >>>>>>>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>>>>>>> are you trying to do? >>>>>>>>>>>> >>>>>>>>>>>> Matt >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Thanks >>>>>>>>>>>>> Miguel >>>>>>>>>>>>> >>>>>>>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de >>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() returns >>>>>>>>>>>>>>> the local indices for the edge range. Is there any way to obtain the global >>>>>>>>>>>>>>> indices? So if my network has 10 edges, the processor 1 has the 0-4 edges >>>>>>>>>>>>>>> and the processor 2, the 5-9 edges, how can I obtain this information? >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>>>>>>> numbering. Everything is numbered >>>>>>>>>>>>>> locally, and the PetscSF maps local numbers to local numbers >>>>>>>>>>>>>> in order to determine ownership. >>>>>>>>>>>>>> >>>>>>>>>>>>>> If you want to create a global numbering for some reason, >>>>>>>>>>>>>> you can using DMPlexCreatePointNumbering(). >>>>>>>>>>>>>> There are also cell and vertex versions that we use for >>>>>>>>>>>>>> output, so you could do it just for edges as well. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Matt >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>> Miguel >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> What most experimenters take for granted before they begin >>>>>>>>>>>>>> their experiments is infinitely more interesting than any results to which >>>>>>>>>>>>>> their experiments lead. >>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> What most experimenters take for granted before they begin >>>>>>>>>>>> their experiments is infinitely more interesting than any results to which >>>>>>>>>>>> their experiments lead. >>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>> experiments lead. 
>>>>>>>>>> -- Norbert Wiener >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>> Graduate Research Assistant >>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>> (217) 550-2360 >>>>>>>>> salaza11 at illinois.edu >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Miguel Angel Salazar de Troya* >>>>>>> Graduate Research Assistant >>>>>>> Department of Mechanical Science and Engineering >>>>>>> University of Illinois at Urbana-Champaign >>>>>>> (217) 550-2360 >>>>>>> salaza11 at illinois.edu >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> *Miguel Angel Salazar de Troya* >>>>> Graduate Research Assistant >>>>> Department of Mechanical Science and Engineering >>>>> University of Illinois at Urbana-Champaign >>>>> (217) 550-2360 >>>>> salaza11 at illinois.edu >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> *Miguel Angel Salazar de Troya* >>> Graduate Research Assistant >>> Department of Mechanical Science and Engineering >>> University of Illinois at Urbana-Champaign >>> (217) 550-2360 >>> salaza11 at illinois.edu >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pf.c Type: text/x-csrc Size: 23826 bytes Desc: not available URL: From mail at saschaschnepp.net Wed Feb 25 10:44:29 2015 From: mail at saschaschnepp.net (Sascha Schnepp) Date: Wed, 25 Feb 2015 17:44:29 +0100 Subject: [petsc-users] Memory leak in PetscRandom? Message-ID: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> Hello, when I run ksp/ksp/examples/tutorials/ex2 through valgrind with random exact vector enabled (-random_exact_sol) it shows some lost memory. Patrick Sanan discovered this playing around with random positions of multiple inclusions for ex43 but that is in a fork/branch of his. The part of the valgrind output concerning the memory loss for ex2 with -random_exact_sol is identical. 
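For context, the -random_exact_sol path in ex2 only exercises the standard PetscRandom lifecycle, roughly the following (a paraphrase, not a verbatim copy of the example; u is the exact-solution vector):

    PetscRandom rctx;
    PetscRandomCreate(PETSC_COMM_WORLD, &rctx);
    PetscRandomSetFromOptions(rctx);
    VecSetRandom(u, rctx);
    PetscRandomDestroy(&rctx);

The frames in the valgrind report further down point into atexit/emutls machinery in system and MPI libraries rather than into PetscMalloc(), which is consistent with a library-initialization artifact rather than a PetscRandom leak, though that is a reading of the trace, not a confirmed diagnosis.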
Cheers, Sascha sascha at geop-304 ?/ksp/examples/tutorials [master ?333|?19] 02/25/15 [17:29:26] $ valgrind --leak-check=full --dsymutil=yes ./ex2 ./ex2 -ksp_monitor_short -m 5 -n 5 -ksp_gmres_cgs_refinement_type refine_always -random_exact_sol==68417== Memcheck, a memory error detector ==68417== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al. ==68417== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info ==68417== Command: ./ex2 ./ex2 -ksp_monitor_short -m 5 -n 5 -ksp_gmres_cgs_refinement_type refine_always -random_exact_sol ==68417== --68417-- run: /usr/bin/dsymutil "./ex2" --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpifort.12.dylib" --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib" --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpi.12.dylib" --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libpmpi.12.dylib" 0 KSP Residual norm 2.28401 1 KSP Residual norm 0.541581 2 KSP Residual norm 0.114601 3 KSP Residual norm 0.0109825 4 KSP Residual norm 0.00112854 5 KSP Residual norm 8.41066e-05 Norm of error 9.07246e-05 iterations 5 ==68417== ==68417== HEAP SUMMARY: ==68417== in use at exit: 44,377 bytes in 387 blocks ==68417== total heap usage: 1,999 allocs, 1,612 frees, 328,837 bytes allocated ==68417== ==68417== 1,060 bytes in 1 blocks are possibly lost in loss record 85 of 95 ==68417== at 0x66BB: malloc (vg_replace_malloc.c:300) ==68417== by 0x234DFC3: __emutls_get_address (in /opt/local/lib/libgcc/libgcc_s.1.dylib) ==68417== ==68417== 2,080 (1,040 direct, 1,040 indirect) bytes in 1 blocks are definitely lost in loss record 92 of 95 ==68417== at 0x66BB: malloc (vg_replace_malloc.c:300) ==68417== by 0x25175AE: atexit_register (in /usr/lib/system/libsystem_c.dylib) ==68417== by 0x25176E9: __cxa_atexit (in /usr/lib/system/libsystem_c.dylib) ==68417== by 0x1E3CC27: _GLOBAL__sub_I_initcxx.cxx (initcxx.cxx:110) ==68417== by 0x7FFF5FC3D15F: ??? ==68417== by 0x1E3C28F: MPI::Datatype::Get_name(char*, int&) const (in /Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib) ==68417== by 0x10480D61F: ??? ==68417== by 0x7FFF5FC11C2D: ImageLoaderMachO::doModInitFunctions(ImageLoader::LinkContext const&) (in /usr/lib/dyld) ==68417== by 0x7FFF5FC3D15F: ??? ==68417== by 0x100000011: ??? (in ./ex2) ==68417== by 0x1E35297: ??? (in /Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib) ==68417== by 0x1E3555F: ??? (in /Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib) ==68417== ==68417== LEAK SUMMARY: ==68417== definitely lost: 1,040 bytes in 1 blocks ==68417== indirectly lost: 1,040 bytes in 1 blocks ==68417== possibly lost: 1,060 bytes in 1 blocks ==68417== still reachable: 5,318 bytes in 15 blocks ==68417== suppressed: 35,919 bytes in 369 blocks ==68417== Reachable blocks (those to which a pointer was found) are not shown. ==68417== To see them, rerun with: --leak-check=full --show-leak-kinds=all ==68417== ==68417== For counts of detected and suppressed errors, rerun with: -v ==68417== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 17 from 17) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 3786789 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 97340 bytes Desc: not available URL: From mfadams at lbl.gov Wed Feb 25 10:54:28 2015 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 25 Feb 2015 11:54:28 -0500 Subject: [petsc-users] Solving multiple linear systems with similar matrix sequentially In-Reply-To: <67908502-6832-463B-A4FD-64AF0AC99AE6@mcs.anl.gov> References: <7030ED15-A93B-403C-BE28-DEF842F1941D@mcs.anl.gov> <1378912628.4284286.1424690285600.JavaMail.yahoo@mail.yahoo.com> <27A5BEE5-8058-4177-AB68-CC70ACEDB3A4@mcs.anl.gov> <87oaokgenj.fsf@jedbrown.org> <67908502-6832-463B-A4FD-64AF0AC99AE6@mcs.anl.gov> Message-ID: > > > > > > The RAP is often more than 50% of PCSetUp, so this might not save much. > > Hee,hee in some of my recent runs of GAMG the RAP was less than 25% of > the time. Thus skipping the other portions could really pay off.* > > This percentage is very problem dependant. 3D elasticity is higher, 2D scalar is lower. > Barry > > * this could just be because the RAP portion has been optimized much more > than the other parts but ... > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Wed Feb 25 10:56:58 2015 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Wed, 25 Feb 2015 16:56:58 +0000 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: Message-ID: Matt, Thanks for debugging. I'll try to fix this tomorrow. Shri From: Matthew Knepley > Date: Wed, 25 Feb 2015 10:12:43 -0600 To: Shri > Cc: Miguel Angel Salazar de Troya >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Wed, Feb 25, 2015 at 9:59 AM, Abhyankar, Shrirang G. > wrote: Miguel, I'm a bit tied up today. I'll try to debug this issue tomorrow and get back to you. The problem is the way that DMNetwork is using the default section. When DMCreateGlobalVec() is called, it uses the default section for its included network->plex, but DMSetDefaultSection() sets the section associated with network. This is inconsistent. I am sending the code which I hacked to show the correct result by using the private header. I think that 1) The underlying Plex should be exposed by DMNetworkGetPlex() 2) The default section business should be made consistent Thanks, Matt Thanks, Shri From: Miguel Angel Salazar de Troya > Date: Wed, 25 Feb 2015 08:48:10 -0600 To: Matthew Knepley > Cc: Shri >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel I modified the DMNetwork example to include the new DM with the modified section. It has the same problems. Please find attached the code to this email. Thanks On Tue, Feb 24, 2015 at 6:49 PM, Matthew Knepley > wrote: On Tue, Feb 24, 2015 at 6:42 PM, Miguel Angel Salazar de Troya > wrote: I implemented the code as agreed, but I don't get the results I expected. When I create the vector with DMCreateGlobalVector(), I obtain a vector with a layout similar to the original DMNetwork, instead of the cloned network with the new PetscSection. 
The code is as follows: DMClone(dm, &dmEdge); PetscSectionCreate(PETSC_COMM_WORLD, &s); PetscSectionSetNumFields(s, 1); PetscSectionSetFieldComponents(s, 0, 1); // Now to set the chart, I pick the edge range DMNetworkGetEdgeRange(dmEdge, & eStart, & eEnd) PetscSectionSetChart(s, eStart, eEnd); for(PetscInt e = eStart; c < eEnd; ++e) { PetscSectionSetDof(s, e, 1); PetscSectionSetFieldDof(s, e, 0, 1); } PetscSectionSetUp(s); DMSetDefaultSection(dmEdge s); DMCreateGlobalVector(dmEdge, &globalVec); When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the debugger, in the function DMCreateSubDM_Section_Private() I call PetscSectionView() on the section I have no idea why you would be in DMCreateSubDM(). Just view globalVec. If the code is as above, it will give you a vector with that layout. If not it should be trivial to make a small code and send it. I do this everywhere is PETSc, so the basic mechanism certainly works. Thanks, Matt obtained by DMGetDefaultGlobalSection(dm, §ionGlobal), and I obtain a PetscSection nothing like the one I see when I call PetscSectionView() on the PetscSection I created above. Does this have anything to do? I tried to compare this strange PetscSection with the one from the original DMNetwork, I call DMGetDefaultGlobalSection(dm, §ionGlobal) before the first line of the snippet above and I get this error message. 0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: DM must have a default PetscSection in order to create a global PetscSection Thanks in advance Miguel On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya > wrote: Thanks a lot, the partition should be done before setting up the section, right? The partition will be automatic. All you have to do is make the local section. The DM is already partitioned, and the Section will inherit that. Matt Miguel On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya > wrote: Wouldn't including the edge variables in the global vector make the code slower? I'm using the global vector in a TS, using one of the explicit RK schemes. The edge variables would not be updated in the RHSFunction evaluation. I only change the edge variables in the TSUpdate. If the global vector had the edge variables, it would be a much larger vector, and all the vector operations performed by the TS would be slower. Although the vector F returned by the RHSFunction would be zero in the edge variable components. I guess that being the vector sparse that would not be a problem. I think I'm more interested in the PetscSection approach because it might require less modifications in my code. However, I don't know how I could do this. Maybe something like this? PetscSectionCreate(PETSC_COMM_WORLD, &s); PetscSectionSetNumFields(s, 1); PetscSectionSetFieldComponents(s, 0, 1); // Now to set the chart, I pick the edge range DMNetworkGetEdgeRange(dm, & eStart, & eEnd PetscSectionSetChart(s, eStart, eEnd); for(PetscInt e = eStart; c < eEnd; ++e) { PetscSectionSetDof(s, e, 1); PetscSectionSetFieldDof(s, e, 1, 1); It should be PetscSectionSetFieldDof(s, e, 0, 1); } PetscSectionSetUp(s); Now in the manual I see this: First you want to do: DMClone(dm, &dmEdge); and then use dmEdge below. 
DMSetDefaultSection(dm, s); DMGetLocalVector(dm, &localVec); DMGetGlobalVector(dm, &globalVec); Setting up the default section in the DM would interfere with the section already set up with the variables in the vertices? Yep, thats why you would use a clone. Thanks, Matt Thanks a lot for your responses. On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya > wrote: I'm iterating through local edges given in DMNetworkGetEdgeRange(). For each edge, I extract or modify its corresponding value in a global petsc vector. Therefore that vector must have as many components as edges there are in the network. To extract the value in the vector, I use VecGetArray() and a variable counter that is incremented in each iteration. The array that I obtain in VecGetArray() has to be the same size than the edge range. That variable counter starts as 0, so if the array that I obtained in VecGetArray() is x_array, x_array[0] must be the component in the global vector that corresponds with the start edge given in DMNetworkGetEdgeRange() I need that global petsc vector because I will use it in other operations, it's not just data. Sorry for the confusion. Thanks in advance. This sounds like an assembly operation. The usual paradigm is to compute in the local space, and then communicate to get to the global space. So you would make a PetscSection that had 1 (or some) unknowns on each cell (edge) and then you can use DMCreateGlobal/LocalVector() and DMLocalToGlobal() to do this. Does that make sense? Thanks, Matt Miguel On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley > wrote: On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya > wrote: Thanks, that will help me. Now what I would like to have is the following: if I have two processors and ten edges, the partitioning results in the first processor having the edges 0-4 and the second processor, the edges 5-9. I also have a global vector with as many components as edges, 10. How can I partition it so the first processor also has the 0-4 components and the second, the 5-9 components of the vector? I think it would help to know what you want to accomplish. This is how you are proposing to do it.' If you just want to put data on edges, DMNetwork has a facility for that already. Thanks, Matt Miguel On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." > wrote: Miguel, One possible way is to store the global numbering of any edge/vertex in the "component" attached to it. Once the mesh gets partitioned, the components are also distributed so you can easily retrieve the global number of any edge/vertex by accessing its component. This is what is done in the DMNetwork example pf.c although the global numbering is not used for anything. Shri From: Matthew Knepley > Date: Mon, 23 Feb 2015 07:54:34 -0600 To: Miguel Angel Salazar de Troya > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de Troya > wrote: Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I use it to partition a vector with as many components as edges I have in my network? I do not completely understand the question. If you want a partition of the edges, you can use DMPlexCreatePartition() and its friend DMPlexDistribute(). What are you trying to do? 
Matt Thanks Miguel On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley > wrote: On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de Troya > wrote: Hi I noticed that the routine DMNetworkGetEdgeRange() returns the local indices for the edge range. Is there any way to obtain the global indices? So if my network has 10 edges, the processor 1 has the 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this information? One of the points of DMPlex is we do not require a global numbering. Everything is numbered locally, and the PetscSF maps local numbers to local numbers in order to determine ownership. If you want to create a global numbering for some reason, you can using DMPlexCreatePointNumbering(). There are also cell and vertex versions that we use for output, so you could do it just for edges as well. Thanks, Matt Thanks Miguel -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -- Miguel Angel Salazar de Troya Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Wed Feb 25 11:31:28 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Wed, 25 Feb 2015 11:31:28 -0600 Subject: [petsc-users] DMNetworkGetEdgeRange() in parallel In-Reply-To: References: Message-ID: Now it works, thanks a lot! Miguel On Wed, Feb 25, 2015 at 10:12 AM, Matthew Knepley wrote: > On Wed, Feb 25, 2015 at 9:59 AM, Abhyankar, Shrirang G. < > abhyshr at mcs.anl.gov> wrote: > >> Miguel, >> I'm a bit tied up today. I'll try to debug this issue tomorrow and get >> back to you. >> > > The problem is the way that DMNetwork is using the default section. When > DMCreateGlobalVec() is called, > it uses the default section for its included network->plex, but > DMSetDefaultSection() sets the section associated > with network. This is inconsistent. > > I am sending the code which I hacked to show the correct result by using > the private header. > > I think that > > 1) The underlying Plex should be exposed by DMNetworkGetPlex() > > 2) The default section business should be made consistent > > Thanks, > > Matt > > >> Thanks, >> Shri >> >> From: Miguel Angel Salazar de Troya >> Date: Wed, 25 Feb 2015 08:48:10 -0600 >> To: Matthew Knepley >> Cc: Shri , "petsc-users at mcs.anl.gov" < >> petsc-users at mcs.anl.gov> >> >> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >> >> I modified the DMNetwork example to include the new DM with the >> modified section. It has the same problems. Please find attached the code >> to this email. >> >> Thanks >> >> On Tue, Feb 24, 2015 at 6:49 PM, Matthew Knepley >> wrote: >> >>> On Tue, Feb 24, 2015 at 6:42 PM, Miguel Angel Salazar de Troya < >>> salazardetroya at gmail.com> wrote: >>> >>>> I implemented the code as agreed, but I don't get the results I >>>> expected. When I create the vector with DMCreateGlobalVector(), I obtain a >>>> vector with a layout similar to the original DMNetwork, instead of the >>>> cloned network with the new PetscSection. The code is as follows: >>>> >>>> DMClone(dm, &dmEdge); >>>> >>>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>>> PetscSectionSetNumFields(s, 1); >>>> PetscSectionSetFieldComponents(s, 0, 1); >>>> >>>> // Now to set the chart, I pick the edge range >>>> >>>> DMNetworkGetEdgeRange(dmEdge, & eStart, & eEnd) >>>> >>>> PetscSectionSetChart(s, eStart, eEnd); >>>> >>>> for(PetscInt e = eStart; c < eEnd; ++e) { >>>> PetscSectionSetDof(s, e, 1); >>>> PetscSectionSetFieldDof(s, e, 0, 1); >>>> } >>>> PetscSectionSetUp(s); >>>> >>>> DMSetDefaultSection(dmEdge s); >>>> DMCreateGlobalVector(dmEdge, &globalVec); >>>> >>>> When I get into DMCreateGlobalVector(dmEdge, &globalVec) in the >>>> debugger, in the function DMCreateSubDM_Section_Private() I call >>>> PetscSectionView() on the section >>>> >>> >>> I have no idea why you would be in DMCreateSubDM(). >>> >>> Just view globalVec. If the code is as above, it will give you a >>> vector with that layout. If not >>> it should be trivial to make a small code and send it. 
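Matt's "just view globalVec" suggestion amounts to a couple of calls. A small layout check, with dmEdge and globalVec as in the earlier snippet, might look roughly like this (illustrative only):

  PetscMPIInt rank;
  PetscInt    rstart, rend, eStart, eEnd;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = DMNetworkGetEdgeRange(dmEdge, &eStart, &eEnd);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(globalVec, &rstart, &rend);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "[%d] owns entries %D..%D for local edges %D..%D\n",
                     rank, rstart, rend, eStart, eEnd);CHKERRQ(ierr);
  ierr = VecView(globalVec, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);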
I do this >>> everywhere is PETSc, so the >>> basic mechanism certainly works. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> obtained by DMGetDefaultGlobalSection(dm, §ionGlobal), and I >>>> obtain a PetscSection nothing like the one I see when I call PetscSectionView() >>>> on the PetscSection I created above. Does this have anything to do? I >>>> tried to compare this strange PetscSection with the one from the original >>>> DMNetwork, I call DMGetDefaultGlobalSection(dm, §ionGlobal) before >>>> the first line of the snippet above and I get this error message. >>>> >>>> 0]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [0]PETSC ERROR: Object is in wrong state >>>> [0]PETSC ERROR: DM must have a default PetscSection in order to create >>>> a global PetscSection >>>> >>>> Thanks in advance >>>> Miguel >>>> >>>> >>>> On Mon, Feb 23, 2015 at 3:24 PM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Feb 23, 2015 at 2:15 PM, Miguel Angel Salazar de Troya < >>>>> salazardetroya at gmail.com> wrote: >>>>> >>>>>> Thanks a lot, the partition should be done before setting up the >>>>>> section, right? >>>>>> >>>>> >>>>> The partition will be automatic. All you have to do is make the >>>>> local section. The DM is already partitioned, >>>>> and the Section will inherit that. >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Miguel >>>>>> >>>>>> On Mon, Feb 23, 2015 at 2:05 PM, Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> On Mon, Feb 23, 2015 at 1:40 PM, Miguel Angel Salazar de Troya < >>>>>>> salazardetroya at gmail.com> wrote: >>>>>>> >>>>>>>> Wouldn't including the edge variables in the global vector make the >>>>>>>> code slower? I'm using the global vector in a TS, using one of the explicit >>>>>>>> RK schemes. The edge variables would not be updated in the RHSFunction >>>>>>>> evaluation. I only change the edge variables in the TSUpdate. If the global >>>>>>>> vector had the edge variables, it would be a much larger vector, and all >>>>>>>> the vector operations performed by the TS would be slower. Although the >>>>>>>> vector F returned by the RHSFunction would be zero in the edge variable >>>>>>>> components. I guess that being the vector sparse that would not be a >>>>>>>> problem. >>>>>>>> >>>>>>>> I think I'm more interested in the PetscSection approach because >>>>>>>> it might require less modifications in my code. However, I don't know how I >>>>>>>> could do this. Maybe something like this? >>>>>>>> >>>>>>>> PetscSectionCreate(PETSC_COMM_WORLD, &s); >>>>>>>> PetscSectionSetNumFields(s, 1); >>>>>>>> PetscSectionSetFieldComponents(s, 0, 1); >>>>>>>> >>>>>>>> // Now to set the chart, I pick the edge range >>>>>>>> >>>>>>>> DMNetworkGetEdgeRange(dm, & eStart, & eEnd >>>>>>>> >>>>>>>> PetscSectionSetChart(s, eStart, eEnd); >>>>>>>> >>>>>>>> for(PetscInt e = eStart; c < eEnd; ++e) { >>>>>>>> PetscSectionSetDof(s, e, 1); >>>>>>>> PetscSectionSetFieldDof(s, e, 1, 1); >>>>>>>> >>>>>>> >>>>>>> It should be PetscSectionSetFieldDof(s, e, 0, 1); >>>>>>> >>>>>>> >>>>>>>> } >>>>>>>> PetscSectionSetUp(s); >>>>>>>> >>>>>>>> Now in the manual I see this: >>>>>>>> >>>>>>> >>>>>>> First you want to do: >>>>>>> >>>>>>> DMClone(dm, &dmEdge); >>>>>>> >>>>>>> and then use dmEdge below. 
>>>>>>> >>>>>>> >>>>>>>> DMSetDefaultSection(dm, s); >>>>>>>> DMGetLocalVector(dm, &localVec); >>>>>>>> DMGetGlobalVector(dm, &globalVec); >>>>>>>> >>>>>>>> Setting up the default section in the DM would interfere with the >>>>>>>> section already set up with the variables in the vertices? >>>>>>>> >>>>>>> >>>>>>> Yep, thats why you would use a clone. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Thanks a lot for your responses. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Feb 23, 2015 at 11:37 AM, Matthew Knepley < >>>>>>>> knepley at gmail.com> wrote: >>>>>>>> >>>>>>>>> On Mon, Feb 23, 2015 at 9:27 AM, Miguel Angel Salazar de Troya < >>>>>>>>> salazardetroya at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> I'm iterating through local edges given in DMNetworkGetEdgeRange(). >>>>>>>>>> For each edge, I extract or modify its corresponding value in a global >>>>>>>>>> petsc vector. Therefore that vector must have as many components as edges >>>>>>>>>> there are in the network. To extract the value in the vector, I use >>>>>>>>>> VecGetArray() and a variable counter that is incremented in each iteration. >>>>>>>>>> The array that I obtain in VecGetArray() has to be the same size >>>>>>>>>> than the edge range. That variable counter starts as 0, so if the array >>>>>>>>>> that I obtained in VecGetArray() is x_array, x_array[0] must be >>>>>>>>>> the component in the global vector that corresponds with the start edge >>>>>>>>>> given in DMNetworkGetEdgeRange() >>>>>>>>>> >>>>>>>>>> I need that global petsc vector because I will use it in other >>>>>>>>>> operations, it's not just data. Sorry for the confusion. Thanks in advance. >>>>>>>>>> >>>>>>>>> >>>>>>>>> This sounds like an assembly operation. The usual paradigm is to >>>>>>>>> compute in the local space, and then communicate to get to the global >>>>>>>>> space. So you would make a PetscSection that had 1 (or some) unknowns on >>>>>>>>> each cell (edge) and then you can use DMCreateGlobal/LocalVector() and >>>>>>>>> DMLocalToGlobal() to do this. >>>>>>>>> >>>>>>>>> Does that make sense? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> Matt >>>>>>>>> >>>>>>>>> >>>>>>>>>> Miguel >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Feb 23, 2015 at 9:09 AM, Matthew Knepley < >>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> On Mon, Feb 23, 2015 at 8:42 AM, Miguel Angel Salazar de Troya >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> Thanks, that will help me. Now what I would like to have is the >>>>>>>>>>>> following: if I have two processors and ten edges, the partitioning results >>>>>>>>>>>> in the first processor having the edges 0-4 and the second processor, the >>>>>>>>>>>> edges 5-9. I also have a global vector with as many components as edges, >>>>>>>>>>>> 10. How can I partition it so the first processor also has the 0-4 >>>>>>>>>>>> components and the second, the 5-9 components of the vector? >>>>>>>>>>>> >>>>>>>>>>> I think it would help to know what you want to accomplish. >>>>>>>>>>> This is how you are proposing to do it.' >>>>>>>>>>> >>>>>>>>>>> If you just want to put data on edges, DMNetwork has a >>>>>>>>>>> facility for that already. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Miguel >>>>>>>>>>>> On Feb 23, 2015 8:08 AM, "Abhyankar, Shrirang G." < >>>>>>>>>>>> abhyshr at mcs.anl.gov> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Miguel, >>>>>>>>>>>>> One possible way is to store the global numbering of any >>>>>>>>>>>>> edge/vertex in the "component" attached to it. 
Once the mesh gets >>>>>>>>>>>>> partitioned, the components are also distributed so you can easily retrieve >>>>>>>>>>>>> the global number of any edge/vertex by accessing its component. This is >>>>>>>>>>>>> what is done in the DMNetwork example pf.c although the global numbering is >>>>>>>>>>>>> not used for anything. >>>>>>>>>>>>> >>>>>>>>>>>>> Shri >>>>>>>>>>>>> From: Matthew Knepley >>>>>>>>>>>>> Date: Mon, 23 Feb 2015 07:54:34 -0600 >>>>>>>>>>>>> To: Miguel Angel Salazar de Troya >>>>>>>>>>>>> Cc: "petsc-users at mcs.anl.gov" >>>>>>>>>>>>> Subject: Re: [petsc-users] DMNetworkGetEdgeRange() in parallel >>>>>>>>>>>>> >>>>>>>>>>>>> On Sun, Feb 22, 2015 at 3:59 PM, Miguel Angel Salazar de >>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks. Once I obtain that Index Set with the routine DMPlexCreateCellNumbering() >>>>>>>>>>>>>> (I assume that the edges in DMNetwork correspond to cells in DMPlex) can I >>>>>>>>>>>>>> use it to partition a vector with as many components as edges I have in my >>>>>>>>>>>>>> network? >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> I do not completely understand the question. >>>>>>>>>>>>> >>>>>>>>>>>>> If you want a partition of the edges, you can use >>>>>>>>>>>>> DMPlexCreatePartition() and its friend DMPlexDistribute(). What >>>>>>>>>>>>> are you trying to do? >>>>>>>>>>>>> >>>>>>>>>>>>> Matt >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>> Miguel >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 12:15 PM, Matthew Knepley < >>>>>>>>>>>>>> knepley at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 11:01 AM, Miguel Angel Salazar de >>>>>>>>>>>>>>> Troya wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I noticed that the routine DMNetworkGetEdgeRange() >>>>>>>>>>>>>>>> returns the local indices for the edge range. Is there any way to obtain >>>>>>>>>>>>>>>> the global indices? So if my network has 10 edges, the processor 1 has the >>>>>>>>>>>>>>>> 0-4 edges and the processor 2, the 5-9 edges, how can I obtain this >>>>>>>>>>>>>>>> information? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> One of the points of DMPlex is we do not require a global >>>>>>>>>>>>>>> numbering. Everything is numbered >>>>>>>>>>>>>>> locally, and the PetscSF maps local numbers to local numbers >>>>>>>>>>>>>>> in order to determine ownership. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> If you want to create a global numbering for some reason, >>>>>>>>>>>>>>> you can using DMPlexCreatePointNumbering(). >>>>>>>>>>>>>>> There are also cell and vertex versions that we use for >>>>>>>>>>>>>>> output, so you could do it just for edges as well. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Matt >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>> Miguel >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> What most experimenters take for granted before they begin >>>>>>>>>>>>>>> their experiments is infinitely more interesting than any results to which >>>>>>>>>>>>>>> their experiments lead. 
>>>>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>>>>>> Graduate Research Assistant >>>>>>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>>>>>> (217) 550-2360 >>>>>>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> What most experimenters take for granted before they begin >>>>>>>>>>>>> their experiments is infinitely more interesting than any results to which >>>>>>>>>>>>> their experiments lead. >>>>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>>> experiments lead. >>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>>>> Graduate Research Assistant >>>>>>>>>> Department of Mechanical Science and Engineering >>>>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>>>> (217) 550-2360 >>>>>>>>>> salaza11 at illinois.edu >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>> experiments lead. >>>>>>>>> -- Norbert Wiener >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Miguel Angel Salazar de Troya* >>>>>>>> Graduate Research Assistant >>>>>>>> Department of Mechanical Science and Engineering >>>>>>>> University of Illinois at Urbana-Champaign >>>>>>>> (217) 550-2360 >>>>>>>> salaza11 at illinois.edu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Miguel Angel Salazar de Troya* >>>>>> Graduate Research Assistant >>>>>> Department of Mechanical Science and Engineering >>>>>> University of Illinois at Urbana-Champaign >>>>>> (217) 550-2360 >>>>>> salaza11 at illinois.edu >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> *Miguel Angel Salazar de Troya* >>>> Graduate Research Assistant >>>> Department of Mechanical Science and Engineering >>>> University of Illinois at Urbana-Champaign >>>> (217) 550-2360 >>>> salaza11 at illinois.edu >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> *Miguel Angel Salazar de Troya* >> Graduate Research Assistant >> Department of Mechanical Science and Engineering >> University of Illinois at Urbana-Champaign >> (217) 550-2360 >> salaza11 at illinois.edu >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 25 12:14:25 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 25 Feb 2015 18:14:25 +0000 Subject: [petsc-users] about MATSOR Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is there a way to do this? I mean, what I can think of is to get the column vectors of B by ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); and apply Gauss seidel from A to v: ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. Best, Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 25 13:31:52 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Feb 2015 13:31:52 -0600 Subject: [petsc-users] Memory leak in PetscRandom? In-Reply-To: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> Message-ID: <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> Thanks for the report. I ran the example with those options under Linux with valgrind and found no memory leaks, I cannot run valgrind on my Mac. I suspect the issue is related to some internal library memory problems on the Apple and is not in the PETSc library or that example. Barry > On Feb 25, 2015, at 10:44 AM, Sascha Schnepp wrote: > > Hello, > > when I run ksp/ksp/examples/tutorials/ex2 through valgrind with random exact vector enabled (-random_exact_sol) it shows some lost memory. Patrick Sanan discovered this playing around with random positions of multiple inclusions for ex43 but that is in a fork/branch of his. The part of the valgrind output concerning the memory loss for ex2 with -random_exact_sol is identical. > > Cheers, > Sascha > > sascha at geop-304 ?/ksp/examples/tutorials [master ?333|?19] > 02/25/15 [17:29:26] $ valgrind --leak-check=full --dsymutil=yes ./ex2 ./ex2 -ksp_monitor_short -m 5 -n 5 -ksp_gmres_cgs_refinement_type refine_always -random_exact_sol==68417== Memcheck, a memory error detector > ==68417== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al. 
> ==68417== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info > ==68417== Command: ./ex2 ./ex2 -ksp_monitor_short -m 5 -n 5 -ksp_gmres_cgs_refinement_type refine_always -random_exact_sol > ==68417== > --68417-- run: /usr/bin/dsymutil "./ex2" > --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpifort.12.dylib" > --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib" > --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpi.12.dylib" > --68417-- run: /usr/bin/dsymutil "/Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libpmpi.12.dylib" > 0 KSP Residual norm 2.28401 > 1 KSP Residual norm 0.541581 > 2 KSP Residual norm 0.114601 > 3 KSP Residual norm 0.0109825 > 4 KSP Residual norm 0.00112854 > 5 KSP Residual norm 8.41066e-05 > Norm of error 9.07246e-05 iterations 5 > ==68417== > ==68417== HEAP SUMMARY: > ==68417== in use at exit: 44,377 bytes in 387 blocks > ==68417== total heap usage: 1,999 allocs, 1,612 frees, 328,837 bytes allocated > ==68417== > ==68417== 1,060 bytes in 1 blocks are possibly lost in loss record 85 of 95 > ==68417== at 0x66BB: malloc (vg_replace_malloc.c:300) > ==68417== by 0x234DFC3: __emutls_get_address (in /opt/local/lib/libgcc/libgcc_s.1.dylib) > ==68417== > ==68417== 2,080 (1,040 direct, 1,040 indirect) bytes in 1 blocks are definitely lost in loss record 92 of 95 > ==68417== at 0x66BB: malloc (vg_replace_malloc.c:300) > ==68417== by 0x25175AE: atexit_register (in /usr/lib/system/libsystem_c.dylib) > ==68417== by 0x25176E9: __cxa_atexit (in /usr/lib/system/libsystem_c.dylib) > ==68417== by 0x1E3CC27: _GLOBAL__sub_I_initcxx.cxx (initcxx.cxx:110) > ==68417== by 0x7FFF5FC3D15F: ??? > ==68417== by 0x1E3C28F: MPI::Datatype::Get_name(char*, int&) const (in /Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib) > ==68417== by 0x10480D61F: ??? > ==68417== by 0x7FFF5FC11C2D: ImageLoaderMachO::doModInitFunctions(ImageLoader::LinkContext const&) (in /usr/lib/dyld) > ==68417== by 0x7FFF5FC3D15F: ??? > ==68417== by 0x100000011: ??? (in ./ex2) > ==68417== by 0x1E35297: ??? (in /Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib) > ==68417== by 0x1E3555F: ??? (in /Users/sascha/Documents/codes/PETSc/petsc-dev/arch-osx-master-debug/lib/libmpicxx.12.dylib) > ==68417== > ==68417== LEAK SUMMARY: > ==68417== definitely lost: 1,040 bytes in 1 blocks > ==68417== indirectly lost: 1,040 bytes in 1 blocks > ==68417== possibly lost: 1,060 bytes in 1 blocks > ==68417== still reachable: 5,318 bytes in 15 blocks > ==68417== suppressed: 35,919 bytes in 369 blocks > ==68417== Reachable blocks (those to which a pointer was found) are not shown. > ==68417== To see them, rerun with: --leak-check=full --show-leak-kinds=all > ==68417== > ==68417== For counts of detected and suppressed errors, rerun with: -v > ==68417== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 17 from 17) > > From bsmith at mcs.anl.gov Wed Feb 25 13:48:39 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Feb 2015 13:48:39 -0600 Subject: [petsc-users] about MATSOR In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: Let me try to understand what you wish to do. 
You have a sequential matrix A and a matrix B and you wish to compute the (dense) matrix C where each column of C is obtained by running three iterations of SOR (using the matrix A) with the corresponding column of B as the right hand side for the SOR; using an initial guess of zero to start the SOR? If this is the case then create a seq dense matrix C of the same size as B (it will automatically be initialized with zeros). Say B has n rows VecCreateSeq(PETSC_COMM_SELF,n,&b); VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); PetscScalar *xx,*carray; MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); MatDenseGetArray(C,&carray); loop over columns ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x point to the correct column of entries in the matrix C. > err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); Barry Note you could also do this in parallel but it is kind of bogus be the SOR is run independently on each process so I don't recommend it as useful. > > On Feb 25, 2015, at 12:14 PM, Sun, Hui wrote: > > I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is there a way to do this? > > I mean, what I can think of is to get the column vectors of B by > ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); > > > > and apply Gauss seidel from A to v: > ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); > > > > But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. > > > > Best, > > Hui > From hus003 at ucsd.edu Wed Feb 25 15:02:03 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 25 Feb 2015 21:02:03 +0000 Subject: [petsc-users] about MATSOR In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you, Barry. In fact, my matrix B is sparse. Is it possible to drop the terms less than a certain threshold from C, so that C can be sparse? Another question, there are pc_sor_its and pc_sor_lits in MatSOR. What is local its? Best, Hui ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Wednesday, February 25, 2015 11:48 AM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] about MATSOR Let me try to understand what you wish to do. You have a sequential matrix A and a matrix B and you wish to compute the (dense) matrix C where each column of C is obtained by running three iterations of SOR (using the matrix A) with the corresponding column of B as the right hand side for the SOR; using an initial guess of zero to start the SOR? If this is the case then create a seq dense matrix C of the same size as B (it will automatically be initialized with zeros). Say B has n rows VecCreateSeq(PETSC_COMM_SELF,n,&b); VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); PetscScalar *xx,*carray; MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); MatDenseGetArray(C,&carray); loop over columns ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x point to the correct column of entries in the matrix C. > err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); Barry Note you could also do this in parallel but it is kind of bogus be the SOR is run independently on each process so I don't recommend it as useful. 
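Barry's outline above, filled out into a compilable sketch (untested here): it assumes a sequential AIJ matrix A and a sequential B with n rows, adds the reset/restore calls the outline leaves out, and uses a local iteration count of 1 rather than 0 because MatSOR() requires both the its and lits arguments to be positive. The helper name ColumnSOR is made up.

  #include <petscmat.h>

  /* Sketch: three forward SOR sweeps with A against each column of B,
     each result written straight into the matching column of a new
     sequential dense matrix C. */
  PetscErrorCode ColumnSOR(Mat A, Mat B, Mat *C)
  {
    Vec            b, x;
    PetscScalar   *carray;
    PetscInt       n, ncols, col;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatGetSize(B, &n, &ncols);CHKERRQ(ierr);
    ierr = VecCreateSeq(PETSC_COMM_SELF, n, &b);CHKERRQ(ierr);
    ierr = VecCreateSeqWithArray(PETSC_COMM_SELF, 1, n, NULL, &x);CHKERRQ(ierr);
    ierr = MatCreateSeqDense(PETSC_COMM_SELF, n, ncols, NULL, C);CHKERRQ(ierr);
    ierr = MatDenseGetArray(*C, &carray);CHKERRQ(ierr);
    for (col = 0; col < ncols; ++col) {
      ierr = MatGetColumnVector(B, b, col);CHKERRQ(ierr);
      ierr = VecPlaceArray(x, carray + n*col);CHKERRQ(ierr);   /* x aliases column col of C */
      ierr = MatSOR(A, b, 1.0, SOR_FORWARD_SWEEP | SOR_ZERO_INITIAL_GUESS,
                    0.0, 3, 1, x);CHKERRQ(ierr);               /* 3 global its, 1 local it */
      ierr = VecResetArray(x);CHKERRQ(ierr);
    }
    ierr = MatDenseRestoreArray(*C, &carray);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }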
> > On Feb 25, 2015, at 12:14 PM, Sun, Hui wrote: > > I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is there a way to do this? > > I mean, what I can think of is to get the column vectors of B by > ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); > > > > and apply Gauss seidel from A to v: > ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); > > > > But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. > > > > Best, > > Hui > From bsmith at mcs.anl.gov Wed Feb 25 15:12:17 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Feb 2015 15:12:17 -0600 Subject: [petsc-users] about MATSOR In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> <, <>> <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: > On Feb 25, 2015, at 3:02 PM, Sun, Hui wrote: > > Thank you, Barry. In fact, my matrix B is sparse. Is it possible to drop the terms less than a certain threshold from C, so that C can be sparse? You can do that but it becomes much more complicated and expensive, especially assembling back the global sparse matrix. Why do you want to do this? Are you doing some algebraic multigrid method with smoothed agglomeration or something? If so, this is big projection and it would likely be better to build off some existing tool rather than writing something from scratch. > > Another question, there are pc_sor_its and pc_sor_lits in MatSOR. What is local its? In parallel it does its "global iterations" with communication between each iteration and inside each global iteration it does lits local iterations (without communication between them). (Some people call this hybrid parallel SOR. Sequentially the number of SOR iterations it does is its*lits since there is never any communication between processes since there is only one. Barry > > Best, > Hui > > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Wednesday, February 25, 2015 11:48 AM > To: Sun, Hui > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] about MATSOR > > Let me try to understand what you wish to do. You have a sequential matrix A and a matrix B and you wish to compute the (dense) matrix C where each column of C is obtained by running three iterations of SOR (using the matrix A) with the corresponding column of B as the right hand side for the SOR; using an initial guess of zero to start the SOR? > > If this is the case then create a seq dense matrix C of the same size as B (it will automatically be initialized with zeros). Say B has n rows > > VecCreateSeq(PETSC_COMM_SELF,n,&b); > VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); > PetscScalar *xx,*carray; > MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); > MatDenseGetArray(C,&carray); > loop over columns > ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); > ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x point to the correct column of entries in the matrix C. >> err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); > > > Barry > > Note you could also do this in parallel but it is kind of bogus be the SOR is run independently on each process so I don't recommend it as useful. >> > >> On Feb 25, 2015, at 12:14 PM, Sun, Hui wrote: >> >> I want to do 3 steps of gauss seidel from a Mat A to another Mat B. 
Is there a way to do this? >> >> I mean, what I can think of is to get the column vectors of B by >> ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); >> >> >> >> and apply Gauss seidel from A to v: >> ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); >> >> >> >> But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. >> >> >> >> Best, >> >> Hui >> From jed at jedbrown.org Wed Feb 25 16:09:50 2015 From: jed at jedbrown.org (Jed Brown) Date: Wed, 25 Feb 2015 15:09:50 -0700 Subject: [petsc-users] Memory leak in PetscRandom? In-Reply-To: <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> Message-ID: <87h9u9bq35.fsf@jedbrown.org> Barry Smith writes: > Thanks for the report. I ran the example with those options under > Linux with valgrind and found no memory leaks, I cannot run > valgrind on my Mac. When are you going to get a real operating system? In the mean time, several people have reported that this works: http://ranf.tl/2014/11/28/valgrind-on-mac-os-x-10-10-yosemite/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From knepley at gmail.com Wed Feb 25 16:12:25 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Feb 2015 16:12:25 -0600 Subject: [petsc-users] Memory leak in PetscRandom? In-Reply-To: <87h9u9bq35.fsf@jedbrown.org> References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> <87h9u9bq35.fsf@jedbrown.org> Message-ID: On Wed, Feb 25, 2015 at 4:09 PM, Jed Brown wrote: > Barry Smith writes: > > > Thanks for the report. I ran the example with those options under > > Linux with valgrind and found no memory leaks, I cannot run > > valgrind on my Mac. > > When are you going to get a real operating system? In the mean time, > several people have reported that this works: > > http://ranf.tl/2014/11/28/valgrind-on-mac-os-x-10-10-yosemite/ > It works for me. I can't wait for Jed's suffering when a kid wants to watch PBS Kids on his custom Debian build. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Feb 25 16:15:26 2015 From: jed at jedbrown.org (Jed Brown) Date: Wed, 25 Feb 2015 15:15:26 -0700 Subject: [petsc-users] Memory leak in PetscRandom? In-Reply-To: References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> <87h9u9bq35.fsf@jedbrown.org> Message-ID: <87bnkhbptt.fsf@jedbrown.org> Matthew Knepley writes: > It works for me. I can't wait for Jed's suffering when a kid wants to > watch PBS Kids on his custom Debian build. Touch?, but in fact, PBS Kids plays great on my Linux box. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Wed Feb 25 16:42:42 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Feb 2015 16:42:42 -0600 Subject: [petsc-users] Memory leak in PetscRandom? 
In-Reply-To: <87h9u9bq35.fsf@jedbrown.org> References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> <87h9u9bq35.fsf@jedbrown.org> Message-ID: > On Feb 25, 2015, at 4:09 PM, Jed Brown wrote: > > Barry Smith writes: > >> Thanks for the report. I ran the example with those options under >> Linux with valgrind and found no memory leaks, I cannot run >> valgrind on my Mac. > > When are you going to get a real operating system? In the mean time, > several people have reported that this works: > > http://ranf.tl/2014/11/28/valgrind-on-mac-os-x-10-10-yosemite/ I have a screen running on es so it only takes seconds to use valgrind under linux so I've avoided unsupported valgrind. From hus003 at ucsd.edu Wed Feb 25 16:50:33 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Wed, 25 Feb 2015 22:50:33 +0000 Subject: [petsc-users] about MATSOR In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> <, <>> <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9AB5@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Barry. It is a step in forming my PC matrix for the schur complement. I think I will try something else. Best, Hui ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Wednesday, February 25, 2015 1:12 PM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] about MATSOR > On Feb 25, 2015, at 3:02 PM, Sun, Hui wrote: > > Thank you, Barry. In fact, my matrix B is sparse. Is it possible to drop the terms less than a certain threshold from C, so that C can be sparse? You can do that but it becomes much more complicated and expensive, especially assembling back the global sparse matrix. Why do you want to do this? Are you doing some algebraic multigrid method with smoothed agglomeration or something? If so, this is big projection and it would likely be better to build off some existing tool rather than writing something from scratch. > > Another question, there are pc_sor_its and pc_sor_lits in MatSOR. What is local its? In parallel it does its "global iterations" with communication between each iteration and inside each global iteration it does lits local iterations (without communication between them). (Some people call this hybrid parallel SOR. Sequentially the number of SOR iterations it does is its*lits since there is never any communication between processes since there is only one. Barry > > Best, > Hui > > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Wednesday, February 25, 2015 11:48 AM > To: Sun, Hui > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] about MATSOR > > Let me try to understand what you wish to do. You have a sequential matrix A and a matrix B and you wish to compute the (dense) matrix C where each column of C is obtained by running three iterations of SOR (using the matrix A) with the corresponding column of B as the right hand side for the SOR; using an initial guess of zero to start the SOR? > > If this is the case then create a seq dense matrix C of the same size as B (it will automatically be initialized with zeros). 
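A sketch of the prefix mechanism Barry describes, for the two-discipline situation in the question. The prefixes, matrix names, and option values are made up for illustration, and the operators are assumed to have been assembled elsewhere.

  /* Sketch: one KSP per discipline, each with its own options prefix, so
     the two solves are configured independently at runtime, e.g.
       -flow_ksp_type gmres    -flow_pc_type asm
       -thermal_ksp_type cg    -thermal_pc_type sor                      */
  KSP            kspFlow, kspThermal;
  Mat            Aflow, Athermal;        /* assembled elsewhere */
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &kspFlow);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(kspFlow, "flow_");CHKERRQ(ierr);
  ierr = KSPSetOperators(kspFlow, Aflow, Aflow);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(kspFlow);CHKERRQ(ierr);      /* picks up only -flow_* options */

  ierr = KSPCreate(PETSC_COMM_WORLD, &kspThermal);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(kspThermal, "thermal_");CHKERRQ(ierr);
  ierr = KSPSetOperators(kspThermal, Athermal, Athermal);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(kspThermal);CHKERRQ(ierr);   /* picks up only -thermal_* options */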
Say B has n rows > > VecCreateSeq(PETSC_COMM_SELF,n,&b); > VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); > PetscScalar *xx,*carray; > MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); > MatDenseGetArray(C,&carray); > loop over columns > ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); > ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x point to the correct column of entries in the matrix C. >> err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); > > > Barry > > Note you could also do this in parallel but it is kind of bogus be the SOR is run independently on each process so I don't recommend it as useful. >> > >> On Feb 25, 2015, at 12:14 PM, Sun, Hui wrote: >> >> I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is there a way to do this? >> >> I mean, what I can think of is to get the column vectors of B by >> ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); >> >> >> >> and apply Gauss seidel from A to v: >> ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); >> >> >> >> But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. >> >> >> >> Best, >> >> Hui >> From hus003 at ucsd.edu Wed Feb 25 18:36:30 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Thu, 26 Feb 2015 00:36:30 +0000 Subject: [petsc-users] about MATSOR In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9AB5@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> <, <>> <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU>, , <7501CC2B7BBCC44A92ECEEC316170ECB010E9AB5@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9AED@XMAIL-MBX-BH1.AD.UCSD.EDU> By the way, how do you add up two matrices? Assume A and B are of same size, both sparse and of type Matmpi. I want to perform this operation: C=A+B. What should I do? I haven't found anything related in the user manual. Best, Hui ________________________________________ From: Sun, Hui Sent: Wednesday, February 25, 2015 2:50 PM To: Barry Smith Cc: petsc-users at mcs.anl.gov Subject: RE: [petsc-users] about MATSOR Thank you Barry. It is a step in forming my PC matrix for the schur complement. I think I will try something else. Best, Hui ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Wednesday, February 25, 2015 1:12 PM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] about MATSOR > On Feb 25, 2015, at 3:02 PM, Sun, Hui wrote: > > Thank you, Barry. In fact, my matrix B is sparse. Is it possible to drop the terms less than a certain threshold from C, so that C can be sparse? You can do that but it becomes much more complicated and expensive, especially assembling back the global sparse matrix. Why do you want to do this? Are you doing some algebraic multigrid method with smoothed agglomeration or something? If so, this is big projection and it would likely be better to build off some existing tool rather than writing something from scratch. > > Another question, there are pc_sor_its and pc_sor_lits in MatSOR. What is local its? In parallel it does its "global iterations" with communication between each iteration and inside each global iteration it does lits local iterations (without communication between them). (Some people call this hybrid parallel SOR. 
Sequentially the number of SOR iterations it does is its*lits since there is never any communication between processes since there is only one. Barry > > Best, > Hui > > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Wednesday, February 25, 2015 11:48 AM > To: Sun, Hui > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] about MATSOR > > Let me try to understand what you wish to do. You have a sequential matrix A and a matrix B and you wish to compute the (dense) matrix C where each column of C is obtained by running three iterations of SOR (using the matrix A) with the corresponding column of B as the right hand side for the SOR; using an initial guess of zero to start the SOR? > > If this is the case then create a seq dense matrix C of the same size as B (it will automatically be initialized with zeros). Say B has n rows > > VecCreateSeq(PETSC_COMM_SELF,n,&b); > VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); > PetscScalar *xx,*carray; > MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); > MatDenseGetArray(C,&carray); > loop over columns > ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); > ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x point to the correct column of entries in the matrix C. >> err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); > > > Barry > > Note you could also do this in parallel but it is kind of bogus be the SOR is run independently on each process so I don't recommend it as useful. >> > >> On Feb 25, 2015, at 12:14 PM, Sun, Hui wrote: >> >> I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is there a way to do this? >> >> I mean, what I can think of is to get the column vectors of B by >> ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); >> >> >> >> and apply Gauss seidel from A to v: >> ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); >> >> >> >> But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. >> >> >> >> Best, >> >> Hui >> From knepley at gmail.com Wed Feb 25 19:00:42 2015 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Feb 2015 19:00:42 -0600 Subject: [petsc-users] about MATSOR In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9AED@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E9AB5@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E9AED@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: On Wed, Feb 25, 2015 at 6:36 PM, Sun, Hui wrote: > By the way, how do you add up two matrices? Assume A and B are of same > size, both sparse and of type Matmpi. I want to perform this operation: > C=A+B. What should I do? I haven't found anything related in the user > manual. > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatAXPY.html Matt > Best, > Hui > > > ________________________________________ > From: Sun, Hui > Sent: Wednesday, February 25, 2015 2:50 PM > To: Barry Smith > Cc: petsc-users at mcs.anl.gov > Subject: RE: [petsc-users] about MATSOR > > Thank you Barry. It is a step in forming my PC matrix for the schur > complement. I think I will try something else. 
> > Best, > Hui > > > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Wednesday, February 25, 2015 1:12 PM > To: Sun, Hui > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] about MATSOR > > > On Feb 25, 2015, at 3:02 PM, Sun, Hui wrote: > > > > Thank you, Barry. In fact, my matrix B is sparse. Is it possible to drop > the terms less than a certain threshold from C, so that C can be sparse? > > You can do that but it becomes much more complicated and expensive, > especially assembling back the global sparse matrix. Why do you want to > do this? Are you doing some algebraic multigrid method with smoothed > agglomeration or something? If so, this is big projection and it would > likely be better to build off some existing tool rather than writing > something from scratch. > > > > > Another question, there are pc_sor_its and pc_sor_lits in MatSOR. What > is local its? > > In parallel it does its "global iterations" with communication between > each iteration and inside each global iteration it does lits local > iterations (without communication between them). (Some people call this > hybrid parallel SOR. > > Sequentially the number of SOR iterations it does is its*lits since > there is never any communication between processes since there is only one. > > Barry > > > > > Best, > > Hui > > > > ________________________________________ > > From: Barry Smith [bsmith at mcs.anl.gov] > > Sent: Wednesday, February 25, 2015 11:48 AM > > To: Sun, Hui > > Cc: petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] about MATSOR > > > > Let me try to understand what you wish to do. You have a sequential > matrix A and a matrix B and you wish to compute the (dense) matrix C where > each column of C is obtained by running three iterations of SOR (using the > matrix A) with the corresponding column of B as the right hand side for the > SOR; using an initial guess of zero to start the SOR? > > > > If this is the case then create a seq dense matrix C of the same size > as B (it will automatically be initialized with zeros). Say B has n rows > > > > VecCreateSeq(PETSC_COMM_SELF,n,&b); > > VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); > > PetscScalar *xx,*carray; > > MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); > > MatDenseGetArray(C,&carray); > > loop over columns > > ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); > > ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x > point to the correct column of entries in the matrix C. > >> err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, > 0, x);CHKERRQ(ierr); > > > > > > Barry > > > > Note you could also do this in parallel but it is kind of bogus be the > SOR is run independently on each process so I don't recommend it as useful. > >> > > > >> On Feb 25, 2015, at 12:14 PM, Sun, Hui wrote: > >> > >> I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is > there a way to do this? > >> > >> I mean, what I can think of is to get the column vectors of B by > >> ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); > >> > >> > >> > >> and apply Gauss seidel from A to v: > >> ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, > 3, 0, x);CHKERRQ(ierr); > >> > >> > >> > >> But then I have to form another matrix C whose columns are composed by > v, and I'm not sure how to do that. 
> >> > >> > >> > >> Best, > >> > >> Hui > >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Wed Feb 25 19:09:09 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Thu, 26 Feb 2015 01:09:09 +0000 Subject: [petsc-users] about MATSOR In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9A34@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E9A7B@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E9AB5@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010E9AED@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9B02@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Matt. Hui ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: Wednesday, February 25, 2015 5:00 PM To: Sun, Hui Cc: Barry Smith; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] about MATSOR On Wed, Feb 25, 2015 at 6:36 PM, Sun, Hui > wrote: By the way, how do you add up two matrices? Assume A and B are of same size, both sparse and of type Matmpi. I want to perform this operation: C=A+B. What should I do? I haven't found anything related in the user manual. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatAXPY.html Matt Best, Hui ________________________________________ From: Sun, Hui Sent: Wednesday, February 25, 2015 2:50 PM To: Barry Smith Cc: petsc-users at mcs.anl.gov Subject: RE: [petsc-users] about MATSOR Thank you Barry. It is a step in forming my PC matrix for the schur complement. I think I will try something else. Best, Hui ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Wednesday, February 25, 2015 1:12 PM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] about MATSOR > On Feb 25, 2015, at 3:02 PM, Sun, Hui > wrote: > > Thank you, Barry. In fact, my matrix B is sparse. Is it possible to drop the terms less than a certain threshold from C, so that C can be sparse? You can do that but it becomes much more complicated and expensive, especially assembling back the global sparse matrix. Why do you want to do this? Are you doing some algebraic multigrid method with smoothed agglomeration or something? If so, this is big projection and it would likely be better to build off some existing tool rather than writing something from scratch. > > Another question, there are pc_sor_its and pc_sor_lits in MatSOR. What is local its? In parallel it does its "global iterations" with communication between each iteration and inside each global iteration it does lits local iterations (without communication between them). (Some people call this hybrid parallel SOR. Sequentially the number of SOR iterations it does is its*lits since there is never any communication between processes since there is only one. Barry > > Best, > Hui > > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Wednesday, February 25, 2015 11:48 AM > To: Sun, Hui > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] about MATSOR > > Let me try to understand what you wish to do. 
You have a sequential matrix A and a matrix B and you wish to compute the (dense) matrix C where each column of C is obtained by running three iterations of SOR (using the matrix A) with the corresponding column of B as the right hand side for the SOR; using an initial guess of zero to start the SOR? > > If this is the case then create a seq dense matrix C of the same size as B (it will automatically be initialized with zeros). Say B has n rows > > VecCreateSeq(PETSC_COMM_SELF,n,&b); > VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,NULL,&x); > PetscScalar *xx,*carray; > MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&C); > MatDenseGetArray(C,&carray); > loop over columns > ierr = MatGetColumnVector(B,b,col);CHKERRQ(ierr); > ierr = VecPlaceArray(x,carray + n*col); /* this makes the vec x point to the correct column of entries in the matrix C. >> err = MatSOR(A, b, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); > > > Barry > > Note you could also do this in parallel but it is kind of bogus be the SOR is run independently on each process so I don't recommend it as useful. >> > >> On Feb 25, 2015, at 12:14 PM, Sun, Hui > wrote: >> >> I want to do 3 steps of gauss seidel from a Mat A to another Mat B. Is there a way to do this? >> >> I mean, what I can think of is to get the column vectors of B by >> ierr = MatGetColumnVector(B,v,col);CHKERRQ(ierr); >> >> >> >> and apply Gauss seidel from A to v: >> ierr = MatSOR(A, v, 1, (SOR_FORWARD_SWEEP|SOR_ZERO_INITIAL_GUESS), 0, 3, 0, x);CHKERRQ(ierr); >> >> >> >> But then I have to form another matrix C whose columns are composed by v, and I'm not sure how to do that. >> >> >> >> Best, >> >> Hui >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Thu Feb 26 11:33:41 2015 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Thu, 26 Feb 2015 11:33:41 -0600 Subject: [petsc-users] Partitioning in DMNetwork Message-ID: Hi In DMNetwork we have edges and vertices and we can add an arbitrary number of variables to both edges and vertices. When the partitioning is carried out, is this done according to the number of edges and vertices or the variables? Say for example, that we assigned a very large number of variables to the first edge, a number much greater than the total number of edges or vertices. After the partitioning, the processor that contains that edge with many variables would have a very large portion of the global vector, wouldn't it? This case is hypothetical and something to avoid, but, could it happen? Would the partitioning be made in some other way to avoid this load unbalance? Thanks Miguel -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Thu Feb 26 11:59:15 2015 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 26 Feb 2015 20:59:15 +0300 Subject: [petsc-users] Memory leak in PetscRandom? 
In-Reply-To: <87h9u9bq35.fsf@jedbrown.org> References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> <87h9u9bq35.fsf@jedbrown.org> Message-ID: On 26 February 2015 at 01:09, Jed Brown wrote: > Barry Smith writes: > >> Thanks for the report. I ran the example with those options under >> Linux with valgrind and found no memory leaks, I cannot run >> valgrind on my Mac. > > When are you going to get a real operating system? In the mean time, > several people have reported that this works: > > http://ranf.tl/2014/11/28/valgrind-on-mac-os-x-10-10-yosemite/ When is valgrind going to use a real VCS? ;-) -- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 4332 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 From jed at jedbrown.org Thu Feb 26 12:03:28 2015 From: jed at jedbrown.org (Jed Brown) Date: Thu, 26 Feb 2015 11:03:28 -0700 Subject: [petsc-users] Memory leak in PetscRandom? In-Reply-To: References: <1EB6EB57-D6A7-4532-B95C-985E5E0B4611@saschaschnepp.net> <21BC56AF-3E7D-4003-A2E8-9A9774A870BD@mcs.anl.gov> <87h9u9bq35.fsf@jedbrown.org> Message-ID: <87fv9sa6tr.fsf@jedbrown.org> Lisandro Dalcin writes: > When is valgrind going to use a real VCS? ;-) Good question. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From knepley at gmail.com Thu Feb 26 13:08:56 2015 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 26 Feb 2015 13:08:56 -0600 Subject: [petsc-users] Partitioning in DMNetwork In-Reply-To: References: Message-ID: On Thu, Feb 26, 2015 at 11:33 AM, Miguel Angel Salazar de Troya < salazardetroya at gmail.com> wrote: > Hi > > In DMNetwork we have edges and vertices and we can add an arbitrary number > of variables to both edges and vertices. When the partitioning is carried > out, is this done according to the number of edges and vertices or the > variables? Say for example, that we assigned a very large number of > variables to the first edge, a number much greater than the total number of > edges or vertices. After the partitioning, the processor that contains that > edge with many variables would have a very large portion of the global > vector, wouldn't it? > > This case is hypothetical and something to avoid, but, could it happen? > Would the partitioning be made in some other way to avoid this load > unbalance? > We can do weighted partitioning. We have not done it yet because in my experience the partitioners are quite fragile with respect to the weights and also the weighting has to be very coarse-grained. However, if you have something that really needs it, we can do it. Thanks, Matt > Thanks > Miguel > > -- > *Miguel Angel Salazar de Troya* > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From orxan.shibli at gmail.com Thu Feb 26 22:25:00 2015 From: orxan.shibli at gmail.com (Orxan Shibliyev) Date: Thu, 26 Feb 2015 21:25:00 -0700 Subject: [petsc-users] GMRES stability Message-ID: Hi I tried to solve Ax=b with my own Gauss-Seidel code and Petsc's GMRES. With my GS, for a steady state problem I can set CFL=40 and for unsteady case can set dt=0.1. However, for GMRES I can't set CFL more than 5 and for unsteady case dt more than 0.00001. I need GMRES for parallel computations so I cannot use GS for this purpose. Is there a way to improve the stability of GMRES? From bsmith at mcs.anl.gov Thu Feb 26 22:36:39 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 26 Feb 2015 22:36:39 -0600 Subject: [petsc-users] GMRES stability In-Reply-To: References: Message-ID: <0A826E90-DD1A-450B-BDE2-D5A4867B682B@mcs.anl.gov> By stability I assume you mean the the GMRES does not converge (or converges too slowly)? The way to improve GMRES convergence is with a preconditioner better suited to your problem. By default PETSc uses GMRES with a block Jacobi preconditioner with one block per process and ILU(0) on each block. For some problems this is fine, but for many problems it will give bad convergence. What do you get for -ksp_view (are you using the default?) Are you running yet in parallel? As a test on one process you can use GS in PETSc as the preconditioner and make sure you get similar convergence to your code. For example -ksp_richardson -pc_type sor on one processor will give you a GS solver. Once we know a bit more about your problem we can suggest better preconditioners. Barry > On Feb 26, 2015, at 10:25 PM, Orxan Shibliyev wrote: > > Hi > > I tried to solve Ax=b with my own Gauss-Seidel code and Petsc's GMRES. > With my GS, for a steady state problem I can set CFL=40 and for > unsteady case can set dt=0.1. However, for GMRES I can't set CFL more > than 5 and for unsteady case dt more than 0.00001. I need GMRES for > parallel computations so I cannot use GS for this purpose. Is there a > way to improve the stability of GMRES? From bhatiamanav at gmail.com Thu Feb 26 23:01:02 2015 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Thu, 26 Feb 2015 23:01:02 -0600 Subject: [petsc-users] changing solver options at runtime Message-ID: Hi, Is there a way to change, between two consecutive linear solves, the command line options that petsc uses to initialize the solver (using xxxSetFromOptions)? I am attempting a multidisciplinary simulation, such that each discipline has its own linear system of equations to solve (perhaps repeatedly), and I wish to set separate options for each disciplinary solve. Passing the options at command line will set the same values for each discipline, which is what I wish to change. Of course, this can be done by writing code to set each option, but the convenience of doing it through command line options is very attractive. Any help will be greatly appreciated. Thanks, Manav From bsmith at mcs.anl.gov Thu Feb 26 23:55:58 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 26 Feb 2015 23:55:58 -0600 Subject: [petsc-users] changing solver options at runtime In-Reply-To: References: Message-ID: TSSetOptionsPrefix() or SNESSetOptionsPrefix() or KSPSetOptionsPrefix(); you can call them at any time and then any call to TS/SNES/KSPSetFromOptions() on that object after that will use the command line options associated with that prefix. Check the manual pages. 
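For concreteness, a minimal sketch in C of the prefix pattern just described (not from the original message): each discipline gets its own KSP with its own options prefix, so the two solves can be tuned independently from one command line. The prefixes "fluid_"/"solid_" and the tiny 1D Laplacian test matrix are made up purely for illustration; the calls themselves (KSPSetOptionsPrefix(), KSPSetFromOptions(), and the petsc-3.5 form of KSPSetOperators()) are the real API.

#include <petscksp.h>

static PetscErrorCode SolveWithPrefix(Mat A,Vec b,Vec x,const char *prefix)
{
  KSP            ksp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(ksp,prefix);CHKERRQ(ierr);  /* e.g. "fluid_" */
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);           /* reads only the -<prefix>... options */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

int main(int argc,char **argv)
{
  Mat            A;
  Vec            b,x;
  PetscInt       i,Istart,Iend,n = 100,col[3];
  PetscScalar    v[3];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);CHKERRQ(ierr);
  /* a small 1D Laplacian, only so that the example actually runs */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n,n,3,NULL,1,NULL,&A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
  for (i=Istart; i<Iend; i++) {
    col[0] = i-1; col[1] = i; col[2] = i+1;
    v[0] = -1.0;  v[1] = 2.0; v[2] = -1.0;
    if (i == 0)        {ierr = MatSetValues(A,1,&i,2,&col[1],&v[1],INSERT_VALUES);CHKERRQ(ierr);}
    else if (i == n-1) {ierr = MatSetValues(A,1,&i,2,col,v,INSERT_VALUES);CHKERRQ(ierr);}
    else               {ierr = MatSetValues(A,1,&i,3,col,v,INSERT_VALUES);CHKERRQ(ierr);}
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD,&b);CHKERRQ(ierr);
  ierr = VecSetSizes(b,PETSC_DECIDE,n);CHKERRQ(ierr);
  ierr = VecSetFromOptions(b);CHKERRQ(ierr);
  ierr = VecDuplicate(b,&x);CHKERRQ(ierr);
  ierr = VecSet(b,1.0);CHKERRQ(ierr);

  ierr = SolveWithPrefix(A,b,x,"fluid_");CHKERRQ(ierr);  /* controlled by -fluid_* options */
  ierr = SolveWithPrefix(A,b,x,"solid_");CHKERRQ(ierr);  /* controlled by -solid_* options */

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

Run, for example, with
  -fluid_ksp_type cg -fluid_pc_type jacobi -solid_ksp_type gmres -solid_pc_type sor -solid_ksp_monitor
and each group of options is picked up only by the solve whose prefix matches.
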
Barry > On Feb 26, 2015, at 11:01 PM, Manav Bhatia wrote: > > Hi, > > Is there a way to change, between two consecutive linear solves, the command line options that petsc uses to initialize the solver (using xxxSetFromOptions)? > > I am attempting a multidisciplinary simulation, such that each discipline has its own linear system of equations to solve (perhaps repeatedly), and I wish to set separate options for each disciplinary solve. Passing the options at command line will set the same values for each discipline, which is what I wish to change. Of course, this can be done by writing code to set each option, but the convenience of doing it through command line options is very attractive. > > Any help will be greatly appreciated. > > Thanks, > Manav > > From zonexo at gmail.com Fri Feb 27 02:05:31 2015 From: zonexo at gmail.com (TAY wee-beng) Date: Fri, 27 Feb 2015 16:05:31 +0800 Subject: [petsc-users] hypre 2.10.0b and PETSc 3.5.3 Message-ID: <54F0254B.9060001@gmail.com> Hi, It seems that PETSc 3.5.3 download hpyre 2.9.1a by default. Is PETSc compatible with the latest hypre 2.10.0b? Or can I manually add the 2.10.0b download link? -- Thank you Yours sincerely, TAY wee-beng From hus003 at ucsd.edu Fri Feb 27 03:42:39 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Fri, 27 Feb 2015 09:42:39 +0000 Subject: [petsc-users] question about KSPSetOperators Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9BCD@XMAIL-MBX-BH1.AD.UCSD.EDU> For KSPSetOperators or KSPSetComputeOperators, one needs to define the matrix. Now, instead of specifying the matrix, is it possible to define a linear operator, say LinearOperator(Vec input, &Vec output), and set it to be the KSP operator? Thank you. Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Feb 27 05:30:53 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 27 Feb 2015 05:30:53 -0600 Subject: [petsc-users] GMRES stability In-Reply-To: References: <0A826E90-DD1A-450B-BDE2-D5A4867B682B@mcs.anl.gov> Message-ID: <239AD9F8-8DBE-42D6-B185-7BF246FC4900@mcs.anl.gov> Ok, please provide the rest of the information I asked for. Barry > On Feb 27, 2015, at 2:33 AM, Orxan Shibliyev wrote: > > No. It does not converge at all or iow it diverges. > > On Thu, Feb 26, 2015 at 9:36 PM, Barry Smith wrote: >> >> By stability I assume you mean the the GMRES does not converge (or converges too slowly)? >> >> The way to improve GMRES convergence is with a preconditioner better suited to your problem. By default PETSc uses GMRES with a block Jacobi preconditioner with one block per process and ILU(0) on each block. For some problems this is fine, but for many problems it will give bad convergence. >> >> What do you get for -ksp_view (are you using the default?) Are you running yet in parallel? >> >> As a test on one process you can use GS in PETSc as the preconditioner and make sure you get similar convergence to your code. For example -ksp_richardson -pc_type sor on one processor will give you a GS solver. >> >> Once we know a bit more about your problem we can suggest better preconditioners. >> >> Barry >> >> >>> On Feb 26, 2015, at 10:25 PM, Orxan Shibliyev wrote: >>> >>> Hi >>> >>> I tried to solve Ax=b with my own Gauss-Seidel code and Petsc's GMRES. >>> With my GS, for a steady state problem I can set CFL=40 and for >>> unsteady case can set dt=0.1. However, for GMRES I can't set CFL more >>> than 5 and for unsteady case dt more than 0.00001. 
I need GMRES for >>> parallel computations so I cannot use GS for this purpose. Is there a >>> way to improve the stability of GMRES? >> From bsmith at mcs.anl.gov Fri Feb 27 05:31:59 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 27 Feb 2015 05:31:59 -0600 Subject: [petsc-users] question about KSPSetOperators In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9BCD@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9BCD@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <680D975F-1EFF-490A-82A2-87F3CE252C7C@mcs.anl.gov> Yes, see MatCreateShell() and MatShellSetOperation() Barry > On Feb 27, 2015, at 3:42 AM, Sun, Hui wrote: > > For KSPSetOperators or KSPSetComputeOperators, one needs to define the matrix. Now, instead of specifying the matrix, is it possible to define a linear operator, say LinearOperator(Vec input, &Vec output), and set it to be the KSP operator? > > Thank you. > Hui From jed at jedbrown.org Fri Feb 27 06:35:43 2015 From: jed at jedbrown.org (Jed Brown) Date: Fri, 27 Feb 2015 05:35:43 -0700 Subject: [petsc-users] hypre 2.10.0b and PETSc 3.5.3 In-Reply-To: <54F0254B.9060001@gmail.com> References: <54F0254B.9060001@gmail.com> Message-ID: <87pp8vedls.fsf@jedbrown.org> TAY wee-beng writes: > Hi, > > It seems that PETSc 3.5.3 download hpyre 2.9.1a by default. Is PETSc > compatible with the latest hypre 2.10.0b? Or can I manually add the > 2.10.0b download link? Try it. Download the tarball and configure --download-hypre=/path/to/your/hypre-2.10.0b.tar.gz -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From haakon at hakostra.net Fri Feb 27 09:00:52 2015 From: haakon at hakostra.net (=?UTF-8?Q?H=C3=A5kon_Strandenes?=) Date: Fri, 27 Feb 2015 16:00:52 +0100 Subject: [petsc-users] =?utf-8?q?Bug_in_VecLoad=5FHDF5=5FDA_when_using_sin?= =?utf-8?q?gle_precision?= Message-ID: <4b4bc715605590aa97cee9f37e4921e1@webmail.domeneshop.no> Hi, Recently I decided to try using PETSc with single precision, and that resulted in a segmentation fault in my application. Digging a bit into this I quickly found a bug in VecLoad_HDF5_DA.c, line 858, where the HDF5 type 'H5T_NATIVE_DOUBLE' is hard-coded into H5Dread(), independent on the precision PETSc is built with. This obviously leads to a segmentation fault, since H5Dread() tries to fill twice as much data into the memory as there is allocated space for. I think this should be handled as in VecView_MPI_HDF5_DA, where there are some #if defined(...) that sets a variable to pass on to the HDF5 functions depending on the floating point type PETSc is compiled with. That did at least solve my segmentation fault problems. Have a nice weekend. Regards, H?kon From alice.raeli at math.u-bordeaux1.fr Fri Feb 27 10:57:07 2015 From: alice.raeli at math.u-bordeaux1.fr (Alice Raeli) Date: Fri, 27 Feb 2015 17:57:07 +0100 Subject: [petsc-users] Trouble finding PETSc with cmake Message-ID: Hi all, i?m trying to build my project using cmake. PETSc is already installed and its tests runned but when I try cmake .. for my project it appears that it doesn?t find PETSc. The error message is: cmake module path /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/cmake-modules dir /usr/local/petsc arch arch-darwin-c-debug CMake Warning (dev) at cmake-modules/FindPackageMultipass.cmake:48 (if): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. 
Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "/usr/local/petsc" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): cmake-modules/FindPETSc.cmake:108 (find_package_multipass) test/CMakeLists.txt:8 (FIND_PACKAGE) This warning is for project developers. Use -Wno-dev to suppress it. CMake Error at /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:138 (message): PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. (missing: PETSC_EXECUTABLE_RUNS) Call Stack (most recent call first): /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:374 (_FPHSA_FAILURE_MESSAGE) cmake-modules/FindPETSc.cmake:323 (find_package_handle_standard_args) test/CMakeLists.txt:8 (FIND_PACKAGE) ______________ PETSC_DIR and PETSC_ARCH are: /usr/local/petsc/ arch-darwin-c-debug but also following the given instruction it seems not reading the correct path. I work on OS X Yosemite 10.10.2 cmake version 3.1.3 CmakeLists.txt [?] SET(CMAKE_C_COMPILER mpicc) SET(CMAKE_CXX_COMPILER mpicxx) SET (CMAKE_C_FLAGS_INIT "-Wall -std=c99") SET (CMAKE_C_FLAGS_DEBUG_INIT "-g") SET (CMAKE_C_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") SET (CMAKE_C_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") SET (CMAKE_C_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") SET (CMAKE_CXX_FLAGS_INIT "-Wall") SET (CMAKE_CXX_FLAGS_DEBUG_INIT "-g") SET (CMAKE_CXX_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") SET (CMAKE_CXX_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") SET (CMAKE_CXX_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") [?] Have you got a solution? Thanks, Alice Alice Raeli alice.raeli at math.u-bordeaux1.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Feb 27 12:05:02 2015 From: jed at jedbrown.org (Jed Brown) Date: Fri, 27 Feb 2015 11:05:02 -0700 Subject: [petsc-users] Trouble finding PETSc with cmake In-Reply-To: References: Message-ID: <8761andycx.fsf@jedbrown.org> Alice Raeli writes: > Hi all, > > i?m trying to build my project using cmake. > PETSc is already installed and its tests runned but when I try cmake .. for my project it appears that it doesn?t find PETSc. The error message is: CMake insists on sequestering all the useful information in CMakeFiles/CMakeError.log and CMakeFiles/CMakeOutput.log. Can you check those files for details about the error? Send the output here if it doesn't make sense to you. > cmake module path /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/cmake-modules > dir /usr/local/petsc arch arch-darwin-c-debug > CMake Warning (dev) at cmake-modules/FindPackageMultipass.cmake:48 (if): > Policy CMP0054 is not set: Only interpret if() arguments as variables or > keywords when unquoted. Run "cmake --help-policy CMP0054" for policy > details. Use the cmake_policy command to set the policy and suppress this > warning. > > Quoted variables like "/usr/local/petsc" will no longer be dereferenced > when the policy is set to NEW. Since the policy is not set the OLD > behavior will be used. > Call Stack (most recent call first): > cmake-modules/FindPETSc.cmake:108 (find_package_multipass) > test/CMakeLists.txt:8 (FIND_PACKAGE) > This warning is for project developers. Use -Wno-dev to suppress it. 
> > CMake Error at /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:138 (message): > PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. > (missing: PETSC_EXECUTABLE_RUNS) > Call Stack (most recent call first): > /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:374 (_FPHSA_FAILURE_MESSAGE) > cmake-modules/FindPETSc.cmake:323 (find_package_handle_standard_args) > test/CMakeLists.txt:8 (FIND_PACKAGE) > > ______________ > > PETSC_DIR and PETSC_ARCH are: > /usr/local/petsc/ > arch-darwin-c-debug > > but also following the given instruction it seems not reading the correct path. > > I work on OS X Yosemite 10.10.2 > > cmake version 3.1.3 > > > CmakeLists.txt [?] > SET(CMAKE_C_COMPILER mpicc) > > SET(CMAKE_CXX_COMPILER mpicxx) > > SET (CMAKE_C_FLAGS_INIT "-Wall -std=c99") > SET (CMAKE_C_FLAGS_DEBUG_INIT "-g") > SET (CMAKE_C_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") > SET (CMAKE_C_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") > SET (CMAKE_C_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") > > SET (CMAKE_CXX_FLAGS_INIT "-Wall") > SET (CMAKE_CXX_FLAGS_DEBUG_INIT "-g") > SET (CMAKE_CXX_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") > SET (CMAKE_CXX_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") > SET (CMAKE_CXX_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") > > [?] > > Have you got a solution? > > Thanks, > Alice > > Alice Raeli > alice.raeli at math.u-bordeaux1.fr -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From haakon at hakostra.net Fri Feb 27 12:13:13 2015 From: haakon at hakostra.net (=?UTF-8?Q?H=C3=A5kon_Strandenes?=) Date: Fri, 27 Feb 2015 19:13:13 +0100 Subject: [petsc-users] Trouble finding PETSc with cmake In-Reply-To: References: Message-ID: <3b7d51f094011b3a32478df4c7843ede@webmail.domeneshop.no> I use the PkgConfig CMake module with PETSc in my projects like this: find_package(PkgConfig) pkg_search_module(PETSC REQUIRED PETSc) The only requirement is that the environment variable $PKG_CONFIG_PATH is set correctly (f.ex. 'arch-linux2-c-debug/lib/pkgconfig/'). I have found this this to be rather successful on my machines. Perhaps it works for you as well? Regards, H?kon Den 2015-02-27 17:57, skrev Alice Raeli: > Hi all, > > i?m trying to build my project using cmake. > PETSc is already installed and its tests runned but when I try cmake > .. for my project it appears that it doesn?t find PETSc. The error > message is: > > cmake module path > /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/cmake-modules > dir /usr/local/petsc arch arch-darwin-c-debug > CMake Warning (dev) at cmake-modules/FindPackageMultipass.cmake:48 > (if): > Policy CMP0054 is not set: Only interpret if() arguments as variables > or > keywords when unquoted. Run "cmake --help-policy CMP0054" for policy > details. Use the cmake_policy command to set the policy and suppress > this > warning. > > Quoted variables like "/usr/local/petsc" will no longer be > dereferenced > when the policy is set to NEW. Since the policy is not set the OLD > behavior will be used. > Call Stack (most recent call first): > cmake-modules/FindPETSc.cmake:108 (find_package_multipass) > test/CMakeLists.txt:8 (FIND_PACKAGE) > This warning is for project developers. Use -Wno-dev to suppress it. 
> > CMake Error at > /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:138 > (message): > PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. > (missing: PETSC_EXECUTABLE_RUNS) > Call Stack (most recent call first): > > /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:374 > (_FPHSA_FAILURE_MESSAGE) > cmake-modules/FindPETSc.cmake:323 (find_package_handle_standard_args) > test/CMakeLists.txt:8 (FIND_PACKAGE) > > ______________ > > PETSC_DIR and PETSC_ARCH are: > > /usr/local/petsc/ > > arch-darwin-c-debug > > but also following the given instruction it seems not reading the > correct path. > > I work on OS X Yosemite 10.10.2 > > cmake version 3.1.3 > > CmakeLists.txt [?] > > SET(CMAKE_C_COMPILER mpicc) > > SET(CMAKE_CXX_COMPILER mpicxx) > > SET (CMAKE_C_FLAGS_INIT "-Wall -std=c99") > SET (CMAKE_C_FLAGS_DEBUG_INIT "-g") > SET (CMAKE_C_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") > SET (CMAKE_C_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") > SET (CMAKE_C_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") > > SET (CMAKE_CXX_FLAGS_INIT "-Wall") > SET (CMAKE_CXX_FLAGS_DEBUG_INIT "-g") > SET (CMAKE_CXX_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") > SET (CMAKE_CXX_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") > SET (CMAKE_CXX_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") > > [?] > > Have you got a solution? > > Thanks, > Alice > > Alice Raeli > alice.raeli at math.u-bordeaux1.fr From aliraeli at math.u-bordeaux1.fr Fri Feb 27 12:57:12 2015 From: aliraeli at math.u-bordeaux1.fr (Alice Raeli) Date: Fri, 27 Feb 2015 19:57:12 +0100 Subject: [petsc-users] Trouble finding PETSc with cmake In-Reply-To: <8761andycx.fsf@jedbrown.org> References: <8761andycx.fsf@jedbrown.org> Message-ID: <33B78377-3639-4F0D-B5FE-A9A82B84BCDF@math.u-bordeaux1.fr> I have set PkgConfig CMake module and it changed the error with the same result -- checking for one of the modules 'PETSc' CMake Error at /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPkgConfig.cmake:548 (message): None of the required 'PETSc' found Call Stack (most recent call first): CMakeLists.txt:25 (pkg_search_module) cmake module path /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/cmake-modules -- checking for one of the modules 'PETSc' CMake Error at /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPkgConfig.cmake:548 (message): None of the required 'PETSc' found Call Stack (most recent call first): test/CMakeLists.txt:10 (pkg_search_module) -- Configuring incomplete, errors occurred! See also "/Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/build/CMakeFiles/CMakeOutput.log". CMake Error: Unable to open check cache file for write. /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/build/CMakeFiles/cmake.check_cache > Il giorno 27 f?vr. 2015, alle ore 19:05, Jed Brown ha scritto: > > Alice Raeli writes: > >> Hi all, >> >> i?m trying to build my project using cmake. >> PETSc is already installed and its tests runned but when I try cmake .. for my project it appears that it doesn?t find PETSc. The error message is: > > CMake insists on sequestering all the useful information in > CMakeFiles/CMakeError.log and CMakeFiles/CMakeOutput.log. Can you check > those files for details about the error? Send the output here if it > doesn't make sense to you. 
> CMakeOutput.log produced: ignore line: [/Library/Developer/CommandLineTools/usr/bin/make -f CMakeFiles/cmTryCompileExec251571393.dir/build.make CMakeFiles/cmTryCompileExec251571393.dir/build] ignore line: [/usr/local/homebrew/Cellar/cmake/3.1.3/bin/cmake -E cmake_progress_report /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/build/CMakeFiles/CMakeTmp/CMakeFiles 1] ignore line: [Building CXX object CMakeFiles/cmTryCompileExec251571393.dir/CMakeCXXCompilerABI.cpp.o] ignore line: [/usr/bin/c++ -o CMakeFiles/cmTryCompileExec251571393.dir/CMakeCXXCompilerABI.cpp.o -c /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/CMakeCXXCompilerABI.cpp] ignore line: [Linking CXX executable cmTryCompileExec251571393] ignore line: [/usr/local/homebrew/Cellar/cmake/3.1.3/bin/cmake -E cmake_link_script CMakeFiles/cmTryCompileExec251571393.dir/link.txt --verbose=1] ignore line: [/usr/bin/c++ -Wl,-search_paths_first -Wl,-headerpad_max_install_names -v -Wl,-v CMakeFiles/cmTryCompileExec251571393.dir/CMakeCXXCompilerABI.cpp.o -o cmTryCompileExec251571393 ] ignore line: [Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)] ignore line: [Target: x86_64-apple-darwin14.1.0] ignore line: [Thread model: posix] link line: [ "/Library/Developer/CommandLineTools/usr/bin/ld" -demangle -dynamic -arch x86_64 -macosx_version_min 10.10.0 -o cmTryCompileExec251571393 -search_paths_first -headerpad_max_install_names -v CMakeFiles/cmTryCompileExec251571393.dir/CMakeCXXCompilerABI.cpp.o -lc++ -lSystem /Library/Developer/CommandLineTools/usr/bin/../lib/clang/6.0/lib/darwin/libclang_rt.osx.a] arg [/Library/Developer/CommandLineTools/usr/bin/ld] ==> ignore arg [-demangle] ==> ignore arg [-dynamic] ==> ignore arg [-arch] ==> ignore arg [x86_64] ==> ignore arg [-macosx_version_min] ==> ignore arg [10.10.0] ==> ignore arg [-o] ==> ignore arg [cmTryCompileExec251571393] ==> ignore arg [-search_paths_first] ==> ignore arg [-headerpad_max_install_names] ==> ignore arg [-v] ==> ignore arg [CMakeFiles/cmTryCompileExec251571393.dir/CMakeCXXCompilerABI.cpp.o] ==> ignore arg [-lc++] ==> lib [c++] arg [-lSystem] ==> lib [System] arg [/Library/Developer/CommandLineTools/usr/bin/../lib/clang/6.0/lib/darwin/libclang_rt.osx.a] ==> lib [/Library/Developer/CommandLineTools/usr/bin/../lib/clang/6.0/lib/darwin/libclang_rt.osx.a] Library search paths: [;/usr/lib;/usr/local/lib] Framework search paths: [;/Library/Frameworks/;/System/Library/Frameworks/] remove lib [System] collapse lib [/Library/Developer/CommandLineTools/usr/bin/../lib/clang/6.0/lib/darwin/libclang_rt.osx.a] ==> [/Library/Developer/CommandLineTools/usr/lib/clang/6.0/lib/darwin/libclang_rt.osx.a] collapse library dir [/usr/lib] ==> [/usr/lib] collapse library dir [/usr/local/lib] ==> [/usr/local/lib] collapse framework dir [/Library/Frameworks/] ==> [/Library/Frameworks] collapse framework dir [/System/Library/Frameworks/] ==> [/System/Library/Frameworks] implicit libs: [c++;/Library/Developer/CommandLineTools/usr/lib/clang/6.0/lib/darwin/libclang_rt.osx.a] implicit dirs: [/usr/lib;/usr/local/lib] implicit fwks: [/Library/Frameworks;/System/Library/Frameworks] However CMakeError.log appearly has not been built yet. 
Yours sincerely, Alice >> cmake module path /Users/Alice/Documents/Work/Programmazione/TEST_AUTOMATIZZAZIONE/ParallelOrder/Penalizzazione/cmake-modules >> dir /usr/local/petsc arch arch-darwin-c-debug >> CMake Warning (dev) at cmake-modules/FindPackageMultipass.cmake:48 (if): >> Policy CMP0054 is not set: Only interpret if() arguments as variables or >> keywords when unquoted. Run "cmake --help-policy CMP0054" for policy >> details. Use the cmake_policy command to set the policy and suppress this >> warning. >> >> Quoted variables like "/usr/local/petsc" will no longer be dereferenced >> when the policy is set to NEW. Since the policy is not set the OLD >> behavior will be used. >> Call Stack (most recent call first): >> cmake-modules/FindPETSc.cmake:108 (find_package_multipass) >> test/CMakeLists.txt:8 (FIND_PACKAGE) >> This warning is for project developers. Use -Wno-dev to suppress it. >> >> CMake Error at /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:138 (message): >> PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. >> (missing: PETSC_EXECUTABLE_RUNS) >> Call Stack (most recent call first): >> /usr/local/homebrew/Cellar/cmake/3.1.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:374 (_FPHSA_FAILURE_MESSAGE) >> cmake-modules/FindPETSc.cmake:323 (find_package_handle_standard_args) >> test/CMakeLists.txt:8 (FIND_PACKAGE) >> >> ______________ >> >> PETSC_DIR and PETSC_ARCH are: >> /usr/local/petsc/ >> arch-darwin-c-debug >> >> but also following the given instruction it seems not reading the correct path. >> >> I work on OS X Yosemite 10.10.2 >> >> cmake version 3.1.3 >> >> >> CmakeLists.txt [?] >> SET(CMAKE_C_COMPILER mpicc) >> >> SET(CMAKE_CXX_COMPILER mpicxx) >> >> SET (CMAKE_C_FLAGS_INIT "-Wall -std=c99") >> SET (CMAKE_C_FLAGS_DEBUG_INIT "-g") >> SET (CMAKE_C_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") >> SET (CMAKE_C_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") >> SET (CMAKE_C_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") >> >> SET (CMAKE_CXX_FLAGS_INIT "-Wall") >> SET (CMAKE_CXX_FLAGS_DEBUG_INIT "-g") >> SET (CMAKE_CXX_FLAGS_MINSIZEREL_INIT "-Os -DNDEBUG") >> SET (CMAKE_CXX_FLAGS_RELEASE_INIT "-O4 -DNDEBUG") >> SET (CMAKE_CXX_FLAGS_RELWITHDEBINFO_INIT "-O2 -g") >> >> [?] >> >> Have you got a solution? >> >> Thanks, >> Alice >> >> Alice Raeli >> alice.raeli at math.u-bordeaux1.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Feb 27 13:17:41 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 27 Feb 2015 13:17:41 -0600 Subject: [petsc-users] Bug in VecLoad_HDF5_DA when using single precision In-Reply-To: <4b4bc715605590aa97cee9f37e4921e1@webmail.domeneshop.no> References: <4b4bc715605590aa97cee9f37e4921e1@webmail.domeneshop.no> Message-ID: Thanks. Fixed in maint, master and next > On Feb 27, 2015, at 9:00 AM, H?kon Strandenes wrote: > > Hi, > > Recently I decided to try using PETSc with single precision, and that resulted in a segmentation fault in my application. Digging a bit into this I quickly found a bug in VecLoad_HDF5_DA.c, line 858, where the HDF5 type 'H5T_NATIVE_DOUBLE' is hard-coded into H5Dread(), independent on the precision PETSc is built with. This obviously leads to a segmentation fault, since H5Dread() tries to fill twice as much data into the memory as there is allocated space for. > > I think this should be handled as in VecView_MPI_HDF5_DA, where there are some #if defined(...) 
that sets a variable to pass on to the HDF5 functions depending on the floating point type PETSc is compiled with. That did at least solve my segmentation fault problems. > > Have a nice weekend. > > Regards, > H?kon From hus003 at ucsd.edu Fri Feb 27 18:36:39 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Sat, 28 Feb 2015 00:36:39 +0000 Subject: [petsc-users] DMDA with dof=4, multigrid solver Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU> I'm trying to work on 4 Poisson's equations defined on a DMDA grid, Hence the parameter dof in DMDACreate3d should be 4, and I've set stencil width to be 4, and stencil type to be star. If I run the code with -pc_type ilu and -ksp_type gmres, it works alright. However, if I run with pc_type mg, it gives me an error saying that when it is doing MatSetValues, the argument is out of range, and there is a new nonzero at (60,64) in the matrix. However, that new nonzero is expected to be there, the row number 60 corresponds to i=15 and c=0 in x direction, and the column number 64 corresponds to i=16 and c=0 in x direction. So they are next to each other, and the star stencil with width 1 should include that. I have also checked with the memory allocations, and I'm found no problem. So I'm wondering if there is any problem of using multigrid on a DMDA with dof greater than 1? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Feb 27 19:11:42 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 27 Feb 2015 19:11:42 -0600 Subject: [petsc-users] DMDA with dof=4, multigrid solver In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: > On Feb 27, 2015, at 6:36 PM, Sun, Hui wrote: > > I'm trying to work on 4 Poisson's equations defined on a DMDA grid, Hence the parameter dof in DMDACreate3d should be 4, and I've set stencil width to be 4, and stencil type to be star. Use a stencil width of 1, not 4. The stencil width is defined in terms of dof. > > If I run the code with -pc_type ilu and -ksp_type gmres, it works alright. > > However, if I run with pc_type mg, it gives me an error saying that when it is doing MatSetValues, the argument is out of range, and there is a new nonzero at (60,64) in the matrix. However, that new nonzero is expected to be there, the row number 60 corresponds to i=15 and c=0 in x direction, and the column number 64 corresponds to i=16 and c=0 in x direction. So they are next to each other, and the star stencil with width 1 should include that. I have also checked with the memory allocations, and I'm found no problem. > > So I'm wondering if there is any problem of using multigrid on a DMDA with dof greater than 1? No it handles dof > 1 fine. Send your code. Barry > > Thank you! From hus003 at ucsd.edu Fri Feb 27 19:25:33 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Sat, 28 Feb 2015 01:25:33 +0000 Subject: [petsc-users] DMDA with dof=4, multigrid solver In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C62@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Barry. Another question: I observe that in those ksp examples, whenever multigrid is used, DMDA is also used, besides, KSPSetComputeOperators and KSPSetComputeRHS are also used. Is it true that 1) Only DMDA can use mg? 
2) We have to set up matrices and rhs using KSPSetComputeOperators and KSPSetComputeRHS? We cannot create a matrix and add it to KSP if we want to use mg? Best, Hui ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Friday, February 27, 2015 5:11 PM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver > On Feb 27, 2015, at 6:36 PM, Sun, Hui wrote: > > I'm trying to work on 4 Poisson's equations defined on a DMDA grid, Hence the parameter dof in DMDACreate3d should be 4, and I've set stencil width to be 4, and stencil type to be star. Use a stencil width of 1, not 4. The stencil width is defined in terms of dof. > > If I run the code with -pc_type ilu and -ksp_type gmres, it works alright. > > However, if I run with pc_type mg, it gives me an error saying that when it is doing MatSetValues, the argument is out of range, and there is a new nonzero at (60,64) in the matrix. However, that new nonzero is expected to be there, the row number 60 corresponds to i=15 and c=0 in x direction, and the column number 64 corresponds to i=16 and c=0 in x direction. So they are next to each other, and the star stencil with width 1 should include that. I have also checked with the memory allocations, and I'm found no problem. > > So I'm wondering if there is any problem of using multigrid on a DMDA with dof greater than 1? No it handles dof > 1 fine. Send your code. Barry > > Thank you! From hus003 at ucsd.edu Fri Feb 27 20:25:28 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Sat, 28 Feb 2015 02:25:28 +0000 Subject: [petsc-users] DMDA with dof=4, multigrid solver In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C6D@XMAIL-MBX-BH1.AD.UCSD.EDU> Sorry, I misread your email. I thought you were saying that it only handles dof = 1 fine. Sure I will send you the code. However, the code has some other dependencies. Let me remove those and send it to you in one file. Thanks a lot. ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Friday, February 27, 2015 5:11 PM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver > On Feb 27, 2015, at 6:36 PM, Sun, Hui wrote: > > I'm trying to work on 4 Poisson's equations defined on a DMDA grid, Hence the parameter dof in DMDACreate3d should be 4, and I've set stencil width to be 4, and stencil type to be star. Use a stencil width of 1, not 4. The stencil width is defined in terms of dof. > > If I run the code with -pc_type ilu and -ksp_type gmres, it works alright. > > However, if I run with pc_type mg, it gives me an error saying that when it is doing MatSetValues, the argument is out of range, and there is a new nonzero at (60,64) in the matrix. However, that new nonzero is expected to be there, the row number 60 corresponds to i=15 and c=0 in x direction, and the column number 64 corresponds to i=16 and c=0 in x direction. So they are next to each other, and the star stencil with width 1 should include that. I have also checked with the memory allocations, and I'm found no problem. > > So I'm wondering if there is any problem of using multigrid on a DMDA with dof greater than 1? No it handles dof > 1 fine. Send your code. Barry > > Thank you! 
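For reference, a minimal sketch (not from the thread) of the setup being discussed above: a DMDA with dof = 4 but stencil width 1 (the width counts grid points, not unknowns), wired to a KSP through KSPSetDM()/KSPSetComputeOperators()/KSPSetComputeRHS() so that -pc_type mg (or -pc_type gamg) can build the grid hierarchy itself. The per-component 7-point Laplacian and the constant right-hand side are placeholders, and the calling sequence follows the petsc-3.5 API used in this thread (newer releases also require DMSetUp() after DMDACreate3d()).

#include <petscksp.h>
#include <petscdmda.h>

/* Assemble, for each of the dof components independently, the standard 7-point
   Laplacian (identity rows on the boundary).  Interior coupling is only to the
   same component c of the six grid neighbours, which is exactly what a star
   stencil of width 1 provides. */
static PetscErrorCode ComputeMatrix(KSP ksp,Mat A,Mat P,void *ctx)
{
  DM             da;
  DMDALocalInfo  info;
  PetscInt       i,j,k,c;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetDM(ksp,&da);CHKERRQ(ierr);             /* the DM of this multigrid level */
  ierr = DMDAGetLocalInfo(da,&info);CHKERRQ(ierr);
  for (k=info.zs; k<info.zs+info.zm; k++) {
    for (j=info.ys; j<info.ys+info.ym; j++) {
      for (i=info.xs; i<info.xs+info.xm; i++) {
        for (c=0; c<info.dof; c++) {
          MatStencil  row,col[7];
          PetscScalar v[7];
          PetscInt    n = 0;
          row.i = i; row.j = j; row.k = k; row.c = c;
          if (i==0 || j==0 || k==0 || i==info.mx-1 || j==info.my-1 || k==info.mz-1) {
            col[0] = row; v[0] = 1.0; n = 1;           /* boundary row: identity */
          } else {
            col[n] = row;                 v[n++] =  6.0;
            col[n] = row; col[n].i = i-1; v[n++] = -1.0;
            col[n] = row; col[n].i = i+1; v[n++] = -1.0;
            col[n] = row; col[n].j = j-1; v[n++] = -1.0;
            col[n] = row; col[n].j = j+1; v[n++] = -1.0;
            col[n] = row; col[n].k = k-1; v[n++] = -1.0;
            col[n] = row; col[n].k = k+1; v[n++] = -1.0;
          }
          ierr = MatSetValuesStencil(P,1,&row,n,col,v,INSERT_VALUES);CHKERRQ(ierr);
        }
      }
    }
  }
  ierr = MatAssemblyBegin(P,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(P,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

static PetscErrorCode ComputeRHS(KSP ksp,Vec b,void *ctx)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecSet(b,1.0);CHKERRQ(ierr);                  /* placeholder right-hand side */
  PetscFunctionReturn(0);
}

int main(int argc,char **argv)
{
  DM             da;
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);CHKERRQ(ierr);
  /* dof = 4, stencil width = 1: the width is measured in grid points, not in unknowns */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,-17,-17,-17,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                      4,1,NULL,NULL,NULL,&da);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetDM(ksp,da);CHKERRQ(ierr);
  ierr = KSPSetComputeOperators(ksp,ComputeMatrix,NULL);CHKERRQ(ierr);
  ierr = KSPSetComputeRHS(ksp,ComputeRHS,NULL);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,NULL,NULL);CHKERRQ(ierr);        /* vectors come from the DM */
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

Run with, e.g., -pc_type mg -pc_mg_levels 3 -ksp_monitor (the default 17^3 grid coarsens cleanly), or with -pc_type gamg; because the operator is assembled with MatSetValuesStencil() against the DMDA's own preallocation, entries such as the (i=15,c=0)-(i=16,c=0) coupling mentioned in the quoted message fit inside the width-1 star stencil.
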
From hus003 at ucsd.edu Fri Feb 27 21:46:20 2015 From: hus003 at ucsd.edu (Sun, Hui) Date: Sat, 28 Feb 2015 03:46:20 +0000 Subject: [petsc-users] DMDA with dof=4, multigrid solver In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C6D@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU>, , <7501CC2B7BBCC44A92ECEEC316170ECB010E9C6D@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C81@XMAIL-MBX-BH1.AD.UCSD.EDU> Barry, sorry but I can see that with a simpler version, mg starts to work with dof = 4. So maybe there are some bugs in my original code. ________________________________________ From: Sun, Hui Sent: Friday, February 27, 2015 6:25 PM To: Barry Smith Cc: petsc-users at mcs.anl.gov Subject: RE: [petsc-users] DMDA with dof=4, multigrid solver Sorry, I misread your email. I thought you were saying that it only handles dof = 1 fine. Sure I will send you the code. However, the code has some other dependencies. Let me remove those and send it to you in one file. Thanks a lot. ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Friday, February 27, 2015 5:11 PM To: Sun, Hui Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver > On Feb 27, 2015, at 6:36 PM, Sun, Hui wrote: > > I'm trying to work on 4 Poisson's equations defined on a DMDA grid, Hence the parameter dof in DMDACreate3d should be 4, and I've set stencil width to be 4, and stencil type to be star. Use a stencil width of 1, not 4. The stencil width is defined in terms of dof. > > If I run the code with -pc_type ilu and -ksp_type gmres, it works alright. > > However, if I run with pc_type mg, it gives me an error saying that when it is doing MatSetValues, the argument is out of range, and there is a new nonzero at (60,64) in the matrix. However, that new nonzero is expected to be there, the row number 60 corresponds to i=15 and c=0 in x direction, and the column number 64 corresponds to i=16 and c=0 in x direction. So they are next to each other, and the star stencil with width 1 should include that. I have also checked with the memory allocations, and I'm found no problem. > > So I'm wondering if there is any problem of using multigrid on a DMDA with dof greater than 1? No it handles dof > 1 fine. Send your code. Barry > > Thank you! From elbueler at alaska.edu Fri Feb 27 22:28:58 2015 From: elbueler at alaska.edu (Ed Bueler) Date: Fri, 27 Feb 2015 21:28:58 -0700 Subject: [petsc-users] do SNESVI objects support multigrid? Message-ID: Dear Petsc -- I am confused on whether -pc_type mg is supported for SNESVI solvers. The error messages sure aren't helping. I am using petsc-maint. First I looked at couple of simplest examples in src/snes/examples/tutorials, namely ex9 and ex54. I got mixed results (below), namely crashes with 'nonconforming object sizes' or 'zero pivot' errors in 3 of 4 cases. But I am not sure from the very limited docs what is supposed to work. 
The documented ex54 run itself (see line 12 of ex54.c) just hangs/stagnates for me: $ ./ex54 -pc_type mg -pc_mg_galerkin -T .01 -da_grid_x 65 -da_grid_y 65 -pc_mg_levels 4 -ksp_type fgmres -snes_atol 1.e-14 -mat_no_inode -snes_vi_monitor -snes_monitor 0 SNES Function norm 6.177810506000e-03 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 0 SNES Function norm 6.177810506000e-03 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 0 SNES Function norm 6.177810506000e-03 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 0 SNES Function norm 6.177810506000e-03 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 0 SNES Function norm 6.177810506000e-03 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 ... The wordy documented run of ex55.c stagnates and then seg faults for me (see attachment): ~/petsc-maint/src/snes/examples/tutorials[maint*]$ ./ex55 -ksp_type fgmres -pc_type mg -mg_levels_ksp_type fgmres -mg_levels_pc_type fieldsplit -mg_levels_pc_fieldsplit_detect_saddle_point -mg_levels_pc_fieldsplit_type schur -mg_levels_pc_fieldsplit_factorization_type full -mg_levels_pc_fieldsplit_schur_precondition user -mg_levels_fieldsplit_1_ksp_type gmres -mg_levels_fieldsplit_1_pc_type none -mg_levels_fieldsplit_0_ksp_type preonly -mg_levels_fieldsplit_0_pc_type sor -mg_levels_fieldsplit_0_pc_sor_forward -snes_vi_monitor -ksp_monitor_true_residual -pc_mg_levels 5 -pc_mg_galerkin -mg_levels_ksp_monitor -mg_levels_fieldsplit_ksp_monitor -mg_levels_ksp_max_it 2 -mg_levels_fieldsplit_ksp_max_it 5 -snes_atol 1.e-11 -mg_coarse_ksp_type preonly -mg_coarse_pc_type svd -da_grid_x 65 -da_grid_y 65 -ksp_rtol 1.e-8 &> out.ex55err These examples all seem to perform o.k. with default (and non-multigrid) options. Thanks for help! Ed errors from simple ex9, ex54 runs below: ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex9 -da_refine 2 -snes_type vinewtonrsls -pc_type mg setup done: square side length = 4.000 grid Mx,My = 41,41 spacing dx,dy = 0.100,0.100 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Nonconforming object sizes [0]PETSC ERROR: Mat mat,Vec x: global dim 1388 1681 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./ex9 on a linux-c-opt named bueler-leopard by ed Fri Feb 27 21:02:20 2015 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=0 [0]PETSC ERROR: #1 MatMultTranspose() line 2232 in /home/ed/petsc-maint/src/mat/interface/matrix.c [0]PETSC ERROR: #2 MatRestrict() line 7529 in /home/ed/petsc-maint/src/mat/interface/matrix.c [0]PETSC ERROR: #3 DMRestrictHook_SNESVecSol() line 480 in /home/ed/petsc-maint/src/snes/interface/snes.c [0]PETSC ERROR: #4 DMRestrict() line 2022 in /home/ed/petsc-maint/src/dm/interface/dm.c [0]PETSC ERROR: #5 PCSetUp_MG() line 699 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #6 PCSetUp() line 902 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #7 KSPSetUp() line 306 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #8 SNESSolve_VINEWTONRSLS() line 506 in /home/ed/petsc-maint/src/snes/impls/vi/rs/virs.c [0]PETSC ERROR: #9 SNESSolve() line 3743 in /home/ed/petsc-maint/src/snes/interface/snes.c [0]PETSC ERROR: #10 main() line 122 in /home/ed/petsc-maint/src/snes/examples/tutorials/ex9.c [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 60) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 60) - process 0 ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex9 -da_refine 2 -snes_type vinewtonssls -pc_type mg setup done: square side length = 4.000 grid Mx,My = 41,41 spacing dx,dy = 0.100,0.100 number of Newton iterations = 8; result = CONVERGED_FNORM_RELATIVE errors: av |u-uexact| = 2.909e-04 |u-uexact|_inf = 1.896e-03 ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex54 -snes_monitor -da_refine 2 -snes_type vinewtonssls -pc_type mg 0 SNES Function norm 3.156635589354e-02 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Zero pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot [0]PETSC ERROR: Zero pivot, row 0 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./ex54 on a linux-c-opt named bueler-leopard by ed Fri Feb 27 21:02:43 2015 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=0 [0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_2() line 50 in /home/ed/petsc-maint/src/mat/impls/baij/seq/dgefa2.c [0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2806 in /home/ed/petsc-maint/src/mat/impls/aij/seq/inode.c [0]PETSC ERROR: #3 MatSOR() line 3643 in /home/ed/petsc-maint/src/mat/interface/matrix.c [0]PETSC ERROR: #4 PCApply_SOR() line 35 in /home/ed/petsc-maint/src/ksp/pc/impls/sor/sor.c [0]PETSC ERROR: #5 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #6 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h [0]PETSC ERROR: #7 KSPInitialResidual() line 56 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #8 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #9 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 368 in /home/ed/petsc-maint/src/ksp/ksp/impls/cheby/cheby.c [0]PETSC ERROR: #11 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #14 PCApply_MG() line 337 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #15 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #16 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h [0]PETSC ERROR: #17 KSPInitialResidual() line 63 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #18 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #19 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #20 SNESSolve_VINEWTONSSLS() line 317 in /home/ed/petsc-maint/src/snes/impls/vi/ss/viss.c [0]PETSC ERROR: #21 SNESSolve() line 3743 in /home/ed/petsc-maint/src/snes/interface/snes.c [0]PETSC ERROR: #22 main() line 98 in /home/ed/petsc-maint/src/snes/examples/tutorials/ex54.c [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex54 -snes_monitor -da_refine 2 -snes_type vinewtonrsls -pc_type mg 0 SNES Function norm 3.160548858489e-02 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Zero pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot [0]PETSC ERROR: Zero pivot, row 0 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.5.3, unknown [0]PETSC ERROR: ./ex54 on a linux-c-opt named bueler-leopard by ed Fri Feb 27 21:02:48 2015 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=0 [0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_2() line 50 in /home/ed/petsc-maint/src/mat/impls/baij/seq/dgefa2.c [0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2806 in /home/ed/petsc-maint/src/mat/impls/aij/seq/inode.c [0]PETSC ERROR: #3 MatSOR() line 3643 in /home/ed/petsc-maint/src/mat/interface/matrix.c [0]PETSC ERROR: #4 PCApply_SOR() line 35 in /home/ed/petsc-maint/src/ksp/pc/impls/sor/sor.c [0]PETSC ERROR: #5 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #6 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h [0]PETSC ERROR: #7 KSPInitialResidual() line 56 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #8 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #9 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 368 in /home/ed/petsc-maint/src/ksp/ksp/impls/cheby/cheby.c [0]PETSC ERROR: #11 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #14 PCApply_MG() line 337 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #15 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #16 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h [0]PETSC ERROR: #17 KSPInitialResidual() line 63 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #18 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #19 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #20 SNESSolve_VINEWTONRSLS() line 536 in /home/ed/petsc-maint/src/snes/impls/vi/rs/virs.c [0]PETSC ERROR: #21 SNESSolve() line 3743 in /home/ed/petsc-maint/src/snes/interface/snes.c [0]PETSC ERROR: #22 main() line 98 in /home/ed/petsc-maint/src/snes/examples/tutorials/ex54.c [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: out.ex55err Type: application/octet-stream Size: 310391 bytes Desc: not available URL: From elbueler at alaska.edu Fri Feb 27 23:34:05 2015 From: elbueler at alaska.edu (Ed Bueler) Date: Fri, 27 Feb 2015 22:34:05 -0700 Subject: [petsc-users] how to read netcdf into petsc Message-ID: Dear Petsc -- There is a "--download-netcdf" configure option that seems to work for me. 
The petscviewer.h (in petsc-maint) has a PetscViewerNetcdfOpen() declaration, but that seems to not be linkable, perhaps because it has no implementation of it: $ make mahaffy /home/ed/petsc-maint/linux-c-opt/bin/mpicc -o mahaffy.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/home/ed/petsc-maint/include -I/home/ed/petsc-maint/linux-c-opt/include `pwd`/mahaffy.c /home/ed/petsc-maint/linux-c-opt/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -o mahaffy mahaffy.o -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib -L/home/ed/petsc-maint/linux-c-opt/lib -lpetsc -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib -lflapack -lfblas -lX11 -lpthread -lnetcdf -lm -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.8 -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpichcxx -lstdc++ -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib -L/home/ed/petsc-maint/linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.8 -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -ldl -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl mahaffy.o: In function `ReadThicknessFromNetCDF': mahaffy.c:(.text+0x2cbb): undefined reference to `PetscViewerNetcdfOpen' collect2: error: ld returned 1 exit status make: [mahaffy] Error 1 (ignored) /bin/rm -f mahaffy.o In particular, src/sys/classes/viewer/impls/ does not have a netcdf/ case. So what is the recommended way to read a netcdf file into a petsc vec? Should I go via python and netcdf4-python or something, using bin/pythonscripts/PetscBinaryIO.py to write a petsc binary, and then read that? Dump to ascii and read that?--yes, this is a taunt. Thanks for help! Ed -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Feb 28 08:38:54 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Feb 2015 08:38:54 -0600 Subject: [petsc-users] how to read netcdf into petsc In-Reply-To: References: Message-ID: On Fri, Feb 27, 2015 at 11:34 PM, Ed Bueler wrote: > Dear Petsc -- > > There is a "--download-netcdf" configure option that seems to work for > me. 
The petscviewer.h (in petsc-maint) has a PetscViewerNetcdfOpen() > declaration, but that seems to not be linkable, perhaps because it has no > implementation of it: > > $ make mahaffy > /home/ed/petsc-maint/linux-c-opt/bin/mpicc -o mahaffy.o -c -fPIC -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > -I/home/ed/petsc-maint/include -I/home/ed/petsc-maint/linux-c-opt/include > `pwd`/mahaffy.c > /home/ed/petsc-maint/linux-c-opt/bin/mpicc -fPIC -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -O -o mahaffy mahaffy.o > -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib > -L/home/ed/petsc-maint/linux-c-opt/lib -lpetsc > -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib -lflapack -lfblas -lX11 > -lpthread -lnetcdf -lm -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.8 > -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -Wl,-rpath,/usr/lib/x86_64-linux-gnu > -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu > -L/lib/x86_64-linux-gnu -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath > -lm -lmpichcxx -lstdc++ -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib > -L/home/ed/petsc-maint/linux-c-opt/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.8 > -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -Wl,-rpath,/usr/lib/x86_64-linux-gnu > -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu > -L/lib/x86_64-linux-gnu -Wl,-rpath,/usr/lib/x86_64-linux-gnu > -L/usr/lib/x86_64-linux-gnu -ldl > -Wl,-rpath,/home/ed/petsc-maint/linux-c-opt/lib -lmpich -lopa -lmpl -lrt > -lpthread -lgcc_s -ldl > mahaffy.o: In function `ReadThicknessFromNetCDF': > mahaffy.c:(.text+0x2cbb): undefined reference to `PetscViewerNetcdfOpen' > collect2: error: ld returned 1 exit status > make: [mahaffy] Error 1 (ignored) > /bin/rm -f mahaffy.o > > In particular, src/sys/classes/viewer/impls/ does not have a netcdf/ case. > > So what is the recommended way to read a netcdf file into a petsc vec? > Should I go via python and netcdf4-python or something, using > bin/pythonscripts/PetscBinaryIO.py to write a petsc binary, and then read > that? > I think a converter to binary in Python sounds great. However, do you want the data in the absence of a mesh? > Dump to ascii and read that?--yes, this is a taunt. > I think we at one time had a NetCDF viewer which we abandoned because HDF5 turned out to be better (a damning judgment indeed). The --download-netcdf is there to support ExodusII, which we can load. Thanks, Matt > Thanks for help! > > Ed > > > -- > Ed Bueler > Dept of Math and Stat and Geophysical Institute > University of Alaska Fairbanks > Fairbanks, AK 99775-6660 > 301C Chapman and 410D Elvey > 907 474-7693 and 907 474-7199 (fax 907 474-5394) > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Feb 28 11:35:55 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 28 Feb 2015 11:35:55 -0600 Subject: [petsc-users] DMDA with dof=4, multigrid solver In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C62@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010E9C4F@XMAIL-MBX-BH1.AD.UCSD.EDU> <, <>> <7501CC2B7BBCC44A92ECEEC316170ECB010E9C62@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: > On Feb 27, 2015, at 7:25 PM, Sun, Hui wrote: > > Thank you Barry. 
Another question: I observe that in those ksp examples, whenever multigrid is used, DMDA is also used; besides, KSPSetComputeOperators and KSPSetComputeRHS are also used. > > Is it true that > 1) Only DMDA can use mg? No, this is not true > 2) We have to set up matrices and rhs using KSPSetComputeOperators and KSPSetComputeRHS? No, you do not have to > We cannot create a matrix and add it to KSP if we want to use mg? Yes, you can. There are many many variants of multigrid one can do with PETSc; we don't have the time to have examples of all the possibilities. More details > 1) Only DMDA can use mg? Because DMDA provides structured grids with easy interpolation between levels and it is easy for users to write Jacobians, we have many examples that use the DMDA. However, so long as YOU (or something) can provide interpolation between the multigrid levels you can use multigrid. For example PCGAMG uses algebraic multigrid to generate the interpolations. If you have your own interpolations you can provide them with PCMGSetInterpolation() (when you use PCMG with DMDA PETSc essentially handles those details automatically for you). > 2) We have to set up matrices and rhs using KSPSetComputeOperators and KSPSetComputeRHS? Normally with geometric multigrid one discretizes the operator on each level of the grid. Thus the user has to provide several matrices (one for each level). KSPSetComputeOperators() is ONE way that the user can provide them. You can also provide them by calling PCMGGetSmoother(pc,level,&ksp) and then calling KSPSetOperators(ksp,...) for each of the levels (KSPSetComputeOperators() essentially does the bookkeeping for you); a sketch of these calls follows at the end of this message. > We cannot create a matrix and add it to KSP if we want to use mg? As I said in 2, normally multigrid requires you to provide a discretized operator at each level. But with Galerkin coarse grids (which is what algebraic multigrid uses and can also be used by geometric multigrid) the user does not provide coarser grid operators; instead the code computes them automatically from the formula R*A*P, where R is the restriction operator used in multigrid and P is the interpolation operator (usually the transpose of R). If you are looking for a simple automatic multigrid then you want to use PCGAMG in PETSc; it does algebraic multigrid and doesn't require you to provide interpolations or coarser operators. However algebraic multigrid doesn't work for all problems, though it does work for many. Try it with -pc_type gamg Barry > > Best, > Hui > > ________________________________________ > From: Barry Smith [bsmith at mcs.anl.gov] > Sent: Friday, February 27, 2015 5:11 PM > To: Sun, Hui > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver > >> On Feb 27, 2015, at 6:36 PM, Sun, Hui wrote: >> >> I'm trying to work on 4 Poisson's equations defined on a DMDA grid, hence the parameter dof in DMDACreate3d should be 4, and I've set stencil width to be 4, and stencil type to be star. > > Use a stencil width of 1, not 4. The stencil width is defined in terms of dof. >> >> If I run the code with -pc_type ilu and -ksp_type gmres, it works alright. >> >> However, if I run with pc_type mg, it gives me an error saying that when it is doing MatSetValues, the argument is out of range, and there is a new nonzero at (60,64) in the matrix. However, that new nonzero is expected to be there: the row number 60 corresponds to i=15 and c=0 in x direction, and the column number 64 corresponds to i=16 and c=0 in x direction. So they are next to each other, and the star stencil with width 1 should include that. I have also checked the memory allocations, and I found no problem. >> >> So I'm wondering if there is any problem of using multigrid on a DMDA with dof greater than 1? > No, it handles dof > 1 fine. > Send your code. > Barry >> >> Thank you!
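For reference, a minimal two-level sketch of the calls described above; Afine, Acoarse, P, b, and x are placeholders for matrices and vectors the application has already assembled (they are not from this thread), and the interfaces follow the petsc-maint (3.5) API.

#include <petscksp.h>

/* Afine, Acoarse: already-assembled fine- and coarse-level operators (placeholders).
   P: already-assembled interpolation matrix from the coarse grid to the fine grid. */
PetscErrorCode SolveWithTwoLevelMG(Mat Afine, Mat Acoarse, Mat P, Vec b, Vec x)
{
  PetscErrorCode ierr;
  KSP            ksp, smoother;
  PC             pc;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, Afine, Afine);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
  ierr = PCMGSetLevels(pc, 2, NULL);CHKERRQ(ierr);
  /* interpolation from level 0 (coarse) up to level 1 (fine) */
  ierr = PCMGSetInterpolation(pc, 1, P);CHKERRQ(ierr);
  /* hand each level its operator through that level's smoother KSP */
  ierr = PCMGGetSmoother(pc, 1, &smoother);CHKERRQ(ierr);
  ierr = KSPSetOperators(smoother, Afine, Afine);CHKERRQ(ierr);
  ierr = PCMGGetSmoother(pc, 0, &smoother);CHKERRQ(ierr);
  ierr = KSPSetOperators(smoother, Acoarse, Acoarse);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With -pc_mg_galerkin the coarse operator can instead be formed as R*A*P automatically, and with -pc_type gamg no levels, interpolations, or coarse operators need to be supplied at all.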
From bsmith at mcs.anl.gov Sat Feb 28 11:40:05 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 28 Feb 2015 11:40:05 -0600 Subject: [petsc-users] do SNESVI objects support multigrid? In-Reply-To: References: Message-ID: <2F686956-82E5-43BC-AF3C-0E14D3952373@mcs.anl.gov> Ed, At one time it did. However it has suffered from neglect and bit rot over several years. Part of the reason is that we have had big plans to "do it much better" and hence neglected the old code; while we never "did it much better". So algorithmically what we implemented is an active set method for the VI with multigrid solving, and I feel algorithmically it is a pretty good method (for problems where multigrid works well without the VI). My implementation was a bit ad hoc and didn't mesh great with the rest of PETSc, which is why it hasn't been maintained properly. If you would like I could take a look at trying to get it working again as it did before. Barry > On Feb 27, 2015, at 10:28 PM, Ed Bueler wrote: > > Dear Petsc -- > > I am confused on whether -pc_type mg is supported for SNESVI solvers. The error messages sure aren't helping. > > I am using petsc-maint. > > First I looked at a couple of the simplest examples in src/snes/examples/tutorials, namely ex9 and ex54. I got mixed results (below), namely crashes with 'nonconforming object sizes' or 'zero pivot' errors in 3 of 4 cases. But I am not sure from the very limited docs what is supposed to work. > > The documented ex54 run itself (see line 12 of ex54.c) just hangs/stagnates for me: > > $ ./ex54 -pc_type mg -pc_mg_galerkin -T .01 -da_grid_x 65 -da_grid_y 65 -pc_mg_levels 4 -ksp_type fgmres -snes_atol 1.e-14 -mat_no_inode -snes_vi_monitor -snes_monitor > 0 SNES Function norm 6.177810506000e-03 > 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 > 0 SNES Function norm 6.177810506000e-03 > 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 > 0 SNES Function norm 6.177810506000e-03 > 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 > 0 SNES Function norm 6.177810506000e-03 > 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 > 0 SNES Function norm 6.177810506000e-03 > 0 SNES VI Function norm 6.177810506000e-03 Active lower constraints 0/0 upper constraints 0/0 Percent of total 0 Percent of bounded 0 > ...
> > The wordy documented run of ex55.c stagnates and then seg faults for me (see attachment): > > ~/petsc-maint/src/snes/examples/tutorials[maint*]$ ./ex55 -ksp_type fgmres -pc_type mg -mg_levels_ksp_type fgmres -mg_levels_pc_type fieldsplit -mg_levels_pc_fieldsplit_detect_saddle_point -mg_levels_pc_fieldsplit_type schur -mg_levels_pc_fieldsplit_factorization_type full -mg_levels_pc_fieldsplit_schur_precondition user -mg_levels_fieldsplit_1_ksp_type gmres -mg_levels_fieldsplit_1_pc_type none -mg_levels_fieldsplit_0_ksp_type preonly -mg_levels_fieldsplit_0_pc_type sor -mg_levels_fieldsplit_0_pc_sor_forward -snes_vi_monitor -ksp_monitor_true_residual -pc_mg_levels 5 -pc_mg_galerkin -mg_levels_ksp_monitor -mg_levels_fieldsplit_ksp_monitor -mg_levels_ksp_max_it 2 -mg_levels_fieldsplit_ksp_max_it 5 -snes_atol 1.e-11 -mg_coarse_ksp_type preonly -mg_coarse_pc_type svd -da_grid_x 65 -da_grid_y 65 -ksp_rtol 1.e-8 &> out.ex55err > > These examples all seem to perform o.k. with default (and non-multigrid) options. > > Thanks for help! > > Ed > > > errors from simple ex9, ex54 runs below: > > ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex9 -da_refine 2 -snes_type vinewtonrsls -pc_type mg > setup done: square side length = 4.000 > grid Mx,My = 41,41 > spacing dx,dy = 0.100,0.100 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Nonconforming object sizes > [0]PETSC ERROR: Mat mat,Vec x: global dim 1388 1681 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./ex9 on a linux-c-opt named bueler-leopard by ed Fri Feb 27 21:02:20 2015 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=0 > [0]PETSC ERROR: #1 MatMultTranspose() line 2232 in /home/ed/petsc-maint/src/mat/interface/matrix.c > [0]PETSC ERROR: #2 MatRestrict() line 7529 in /home/ed/petsc-maint/src/mat/interface/matrix.c > [0]PETSC ERROR: #3 DMRestrictHook_SNESVecSol() line 480 in /home/ed/petsc-maint/src/snes/interface/snes.c > [0]PETSC ERROR: #4 DMRestrict() line 2022 in /home/ed/petsc-maint/src/dm/interface/dm.c > [0]PETSC ERROR: #5 PCSetUp_MG() line 699 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #6 PCSetUp() line 902 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #7 KSPSetUp() line 306 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #8 SNESSolve_VINEWTONRSLS() line 506 in /home/ed/petsc-maint/src/snes/impls/vi/rs/virs.c > [0]PETSC ERROR: #9 SNESSolve() line 3743 in /home/ed/petsc-maint/src/snes/interface/snes.c > [0]PETSC ERROR: #10 main() line 122 in /home/ed/petsc-maint/src/snes/examples/tutorials/ex9.c > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 60) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 60) - process 0 > > > ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex9 -da_refine 2 -snes_type vinewtonssls -pc_type mg > setup done: square side length = 4.000 > grid Mx,My = 41,41 > spacing dx,dy = 0.100,0.100 > number of Newton iterations = 8; result = CONVERGED_FNORM_RELATIVE > errors: av |u-uexact| = 2.909e-04 > |u-uexact|_inf = 1.896e-03 > > > ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex54 
-snes_monitor -da_refine 2 -snes_type vinewtonssls -pc_type mg > 0 SNES Function norm 3.156635589354e-02 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Zero pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot > [0]PETSC ERROR: Zero pivot, row 0 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./ex54 on a linux-c-opt named bueler-leopard by ed Fri Feb 27 21:02:43 2015 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=0 > [0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_2() line 50 in /home/ed/petsc-maint/src/mat/impls/baij/seq/dgefa2.c > [0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2806 in /home/ed/petsc-maint/src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: #3 MatSOR() line 3643 in /home/ed/petsc-maint/src/mat/interface/matrix.c > [0]PETSC ERROR: #4 PCApply_SOR() line 35 in /home/ed/petsc-maint/src/ksp/pc/impls/sor/sor.c > [0]PETSC ERROR: #5 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #6 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h > [0]PETSC ERROR: #7 KSPInitialResidual() line 56 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: #8 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: #9 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 368 in /home/ed/petsc-maint/src/ksp/ksp/impls/cheby/cheby.c > [0]PETSC ERROR: #11 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #14 PCApply_MG() line 337 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #15 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #16 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h > [0]PETSC ERROR: #17 KSPInitialResidual() line 63 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: #18 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: #19 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #20 SNESSolve_VINEWTONSSLS() line 317 in /home/ed/petsc-maint/src/snes/impls/vi/ss/viss.c > [0]PETSC ERROR: #21 SNESSolve() line 3743 in /home/ed/petsc-maint/src/snes/interface/snes.c > [0]PETSC ERROR: #22 main() line 98 in /home/ed/petsc-maint/src/snes/examples/tutorials/ex54.c > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 > > > ~/petsc-maint/src/snes/examples/tutorials[maint]$ ./ex54 -snes_monitor -da_refine 2 -snes_type vinewtonrsls -pc_type mg > 0 SNES Function norm 3.160548858489e-02 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Zero 
pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot > [0]PETSC ERROR: Zero pivot, row 0 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown > [0]PETSC ERROR: ./ex54 on a linux-c-opt named bueler-leopard by ed Fri Feb 27 21:02:48 2015 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=0 > [0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_2() line 50 in /home/ed/petsc-maint/src/mat/impls/baij/seq/dgefa2.c > [0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2806 in /home/ed/petsc-maint/src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: #3 MatSOR() line 3643 in /home/ed/petsc-maint/src/mat/interface/matrix.c > [0]PETSC ERROR: #4 PCApply_SOR() line 35 in /home/ed/petsc-maint/src/ksp/pc/impls/sor/sor.c > [0]PETSC ERROR: #5 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #6 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h > [0]PETSC ERROR: #7 KSPInitialResidual() line 56 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: #8 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: #9 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 368 in /home/ed/petsc-maint/src/ksp/ksp/impls/cheby/cheby.c > [0]PETSC ERROR: #11 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #14 PCApply_MG() line 337 in /home/ed/petsc-maint/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #15 PCApply() line 440 in /home/ed/petsc-maint/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #16 KSP_PCApply() line 230 in /home/ed/petsc-maint/include/petsc-private/kspimpl.h > [0]PETSC ERROR: #17 KSPInitialResidual() line 63 in /home/ed/petsc-maint/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: #18 KSPSolve_GMRES() line 234 in /home/ed/petsc-maint/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: #19 KSPSolve() line 460 in /home/ed/petsc-maint/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #20 SNESSolve_VINEWTONRSLS() line 536 in /home/ed/petsc-maint/src/snes/impls/vi/rs/virs.c > [0]PETSC ERROR: #21 SNESSolve() line 3743 in /home/ed/petsc-maint/src/snes/interface/snes.c > [0]PETSC ERROR: #22 main() line 98 in /home/ed/petsc-maint/src/snes/examples/tutorials/ex54.c > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 71) - process 0 > > > -- > Ed Bueler > Dept of Math and Stat and Geophysical Institute > University of Alaska Fairbanks > Fairbanks, AK 99775-6660 > 301C Chapman and 410D Elvey > 907 474-7693 and 907 474-7199 (fax 907 474-5394) > From gideon.simpson at gmail.com Sat Feb 28 13:35:34 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sat, 28 Feb 2015 14:35:34 -0500 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? 
Message-ID: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> I?m having some trouble understanding what the difference between these two routines are, though I am finding that there certainly is a difference. I have the following monte carlo problem. I am generating n_sample paths each of length n_points, and storing them in a 1D DA: DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, NULL, &da); DMCreateGlobalVector(da,&paths_vec); When I then go to access them, PetscScalar **u_array; I find that: DMDAVecGetArrayDOF(da, paths_vec, &u_array); works as exepected, in that u_array[i] is a pointer to the first index of the i-th sample path, but if I call: DMDAVecGetArray(da, paths_vec, &u_array); u_array[i] is something else, and my attempts to manipulate it result in segmentation faults, even though the code compiles and builds. -gideon From knepley at gmail.com Sat Feb 28 13:42:08 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Feb 2015 13:42:08 -0600 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? In-Reply-To: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> References: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> Message-ID: On Sat, Feb 28, 2015 at 1:35 PM, Gideon Simpson wrote: > I?m having some trouble understanding what the difference between these > two routines are, though I am finding that there certainly is a > difference. I have the following monte carlo problem. I am generating > n_sample paths each of length n_points, and storing them in a 1D DA: > > DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, > NULL, &da); > DMCreateGlobalVector(da,&paths_vec); > > When I then go to access them, > > PetscScalar **u_array; > > I find that: > > DMDAVecGetArrayDOF(da, paths_vec, &u_array); > > works as exepected, in that u_array[i] is a pointer to the first index of > the i-th sample path, but if I call: > > DMDAVecGetArray(da, paths_vec, &u_array); > > u_array[i] is something else, and my attempts to manipulate it result in > segmentation faults, even though the code compiles and builds. Suppose that you have 4 PetscScalar values at each vertex of the 1D DMDA. If you use PetscScalar **u; DMDAVecGetArrayDOF(da, uVec, &u); u[i][2] /* refers to 3rd scalar on vertex i */ On the other hand you could use typedef struct { PetscScalar a, b, c, d; } Vals; Vals *u; DMDAVecGetArray(da, uVec, &u); u[i].c /* refers to the same value as above */ Basically the DOF version gives you an extra level of indirection for the components. Thanks, Matt > > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Sat Feb 28 13:50:01 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sat, 28 Feb 2015 14:50:01 -0500 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? In-Reply-To: References: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> Message-ID: <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> Supposing that I do not have a priori information as to the number of degrees of freedom per DA vertex; I want the number of mesh points along each sample path to be variable. Hence, I can?t really use a statically defined structure as you suggest. In that case, are the DOF routines the only option? 
-gideon > On Feb 28, 2015, at 2:42 PM, Matthew Knepley wrote: > > On Sat, Feb 28, 2015 at 1:35 PM, Gideon Simpson > wrote: > I?m having some trouble understanding what the difference between these two routines are, though I am finding that there certainly is a difference. I have the following monte carlo problem. I am generating n_sample paths each of length n_points, and storing them in a 1D DA: > > DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, NULL, &da); > DMCreateGlobalVector(da,&paths_vec); > > When I then go to access them, > > PetscScalar **u_array; > > I find that: > > DMDAVecGetArrayDOF(da, paths_vec, &u_array); > > works as exepected, in that u_array[i] is a pointer to the first index of the i-th sample path, but if I call: > > DMDAVecGetArray(da, paths_vec, &u_array); > > u_array[i] is something else, and my attempts to manipulate it result in segmentation faults, even though the code compiles and builds. > > Suppose that you have 4 PetscScalar values at each vertex of the 1D DMDA. If you use > > PetscScalar **u; > > DMDAVecGetArrayDOF(da, uVec, &u); > > u[i][2] /* refers to 3rd scalar on vertex i */ > > On the other hand you could use > > typedef struct { > PetscScalar a, b, c, d; > } Vals; > > Vals *u; > > DMDAVecGetArray(da, uVec, &u); > > u[i].c /* refers to the same value as above */ > > Basically the DOF version gives you an extra level of indirection for the components. > > Thanks, > > Matt > > > -gideon > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Feb 28 13:54:14 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Feb 2015 13:54:14 -0600 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? In-Reply-To: <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> References: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> Message-ID: On Sat, Feb 28, 2015 at 1:50 PM, Gideon Simpson wrote: > Supposing that I do not have a priori information as to the number of > degrees of freedom per DA vertex; I want the number of mesh points along > each sample path to be variable. Hence, I can?t really use a statically > defined structure as you suggest. In that case, are the DOF routines the > only option? > DMDA just plain does not support that. It is close, but everything is not hooked up right. If you have variable numbers of unknowns per site, you can 1) Fix DMDA with our help (demands programming) 2) Use DMPlex (demands reading and maybe looking at code) Thanks, Matt > -gideon > > On Feb 28, 2015, at 2:42 PM, Matthew Knepley wrote: > > On Sat, Feb 28, 2015 at 1:35 PM, Gideon Simpson > wrote: > >> I?m having some trouble understanding what the difference between these >> two routines are, though I am finding that there certainly is a >> difference. I have the following monte carlo problem. 
I am generating >> n_sample paths each of length n_points, and storing them in a 1D DA: >> >> DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, >> NULL, &da); >> DMCreateGlobalVector(da,&paths_vec); >> >> When I then go to access them, >> >> PetscScalar **u_array; >> >> I find that: >> >> DMDAVecGetArrayDOF(da, paths_vec, &u_array); >> >> works as exepected, in that u_array[i] is a pointer to the first index of >> the i-th sample path, but if I call: >> >> DMDAVecGetArray(da, paths_vec, &u_array); >> >> u_array[i] is something else, and my attempts to manipulate it result in >> segmentation faults, even though the code compiles and builds. > > > Suppose that you have 4 PetscScalar values at each vertex of the 1D DMDA. > If you use > > PetscScalar **u; > > DMDAVecGetArrayDOF(da, uVec, &u); > > u[i][2] /* refers to 3rd scalar on vertex i */ > > On the other hand you could use > > typedef struct { > PetscScalar a, b, c, d; > } Vals; > > Vals *u; > > DMDAVecGetArray(da, uVec, &u); > > u[i].c /* refers to the same value as above */ > > Basically the DOF version gives you an extra level of indirection for the > components. > > Thanks, > > Matt > > >> >> -gideon >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gideon.simpson at gmail.com Sat Feb 28 13:56:54 2015 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Sat, 28 Feb 2015 14:56:54 -0500 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? In-Reply-To: References: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> Message-ID: <0035D4B4-9668-4706-95F8-FF1480F0C074@gmail.com> Wait, then why is my code working when I use the DOF routines? The number of degrees of freedom per DA vertex is constant across the DA, but that value is not set until run time. In other words, what?s wrong with the code: PetscOptionsGetInt(NULL,"-n_samples",&n_samples,NULL); PetscOptionsGetInt(NULL,"-n_points",&n_points,NULL); DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, NULL, &da); -gideon > On Feb 28, 2015, at 2:54 PM, Matthew Knepley wrote: > > On Sat, Feb 28, 2015 at 1:50 PM, Gideon Simpson > wrote: > Supposing that I do not have a priori information as to the number of degrees of freedom per DA vertex; I want the number of mesh points along each sample path to be variable. Hence, I can?t really use a statically defined structure as you suggest. In that case, are the DOF routines the only option? > > DMDA just plain does not support that. It is close, but everything is not hooked up right. If you have > variable numbers of unknowns per site, you can > > 1) Fix DMDA with our help (demands programming) > > 2) Use DMPlex (demands reading and maybe looking at code) > > Thanks, > > Matt > > -gideon > >> On Feb 28, 2015, at 2:42 PM, Matthew Knepley > wrote: >> >> On Sat, Feb 28, 2015 at 1:35 PM, Gideon Simpson > wrote: >> I?m having some trouble understanding what the difference between these two routines are, though I am finding that there certainly is a difference. I have the following monte carlo problem. 
I am generating n_sample paths each of length n_points, and storing them in a 1D DA: >> >> DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, NULL, &da); >> DMCreateGlobalVector(da,&paths_vec); >> >> When I then go to access them, >> >> PetscScalar **u_array; >> >> I find that: >> >> DMDAVecGetArrayDOF(da, paths_vec, &u_array); >> >> works as exepected, in that u_array[i] is a pointer to the first index of the i-th sample path, but if I call: >> >> DMDAVecGetArray(da, paths_vec, &u_array); >> >> u_array[i] is something else, and my attempts to manipulate it result in segmentation faults, even though the code compiles and builds. >> >> Suppose that you have 4 PetscScalar values at each vertex of the 1D DMDA. If you use >> >> PetscScalar **u; >> >> DMDAVecGetArrayDOF(da, uVec, &u); >> >> u[i][2] /* refers to 3rd scalar on vertex i */ >> >> On the other hand you could use >> >> typedef struct { >> PetscScalar a, b, c, d; >> } Vals; >> >> Vals *u; >> >> DMDAVecGetArray(da, uVec, &u); >> >> u[i].c /* refers to the same value as above */ >> >> Basically the DOF version gives you an extra level of indirection for the components. >> >> Thanks, >> >> Matt >> >> >> -gideon >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Feb 28 13:59:22 2015 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Feb 2015 13:59:22 -0600 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? In-Reply-To: <0035D4B4-9668-4706-95F8-FF1480F0C074@gmail.com> References: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> <0035D4B4-9668-4706-95F8-FF1480F0C074@gmail.com> Message-ID: On Sat, Feb 28, 2015 at 1:56 PM, Gideon Simpson wrote: > Wait, then why is my code working when I use the DOF routines? The number > of degrees of freedom per DA vertex is constant across the DA, but that > value is not set until run time. In other words, what?s wrong with the > code: > > PetscOptionsGetInt(NULL,"-n_samples",&n_samples,NULL); > PetscOptionsGetInt(NULL,"-n_points",&n_points,NULL); > > DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, > NULL, &da); > Nothing is wrong with that. I meant that you must have the smae number on every vertex. Matt > -gideon > > On Feb 28, 2015, at 2:54 PM, Matthew Knepley wrote: > > On Sat, Feb 28, 2015 at 1:50 PM, Gideon Simpson > wrote: > >> Supposing that I do not have a priori information as to the number of >> degrees of freedom per DA vertex; I want the number of mesh points along >> each sample path to be variable. Hence, I can?t really use a statically >> defined structure as you suggest. In that case, are the DOF routines the >> only option? >> > > DMDA just plain does not support that. It is close, but everything is not > hooked up right. 
If you have > variable numbers of unknowns per site, you can > > 1) Fix DMDA with our help (demands programming) > > 2) Use DMPlex (demands reading and maybe looking at code) > > Thanks, > > Matt > > >> -gideon >> >> On Feb 28, 2015, at 2:42 PM, Matthew Knepley wrote: >> >> On Sat, Feb 28, 2015 at 1:35 PM, Gideon Simpson > > wrote: >> >>> I?m having some trouble understanding what the difference between these >>> two routines are, though I am finding that there certainly is a >>> difference. I have the following monte carlo problem. I am generating >>> n_sample paths each of length n_points, and storing them in a 1D DA: >>> >>> DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, >>> NULL, &da); >>> DMCreateGlobalVector(da,&paths_vec); >>> >>> When I then go to access them, >>> >>> PetscScalar **u_array; >>> >>> I find that: >>> >>> DMDAVecGetArrayDOF(da, paths_vec, &u_array); >>> >>> works as exepected, in that u_array[i] is a pointer to the first index >>> of the i-th sample path, but if I call: >>> >>> DMDAVecGetArray(da, paths_vec, &u_array); >>> >>> u_array[i] is something else, and my attempts to manipulate it result in >>> segmentation faults, even though the code compiles and builds. >> >> >> Suppose that you have 4 PetscScalar values at each vertex of the 1D DMDA. >> If you use >> >> PetscScalar **u; >> >> DMDAVecGetArrayDOF(da, uVec, &u); >> >> u[i][2] /* refers to 3rd scalar on vertex i */ >> >> On the other hand you could use >> >> typedef struct { >> PetscScalar a, b, c, d; >> } Vals; >> >> Vals *u; >> >> DMDAVecGetArray(da, uVec, &u); >> >> u[i].c /* refers to the same value as above */ >> >> Basically the DOF version gives you an extra level of indirection for the >> components. >> >> Thanks, >> >> Matt >> >> >>> >>> -gideon >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Feb 28 14:47:58 2015 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 28 Feb 2015 14:47:58 -0600 Subject: [petsc-users] difference between DMDAVecGetArrayDOF and DMDAVecGetArray? In-Reply-To: <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> References: <3696857D-7A04-4D5C-9927-949828343D08@gmail.com> <9EAAB6BB-7AC2-49C2-BAAB-97325F4BAC80@gmail.com> Message-ID: <413C814E-3F23-4C75-BBD0-8A4BB74CB2C6@mcs.anl.gov> > On Feb 28, 2015, at 1:50 PM, Gideon Simpson wrote: > > Supposing that I do not have a priori information as to the number of degrees of freedom per DA vertex; I want the number of mesh points along each sample path to be variable. Hence, I can?t really use a statically defined structure as you suggest. In that case, are the DOF routines the only option? Yes. Because the DMDAVecGetArray() requires you make a struct in the final "coordinate" of the correct size, which, of course, since it is a struct must be defined at compile time. 
In fact DMDAVecGetArrayDOF() was written for your case and there is no reason or logic to use DMDAVecGetArray() then. Barry > > -gideon > >> On Feb 28, 2015, at 2:42 PM, Matthew Knepley wrote: >> >> On Sat, Feb 28, 2015 at 1:35 PM, Gideon Simpson wrote: >> I'm having some trouble understanding what the difference between these two routines are, though I am finding that there certainly is a difference. I have the following monte carlo problem. I am generating n_sample paths each of length n_points, and storing them in a 1D DA: >> >> DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE, n_samples, n_points, 0, NULL, &da); >> DMCreateGlobalVector(da,&paths_vec); >> >> When I then go to access them, >> >> PetscScalar **u_array; >> >> I find that: >> >> DMDAVecGetArrayDOF(da, paths_vec, &u_array); >> >> works as expected, in that u_array[i] is a pointer to the first index of the i-th sample path, but if I call: >> >> DMDAVecGetArray(da, paths_vec, &u_array); >> >> u_array[i] is something else, and my attempts to manipulate it result in segmentation faults, even though the code compiles and builds. >> >> Suppose that you have 4 PetscScalar values at each vertex of the 1D DMDA. If you use >> >> PetscScalar **u; >> >> DMDAVecGetArrayDOF(da, uVec, &u); >> >> u[i][2] /* refers to 3rd scalar on vertex i */ >> >> On the other hand you could use >> >> typedef struct { >> PetscScalar a, b, c, d; >> } Vals; >> >> Vals *u; >> >> DMDAVecGetArray(da, uVec, &u); >> >> u[i].c /* refers to the same value as above */ >> >> Basically the DOF version gives you an extra level of indirection for the components. >> >> Thanks, >> >> Matt >> >> >> -gideon >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >
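A minimal, self-contained sketch of the runtime-dof pattern settled on in this thread: the number of points per sample path is read from the options database, so DMDAVecGetArrayDOF() is the natural access routine. The option names mirror the snippet above; the defaults and the zero-fill loop are illustrative only.

#include <petscdmda.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  DM             da;
  Vec            paths;
  PetscScalar    **u;
  PetscInt       n_samples = 10, n_points = 100, i, j, xs, xm;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  ierr = PetscOptionsGetInt(NULL, "-n_samples", &n_samples, NULL);CHKERRQ(ierr);
  ierr = PetscOptionsGetInt(NULL, "-n_points", &n_points, NULL);CHKERRQ(ierr);

  /* one DMDA vertex per sample path, n_points degrees of freedom per vertex,
     stencil width 0 as in the snippet above (no ghost communication needed) */
  ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, n_samples, n_points, 0, NULL, &da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &paths);CHKERRQ(ierr);

  ierr = DMDAGetCorners(da, &xs, NULL, NULL, &xm, NULL, NULL);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayDOF(da, paths, &u);CHKERRQ(ierr);
  for (i = xs; i < xs + xm; i++) {
    for (j = 0; j < n_points; j++) u[i][j] = 0.0; /* u[i][j]: point j of locally owned sample path i */
  }
  ierr = DMDAVecRestoreArrayDOF(da, paths, &u);CHKERRQ(ierr);

  ierr = VecDestroy(&paths);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}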