Solving a linear TS problem on 32 processors
mx: 1024, my: 1024, energy (in eV): 1.500000e+04
Assembling matrix
Finished assembling matrix
0 TS dt 3.0808e-06 time 0.
  0 KSP Residual norm 1.048050872309e+03
  1 KSP Residual norm 3.053683369041e+02
  2 KSP Residual norm 1.070837024091e+02
  3 KSP Residual norm 7.488720501573e+01
  4 KSP Residual norm 3.129639813638e+01
  5 KSP Residual norm 1.543108655448e+01
  6 KSP Residual norm 6.845219704638e+00
  7 KSP Residual norm 3.766815883639e+00
  8 KSP Residual norm 2.162367773994e+00
  9 KSP Residual norm 1.107909866344e+00
 10 KSP Residual norm 6.276464151977e-01
 11 KSP Residual norm 3.125340131748e-01
 12 KSP Residual norm 1.746021293696e-01
 13 KSP Residual norm 9.549148490918e-02
 14 KSP Residual norm 5.175306617765e-02
 15 KSP Residual norm 2.829254795593e-02
 16 KSP Residual norm 1.497597627925e-02
 17 KSP Residual norm 8.346613021051e-03
KSP Object: 32 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 32 MPI processes
  type: hmg
    Reuse interpolation: true
    Use subspace coarsening: false
    Coarsening component: 0
    Use MatMAIJ: true
    Inner PC type: hypre
  type is MULTIPLICATIVE, levels=1 cycles=v
    Cycles per PCApply=1
    Not using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_levels_0_) 32 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.146873, max = 1.61561
        eigenvalues estimate via gmres min 0.137844, max 1.46873
        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_0_esteig_) 32 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_0_) 32 MPI processes
      type: sor
        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=2097152, cols=2097152, bs=2
        total: nonzeros=20971520, allocated nonzeros=20971520
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 32768 nodes, limit used is 5
  linear system matrix followed by preconditioner matrix:
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=2097152, cols=2097152, bs=2
    total: nonzeros=20971520, allocated nonzeros=20971520
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 32768 nodes, limit used is 5
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=2097152, cols=2097152, bs=2
    total: nonzeros=20971520, allocated nonzeros=20971520
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 32768 nodes, limit used is 5
1 TS dt 3.0808e-06 time 3.0808e-06
  0 KSP Residual norm 9.673177049893e+02
  1 KSP Residual norm 2.490435861925e+02
  2 KSP Residual norm 1.013163112023e+02
  3 KSP Residual norm 7.059245491251e+01
  4 KSP Residual norm 2.870549562335e+01
  5 KSP Residual norm 1.419154260364e+01
  6 KSP Residual norm 6.416012292339e+00
  7 KSP Residual norm 3.583253921262e+00
  8 KSP Residual norm 2.059057046272e+00
  9 KSP Residual norm 1.042106166162e+00
 10 KSP Residual norm 5.963252198906e-01
 11 KSP Residual norm 2.969386163202e-01
 12 KSP Residual norm 1.668432240019e-01
 13 KSP Residual norm 9.109662190104e-02
 14 KSP Residual norm 4.879935259366e-02
 15 KSP Residual norm 2.666218849453e-02
 16 KSP Residual norm 1.406813006120e-02
 17 KSP Residual norm 7.884539882854e-03
KSP Object: 32 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 32 MPI processes
  type: hmg
    Reuse interpolation: true
    Use subspace coarsening: false
    Coarsening component: 0
    Use MatMAIJ: true
    Inner PC type: hypre
  type is MULTIPLICATIVE, levels=1 cycles=v
    Cycles per PCApply=1
    Using Galerkin computed coarse grid matrices for pmat
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_levels_0_) 32 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0.146873, max = 1.61561
        eigenvalues estimate via gmres min 0.137844, max 1.46873
        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_0_esteig_) 32 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_0_) 32 MPI processes
      type: sor
        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=2097152, cols=2097152, bs=2
        total: nonzeros=20971520, allocated nonzeros=20971520
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 32768 nodes, limit used is 5
  linear system matrix followed by preconditioner matrix:
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=2097152, cols=2097152, bs=2
    total: nonzeros=20971520, allocated nonzeros=20971520
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 32768 nodes, limit used is 5
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=2097152, cols=2097152, bs=2
    total: nonzeros=20971520, allocated nonzeros=20971520
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 32768 nodes, limit used is 5
2 TS dt 3.0808e-06 time 6.1616e-06
  0 KSP Residual norm 9.165696803707e+02
  1 KSP Residual norm 2.550771235484e+02
  2 KSP Residual norm 1.014432901308e+02
  3 KSP Residual norm 6.995792660028e+01
  4 KSP Residual norm 2.933296027598e+01
  5 KSP Residual norm 1.448780162568e+01
  6 KSP Residual norm 6.545287171659e+00
  7 KSP Residual norm 3.625715678013e+00
  8 KSP Residual norm 2.084034507968e+00
  9 KSP Residual norm 1.056199745305e+00
 10 KSP Residual norm 6.023419467834e-01
 11 KSP Residual norm 2.998173726235e-01
 12 KSP Residual norm 1.673632271906e-01
 13 KSP Residual norm 9.153439784601e-02
 14 KSP Residual norm 4.915468986689e-02
 15 KSP Residual norm 2.675303391617e-02
 16 KSP Residual norm 1.416526661311e-02
 17 KSP Residual norm 7.920048633153e-03
3 TS dt 3.0808e-06 time 9.2424e-06
  0 KSP Residual norm 8.859098334199e+02
  1 KSP Residual norm 2.576173956829e+02
  2 KSP Residual norm 1.029860906726e+02
  3 KSP Residual norm 7.188388988285e+01
  4 KSP Residual norm 2.890748911885e+01
  5 KSP Residual norm 1.430563999426e+01
  6 KSP Residual norm 6.580515650656e+00
  7 KSP Residual norm 3.593859465854e+00
  8 KSP Residual norm 2.082535410333e+00
  9 KSP Residual norm 1.048091174254e+00
 10 KSP Residual norm 5.968738178302e-01
 11 KSP Residual norm 2.949321000463e-01
 12 KSP Residual norm 1.637216124775e-01
 13 KSP Residual norm 8.989188757429e-02
 14 KSP Residual norm 4.825741950426e-02
 15 KSP Residual norm 2.640876194858e-02
 16 KSP Residual norm 1.391968873085e-02
 17 KSP Residual norm 7.779405889485e-03
4 TS dt 3.0808e-06 time 1.23232e-05
  0 KSP Residual norm 8.355255885533e+02
  1 KSP Residual norm 2.296841491314e+02
  2 KSP Residual norm 1.009452895747e+02
  3 KSP Residual norm 6.822822233245e+01
  4 KSP Residual norm 2.844881414416e+01
  5 KSP Residual norm 1.426758157403e+01
  6 KSP Residual norm 6.391713833262e+00
  7 KSP Residual norm 3.487480836044e+00
  8 KSP Residual norm 2.035530307449e+00
  9 KSP Residual norm 1.022308296234e+00
 10 KSP Residual norm 5.856797828155e-01
 11 KSP Residual norm 2.864888611122e-01
 12 KSP Residual norm 1.597184189811e-01
 13 KSP Residual norm 8.740761615654e-02
 14 KSP Residual norm 4.706794311031e-02
 15 KSP Residual norm 2.601019479793e-02
 16 KSP Residual norm 1.369030130513e-02
 17 KSP Residual norm 7.643842818641e-03
5 TS dt 3.0808e-06 time 1.5404e-05
  0 KSP Residual norm 8.038469386521e+02
  1 KSP Residual norm 2.340557448530e+02
  2 KSP Residual norm 1.003011511737e+02
  3 KSP Residual norm 6.798411623727e+01
  4 KSP Residual norm 2.837032928351e+01
  5 KSP Residual norm 1.401212489934e+01
  6 KSP Residual norm 6.375348109716e+00
  7 KSP Residual norm 3.506321634910e+00
  8 KSP Residual norm 2.028004402918e+00
  9 KSP Residual norm 1.020899105196e+00
 10 KSP Residual norm 5.872374589092e-01
 11 KSP Residual norm 2.906152018311e-01
 12 KSP Residual norm 1.618423835229e-01
 13 KSP Residual norm 8.886452526121e-02
 14 KSP Residual norm 4.780123641782e-02
 15 KSP Residual norm 2.637315935128e-02
 16 KSP Residual norm 1.395655956813e-02
 17 KSP Residual norm 7.787230305169e-03
6 TS dt 3.0808e-06 time 1.84848e-05
  0 KSP Residual norm 7.807631043415e+02
  1 KSP Residual norm 2.335097445420e+02
  2 KSP Residual norm 9.997112165317e+01
  3 KSP Residual norm 6.857979129203e+01
  4 KSP Residual norm 2.834315222520e+01
  5 KSP Residual norm 1.400099621134e+01
  6 KSP Residual norm 6.343390286148e+00
  7 KSP Residual norm 3.507308109310e+00
  8 KSP Residual norm 2.048089136060e+00
  9 KSP Residual norm 1.016387310549e+00
 10 KSP Residual norm 5.826980606834e-01
 11 KSP Residual norm 2.908782512084e-01
 12 KSP Residual norm 1.629083816009e-01
 13 KSP Residual norm 8.951508682783e-02
 14 KSP Residual norm 4.765231456235e-02
 15 KSP Residual norm 2.630929306167e-02
 16 KSP Residual norm 1.386011512619e-02
 17 KSP Residual norm 7.762471837978e-03
7 TS dt 3.0808e-06 time 2.15656e-05
  0 KSP Residual norm 7.420352637979e+02
  1 KSP Residual norm 2.149781074572e+02
  2 KSP Residual norm 9.831716648421e+01
  3 KSP Residual norm 6.571495961582e+01
  4 KSP Residual norm 2.775490874183e+01
  5 KSP Residual norm 1.365376278607e+01
  6 KSP Residual norm 6.221112948640e+00
  7 KSP Residual norm 3.457981848947e+00
  8 KSP Residual norm 1.986687452685e+00
  9 KSP Residual norm 9.939159894919e-01
 10 KSP Residual norm 5.733876968611e-01
 11 KSP Residual norm 2.801110696375e-01
 12 KSP Residual norm 1.580378387028e-01
 13 KSP Residual norm 8.669862031342e-02
 14 KSP Residual norm 4.605352078921e-02
 15 KSP Residual norm 2.541872886972e-02
 16 KSP Residual norm 1.330638824589e-02
 17 KSP Residual norm 7.499316123171e-03
 18 KSP Residual norm 4.101895621727e-03
8 TS dt 3.0808e-06 time 2.46464e-05
  0 KSP Residual norm 7.162180371519e+02
  1 KSP Residual norm 2.181870633699e+02
  2 KSP Residual norm 9.788814236144e+01
  3 KSP Residual norm 6.569114364764e+01
  4 KSP Residual norm 2.749980350432e+01
  5 KSP Residual norm 1.335800719229e+01
  6 KSP Residual norm 6.176206479365e+00
  7 KSP Residual norm 3.389767445124e+00
  8 KSP Residual norm 1.956502835976e+00
  9 KSP Residual norm 9.860518033604e-01
 10 KSP Residual norm 5.616547561274e-01
 11 KSP Residual norm 2.740277523272e-01
 12 KSP Residual norm 1.536066819759e-01
 13 KSP Residual norm 8.464159456393e-02
 14 KSP Residual norm 4.517070049611e-02
 15 KSP Residual norm 2.479948829409e-02
 16 KSP Residual norm 1.303061431214e-02
 17 KSP Residual norm 7.311700032754e-03
 18 KSP Residual norm 4.020133735923e-03
9 TS dt 3.0808e-06 time 2.77272e-05
  0 KSP Residual norm 6.921552038562e+02
  1 KSP Residual norm 2.139826470571e+02
  2 KSP Residual norm 9.666120029397e+01
  3 KSP Residual norm 6.527281863091e+01
  4 KSP Residual norm 2.704121598750e+01
  5 KSP Residual norm 1.321684543597e+01
  6 KSP Residual norm 6.091998428522e+00
  7 KSP Residual norm 3.354764302272e+00
  8 KSP Residual norm 1.929955686919e+00
  9 KSP Residual norm 9.711680738487e-01
 10 KSP Residual norm 5.582364613146e-01
 11 KSP Residual norm 2.693617009135e-01
 12 KSP Residual norm 1.509751115548e-01
 13 KSP Residual norm 8.331375127394e-02
 14 KSP Residual norm 4.437566345853e-02
 15 KSP Residual norm 2.457997914852e-02
 16 KSP Residual norm 1.288024745867e-02
 17 KSP Residual norm 7.216519197611e-03
 18 KSP Residual norm 3.955666621669e-03
10 TS dt 3.0808e-06 time 3.0808e-05
************************************************************************************************************************
***        WIDEN YOUR WINDOW TO 120 CHARACTERS.
***             Use 'enscript -r -fCourier9' to print this document                                                  ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex_k1 on a named xrmlite with 32 processors, by sajid Mon May 4 09:14:14 2020
Using Petsc Release Version 3.13.0, Mar 29, 2020

                         Max       Max/Min     Avg       Total
Time (sec):           1.600e+01     1.000   1.600e+01
Objects:              2.930e+02     1.000   2.930e+02
Flop:                 1.986e+09     1.000   1.986e+09  6.355e+10
Flop/sec:             1.241e+08     1.000   1.241e+08  3.971e+09
MPI Messages:         2.096e+03     1.000   2.096e+03  6.707e+04
MPI Message Lengths:  6.139e+06     1.000   2.929e+03  1.964e+08
MPI Reductions:       8.690e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 1.6004e+01 100.0%  6.3555e+10 100.0%  6.707e+04 100.0%  2.929e+03      100.0%  8.620e+02  99.2%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         48 1.0 6.1337e-02 2.3 0.00e+00 0.0 3.1e+03 8.0e+00 4.8e+01  0  0  5  0  6   0  0  5  0  6     0
BuildTwoSidedF        24 1.0 5.8151e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01  0  0  0  0  3   0  0  0  0  3     0
MatMult              476 1.0 2.4188e+00 1.0 5.93e+08 1.0 6.1e+04 3.1e+03 0.0e+00 15 30 91 95  0  15 30 91 95  0  7841
MatSOR               476 1.0 4.9206e+00 1.1 6.21e+08 1.0 0.0e+00 0.0e+00 0.0e+00 30 31  0  0  0  30 31  0  0  0  4038
MatConvert             3 1.0 1.2147e-01 1.0 0.00e+00 0.0 5.1e+02 1.5e+03 7.0e+00  1  0  1  0  1   1  0  1  0  1     0
MatAssemblyBegin      43 1.0 5.8541e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.3e+01  0  0  0  0  3   0  0  0  0  3     0
MatAssemblyEnd        43 1.0 3.4865e-01 1.0 0.00e+00 0.0 5.4e+03 1.5e+03 1.1e+02  2  0  8  4 13   2  0  8  4 13     0
MatGetRowIJ            2 1.0 8.3447e-06 7.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries        20 1.0 4.2331e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView               30 1.0 7.6773e-03 4.2 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+01  0  0  0  0  3   0  0  0  0  3     0
MatAXPY               20 1.0 3.7882e+00 1.0 1.31e+07 1.0 5.1e+03 1.5e+03 1.6e+02 24  1  8  4 18  24  1  8  4 19   111
DMCreateMat            1 1.0 1.5515e-01 1.0 0.00e+00 0.0 2.6e+02 1.5e+03 8.0e+00  1  0  0  0  1   1  0  0  0  1     0
SFSetGraph            24 1.0 6.4063e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp               24 1.0 1.1436e-02 1.2 0.00e+00 0.0 6.1e+03 1.5e+03 2.4e+01  0  0  9  5  3   0  0  9  5  3     0
SFBcastOpBegin       476 1.0 2.4664e-02 1.6 0.00e+00 0.0 6.1e+04 3.1e+03 0.0e+00  0  0 91 95  0   0  0 91 95  0     0
SFBcastOpEnd         476 1.0 1.8093e-02 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFPack               476 1.0 1.2315e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFUnpack             476 1.0 4.0889e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecView                1 1.0 2.4840e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
VecMDot              273 1.0 8.5971e-01 1.6 2.80e+08 1.0 0.0e+00 0.0e+00 2.7e+02  5 14  0  0 31   5 14  0  0 32 10411
VecNorm              293 1.0 7.9286e-02 1.7 3.84e+07 1.0 0.0e+00 0.0e+00 2.9e+02  0  2  0  0 34   0  2  0  0 34 15500
VecScale             303 1.0 2.5187e-02 1.1 1.99e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 25229
VecCopy              416 1.0 1.3553e-01 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSet               439 1.0 5.3005e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               40 1.0 1.0852e-02 1.4 5.24e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 15460
VecAYPX              386 1.0 1.7088e-01 1.1 3.73e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  2  0  0  0  6983
VecAXPBYCZ           193 1.0 1.2874e-01 1.1 6.19e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  3  0  0  0   1  3  0  0  0 15394
VecMAXPY             293 1.0 6.4237e-01 1.0 3.15e+08 1.0 0.0e+00 0.0e+00 0.0e+00  4 16  0  0  0   4 16  0  0  0 15716
VecAssemblyBegin       1 1.0 8.7500e-05 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         1 1.0 3.8147e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecLoad                1 1.0 2.5969e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin      476 1.0 2.6758e-02 1.6 0.00e+00 0.0 6.1e+04 3.1e+03 0.0e+00  0  0 91 95  0   0  0 91 95  0     0
VecScatterEnd        476 1.0 1.9709e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         293 1.0 9.8557e-02 1.5 5.76e+07 1.0 0.0e+00 0.0e+00 2.9e+02  1  3  0  0 34   1  3  0  0 34 18704
TSStep                10 1.0 1.5309e+01 1.0 1.99e+09 1.0 6.6e+04 3.0e+03 8.2e+02 96 100 98 99 94  96 100 98 99 95  4151
TSFunctionEval        20 1.0 1.1894e-01 1.0 2.62e+07 1.0 2.6e+03 3.1e+03 0.0e+00  1  1  4  4  0   1  1  4  4  0  7053
TSJacobianEval        30 1.0 4.0875e+00 1.0 1.44e+07 1.0 5.1e+03 1.5e+03 1.6e+02 26  1  8  4 18  26  1  8  4 19   113
SNESSolve             10 1.0 1.5232e+01 1.0 1.97e+09 1.0 6.5e+04 3.0e+03 8.1e+02 95 99 97 97 94  95 99 97 97 94  4143
SNESSetUp              1 1.0 5.1522e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0     0
SNESFunctionEval      10 1.0 6.5235e-02 1.0 1.51e+07 1.0 1.3e+03 3.1e+03 0.0e+00  0  1  2  2  0   0  1  2  2  0  7394
SNESJacobianEval      10 1.0 4.0876e+00 1.0 1.44e+07 1.0 5.1e+03 1.5e+03 1.6e+02 26  1  8  4 18  26  1  8  4 19   113
KSPSetUp              21 1.0 7.5703e-03 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0  1   0  0  0  0  1     0
KSPSolve              10 1.0 1.1072e+01 1.0 1.94e+09 1.0 5.8e+04 3.1e+03 6.2e+02 69 98 87 91 72  69 98 87 91 72  5607
KSPGMRESOrthog       273 1.0 1.4142e+00 1.3 5.59e+08 1.0 0.0e+00 0.0e+00 2.7e+02  8 28  0  0 31   8 28  0  0 32 12658
PCSetUp               10 1.0 2.1504e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 13  0  0  0  1  13  0  0  0  1     0
PCApply              183 1.0 7.1285e+00 1.0 1.25e+09 1.0 3.6e+04 3.1e+03 2.1e+02 44 63 54 57 24  44 63 54 57 24  5609
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

    Distributed Mesh     2              1         5088     0.
              Matrix    70             70    360184304     0.
           Index Set    46             46      1106704     0.
   IS L to G Mapping     1              0            0     0.
   Star Forest Graph    28             25        30280     0.
     Discrete System     2              1          992     0.
         Vec Scatter    24             23        19872     0.
              Vector   102            102     27728576     0.
              Viewer     4              3         2592     0.
             TSAdapt     1              1         1448     0.
                  TS     1              1         2472     0.
                DMTS     1              0            0     0.
                SNES     1              1         1532     0.
              DMSNES     3              2         1440     0.
       Krylov Solver     3              3        51352     0.
     DMKSP interface     1              0            0     0.
      Preconditioner     3              3         3888     0.
========================================================================================================================
Average time to get PetscTime(): 7.15256e-08
Average time for MPI_Barrier(): 1.0252e-05
Average time for zero size MPI_Send(): 2.85357e-06
#PETSc Option Table entries:
-hmg_inner_pc_hypre_boomeramg_agg_nl 2
-hmg_inner_pc_hypre_boomeramg_coarsen_type modifiedRuge-Stueben
-hmg_inner_pc_hypre_boomeramg_eu_level 2
-hmg_inner_pc_hypre_boomeramg_grid_sweeps_all 2
-hmg_inner_pc_hypre_boomeramg_interp_type ext+i
-hmg_inner_pc_hypre_boomeramg_max_iter 2
-hmg_inner_pc_hypre_boomeramg_numfunctions 2
-hmg_inner_pc_hypre_boomeramg_relax_type_all l1scaled-SOR/Jacobi
-hmg_inner_pc_hypre_boomeramg_smooth_type Euclid
-hmg_inner_pc_hypre_boomeramg_strong_threshold 0.5
-hmg_inner_pc_hypre_euclid_reuse
-hmg_inner_pc_type hypre
-ksp_monitor
-ksp_type gmres
-ksp_view
-log_view
-pc_hmg_reuse_interpolation 1
-pc_type hmg
-prop_steps 10
-ts_monitor
-ts_type cn
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with 64 bit PetscInt
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 8
Configure options: --prefix=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ixjmudlfqith3lrxfcttq2f3plvucfrt --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS= FFLAGS= CXXFLAGS= --with-cc=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpicc --with-cxx=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpic++ --with-fc=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpif90 --with-precision=double --with-scalar-type=real --with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=1 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib="/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_intel_lp64.so /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_sequential.so /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_core.so /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so" --with-x=0 --with-clanguage=C --with-scalapack=0 --with-metis=1 --with-metis-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv --with-hdf5=1 --with-hdf5-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw --with-hypre=1 --with-hypre-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hypre-2.18.2-4p4r2ph4zp5hbufbpswitiiij37oovuw --with-parmetis=1 --with-parmetis-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h --with-mumps=0 --with-trilinos=0 --with-fftw=0 --with-valgrind=0 --with-cxx-dialect=C++11 --with-superlu_dist-include=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/include --with-superlu_dist-lib=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/lib/libsuperlu_dist.a --with-superlu_dist=1 --with-suitesparse=0 --with-zlib-include=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/include --with-zlib-lib=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/lib/libz.so --with-zlib=1
-----------------------------------------
Libraries compiled on 2020-04-14 04:45:47 on xrmlite
Machine characteristics: Linux-4.18.0-147.5.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
Using PETSc directory: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ixjmudlfqith3lrxfcttq2f3plvucfrt
Using PETSc arch:
-----------------------------------------

Using C compiler: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpicc -fPIC
Using Fortran compiler: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpif90 -fPIC
-----------------------------------------

Using include paths: -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ixjmudlfqith3lrxfcttq2f3plvucfrt/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hypre-2.18.2-4p4r2ph4zp5hbufbpswitiiij37oovuw/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/include
-----------------------------------------

Using C linker: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpicc
Using Fortran linker: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpif90
Using libraries: -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ixjmudlfqith3lrxfcttq2f3plvucfrt/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ixjmudlfqith3lrxfcttq2f3plvucfrt/lib -lpetsc -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hypre-2.18.2-4p4r2ph4zp5hbufbpswitiiij37oovuw/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hypre-2.18.2-4p4r2ph4zp5hbufbpswitiiij37oovuw/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64 -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64 /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib:/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib64 -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib/gcc/x86_64-pc-linux-gnu/8.3.0 -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib/gcc/x86_64-pc-linux-gnu/8.3.0 -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib64 -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib64 -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib -lHYPRE -lsuperlu_dist -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl
-----------------------------------------