<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Mar 2, 2016 at 7:15 PM, Justin Chang <span dir="ltr"><<a href="mailto:jychang48@gmail.com" target="_blank">jychang48@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Barry,<br><br></div>Attached are the log_summary output for each preconditioner.<br></div></div></div></blockquote><div><br></div><div>MatPtAP takes all the time. It looks like there is no coarsening at all at the first level. Mark, can you see what is going on here?</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div></div>Thanks,<br></div>Justin<div><div class="h5"><br><div><div><div><br>On Wednesday, March 2, 2016, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Justin,<br>
<br>
Do you have the -log_summary output for these runs?<br>
<br>
Barry<br>
<br>
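Aside: GAMG's coarsening, which Matt points at above, is steered by a handful of options. The names below are the spellings for a PETSc of roughly this vintage, written under the same fieldsplit_1_ prefix used in the option list further down; the values are placeholders meant only to show which knobs exist, not recommended settings:<br>
<br>
-fieldsplit_1_pc_gamg_threshold 0.02        (drop tolerance for the aggregation graph; the run below uses 0., i.e. nothing is dropped)<br>
-fieldsplit_1_pc_gamg_square_graph 1        (square the graph before aggregating, which coarsens more aggressively)<br>
-fieldsplit_1_pc_gamg_agg_nsmooths 1        (smoothed (1) vs. unsmoothed (0) aggregation)<br>
-fieldsplit_1_pc_gamg_coarse_eq_limit 1000  (stop coarsening once the coarse problem is this small)<br>
<br>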
> On Mar 2, 2016, at 4:28 PM, Justin Chang <jychang48@gmail.com> wrote:<br>
><br>
> Dear all,<br>
><br>
> Using the Firedrake project, I am solving this simple mixed Poisson problem:<br>
><br>
> from firedrake import *<br>
><br>
> mesh = UnitCubeMesh(40,40,40)<br>
> V = FunctionSpace(mesh,"RT",1)<br>
> Q = FunctionSpace(mesh,"DG",0)<br>
> W = V*Q<br>
><br>
> v, p = TrialFunctions(W)<br>
> w, q = TestFunctions(W)<br>
><br>
> f = Function(Q)<br>
> f.interpolate(Expression("12*pi*pi*sin(pi*x[0]*2)*sin(pi*x[1]*2)*sin(2*pi*x[2])"))<br>
><br>
> a = dot(v,w)*dx - p*div(w)*dx + div(v)*q*dx<br>
> L = f*q*dx<br>
><br>
> u = Function(W)<br>
> solve(a==L,u,solver_parameters={...})<br>
><br>
> This problem has 1161600 degrees of freedom. The solver_parameters are:<br>
><br>
> -ksp_type gmres<br>
> -pc_type fieldsplit<br>
> -pc_fieldsplit_type schur<br>
> -pc_fieldsplit_schur_fact_type upper<br>
> -pc_fieldsplit_schur_precondition selfp<br>
> -fieldsplit_0_ksp_type preonly<br>
> -fieldsplit_0_pc_type bjacobi<br>
> -fieldsplit_1_ksp_type preonly<br>
> -fieldsplit_1_pc_type hypre/ml/gamg<br>
><br>
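> Written out as the solver_parameters dict that goes into solve() above -- assuming Firedrake's usual convention that each key is the option name without the leading dash -- this is roughly:<br>
><br>
> parameters = {<br>
>     "ksp_type": "gmres",<br>
>     "pc_type": "fieldsplit",<br>
>     "pc_fieldsplit_type": "schur",<br>
>     "pc_fieldsplit_schur_fact_type": "upper",<br>
>     "pc_fieldsplit_schur_precondition": "selfp",<br>
>     "fieldsplit_0_ksp_type": "preonly",<br>
>     "fieldsplit_0_pc_type": "bjacobi",<br>
>     "fieldsplit_1_ksp_type": "preonly",<br>
>     "fieldsplit_1_pc_type": "gamg",  # or "hypre" / "ml"<br>
> }<br>
> solve(a==L, u, solver_parameters=parameters)<br>
><br>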
> For the last option, I compared the wall-clock timings of hypre, ml, and gamg. Here are the strong-scaling results (up to 64 cores, 8 cores per Intel Xeon E5-2670 node):<br>
><br>
> hypre:<br>
> 1 core: 47.5 s, 12 solver iters<br>
> 2 cores: 34.1 s, 15 solver iters<br>
> 4 cores: 21.5 s, 15 solver iters<br>
> 8 cores: 16.6 s, 15 solver iters<br>
> 16 cores: 10.2 s, 15 solver iters<br>
> 24 cores: 7.66 s, 15 solver iters<br>
> 32 cores: 6.31 s, 15 solver iters<br>
> 40 cores: 5.68 s, 15 solver iters<br>
> 48 cores: 5.36 s, 16 solver iters<br>
> 56 cores: 5.12 s, 16 solver iters<br>
> 64 cores: 4.99 s, 16 solver iters<br>
><br>
> ml:<br>
> 1 core: 4.44 s, 14 solver iters<br>
> 2 cores: 2.85 s, 16 solver iters<br>
> 4 cores: 1.6 s, 17 solver iters<br>
> 8 cores: 0.966 s, 17 solver iters<br>
> 16 cores: 0.585 s, 18 solver iters<br>
> 24 cores: 0.440 s, 18 solver iters<br>
> 32 cores: 0.375 s, 18 solver iters<br>
> 40 cores: 0.332 s, 18 solver iters<br>
> 48 cores: 0.307 s, 17 solver iters<br>
> 56 cores: 0.290 s, 18 solver iters<br>
> 64 cores: 0.281 s, 18 solver iters<br>
><br>
> gamg:<br>
> 1 core: 613 s, 12 solver iters<br>
> 2 cores: 204 s, 15 solver iters<br>
> 4 cores: 77.1 s, 15 solver iters<br>
> 8 cores: 38.1 s, 15 solver iters<br>
> 16 cores: 15.9 s, 16 solver iters<br>
> 24 cores: 9.24 s, 16 solver iters<br>
> 32 cores: 5.92 s, 16 solver iters<br>
> 40 cores: 4.72 s, 16 solver iters<br>
> 48 cores: 3.89 s, 16 solver iters<br>
> 56 cores: 3.65 s, 16 solver iters<br>
> 64 cores: 3.46 s, 16 solver iters<br>
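><br>
> (That works out to a 1-to-64-core speedup of about 9.5x for hypre (47.5/4.99), 15.8x for ml (4.44/0.281), and 177x for gamg (613/3.46), against an ideal of 64x.)<br>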
><br>
> The performance difference between ML and HYPRE makes sense to me, but what really confuses me is GAMG. It is very slow on a single core, yet something internal causes it to speed up super-linearly as I increase the number of MPI processes. Shouldn't ML and GAMG have roughly the same performance? I am not sure which log outputs to give you, but for starters, below is the -ksp_view output for the single-core case with GAMG:<br>
><br>
> KSP Object:(solver_) 1 MPI processes<br>
> type: gmres<br>
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> GMRES: happy breakdown tolerance 1e-30<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-07, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using PRECONDITIONED norm type for convergence test<br>
> PC Object:(solver_) 1 MPI processes<br>
> type: fieldsplit<br>
> FieldSplit with Schur preconditioner, factorization UPPER<br>
> Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (lumped, if requested) A00's diagonal's inverse<br>
> Split info:<br>
> Split number 0 Defined by IS<br>
> Split number 1 Defined by IS<br>
> KSP solver for A00 block<br>
> KSP Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: bjacobi<br>
> block Jacobi: number of blocks = 1<br>
> Local solve is same for all blocks, in the following KSP and PC objects:<br>
> KSP Object: (solver_fieldsplit_0_sub_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_0_sub_) 1 MPI processes<br>
> type: ilu<br>
> ILU: out-of-place factorization<br>
> 0 levels of fill<br>
> tolerance for zero pivot 2.22045e-14<br>
> matrix ordering: natural<br>
> factor fill ratio given 1., needed 1.<br>
> Factored matrix follows:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> package used to perform factorization: petsc<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> KSP solver for S = A11 - A10 inv(A00) A01<br>
> KSP Object: (solver_fieldsplit_1_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_) 1 MPI processes<br>
> type: gamg<br>
> MG: type is MULTIPLICATIVE, levels=5 cycles=v<br>
> Cycles per PCApply=1<br>
> Using Galerkin computed coarse grid matrices<br>
> GAMG specific options<br>
> Threshold for dropping small values from graph 0.<br>
> AGG specific options<br>
> Symmetric graph false<br>
> Coarse grid solver -- level -------------------------------<br>
> KSP Object: (solver_fieldsplit_1_mg_coarse_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=1, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_mg_coarse_) 1 MPI processes<br>
> type: bjacobi<br>
> block Jacobi: number of blocks = 1<br>
> Local solve is same for all blocks, in the following KSP and PC objects:<br>
> KSP Object: (solver_fieldsplit_1_mg_coarse_sub_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=1, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_mg_coarse_sub_) 1 MPI processes<br>
> type: lu<br>
> LU: out-of-place factorization<br>
> tolerance for zero pivot 2.22045e-14<br>
> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]<br>
> matrix ordering: nd<br>
> factor fill ratio given 5., needed 1.<br>
> Factored matrix follows:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=9, cols=9<br>
> package used to perform factorization: petsc<br>
> total: nonzeros=81, allocated nonzeros=81<br>
> total number of mallocs used during MatSetValues calls =0<br>
> using I-node routines: found 2 nodes, limit used is 5<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=9, cols=9<br>
> total: nonzeros=81, allocated nonzeros=81<br>
> total number of mallocs used during MatSetValues calls =0<br>
> using I-node routines: found 2 nodes, limit used is 5<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=9, cols=9<br>
> total: nonzeros=81, allocated nonzeros=81<br>
> total number of mallocs used during MatSetValues calls =0<br>
> using I-node routines: found 2 nodes, limit used is 5<br>
> Down solver (pre-smoother) on level 1 -------------------------------<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_1_) 1 MPI processes<br>
> type: chebyshev<br>
> Chebyshev: eigenvalue estimates: min = 0.0999525, max = 1.09948<br>
> Chebyshev: eigenvalues estimated using gmres with translations [0. 0.1; 0. 1.1]<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_1_esteig_) 1 MPI processes<br>
> type: gmres<br>
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> GMRES: happy breakdown tolerance 1e-30<br>
> maximum iterations=10, initial guess is zero<br>
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using PRECONDITIONED norm type for convergence test<br>
> maximum iterations=2<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using nonzero initial guess<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_mg_levels_1_) 1 MPI processes<br>
> type: sor<br>
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=207, cols=207<br>
> total: nonzeros=42849, allocated nonzeros=42849<br>
> total number of mallocs used during MatSetValues calls =0<br>
> using I-node routines: found 42 nodes, limit used is 5<br>
> Up solver (post-smoother) same as down solver (pre-smoother)<br>
> Down solver (pre-smoother) on level 2 -------------------------------<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_2_) 1 MPI processes<br>
> type: chebyshev<br>
> Chebyshev: eigenvalue estimates: min = 0.0996628, max = 1.09629<br>
> Chebyshev: eigenvalues estimated using gmres with translations [0. 0.1; 0. 1.1]<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_2_esteig_) 1 MPI processes<br>
> type: gmres<br>
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> GMRES: happy breakdown tolerance 1e-30<br>
> maximum iterations=10, initial guess is zero<br>
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using PRECONDITIONED norm type for convergence test<br>
> maximum iterations=2<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using nonzero initial guess<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_mg_levels_2_) 1 MPI processes<br>
> type: sor<br>
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=5373, cols=5373<br>
> total: nonzeros=28852043, allocated nonzeros=28852043<br>
> total number of mallocs used during MatSetValues calls =0<br>
> using I-node routines: found 1481 nodes, limit used is 5<br>
> Up solver (post-smoother) same as down solver (pre-smoother)<br>
> Down solver (pre-smoother) on level 3 -------------------------------<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_3_) 1 MPI processes<br>
> type: chebyshev<br>
> Chebyshev: eigenvalue estimates: min = 0.0994294, max = 1.09372<br>
> Chebyshev: eigenvalues estimated using gmres with translations [0. 0.1; 0. 1.1]<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_3_esteig_) 1 MPI processes<br>
> type: gmres<br>
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> GMRES: happy breakdown tolerance 1e-30<br>
> maximum iterations=10, initial guess is zero<br>
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using PRECONDITIONED norm type for convergence test<br>
> maximum iterations=2<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using nonzero initial guess<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_mg_levels_3_) 1 MPI processes<br>
> type: sor<br>
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=52147, cols=52147<br>
> total: nonzeros=38604909, allocated nonzeros=38604909<br>
> total number of mallocs used during MatSetValues calls =2<br>
> not using I-node routines<br>
> Up solver (post-smoother) same as down solver (pre-smoother)<br>
> Down solver (pre-smoother) on level 4 -------------------------------<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_4_) 1 MPI processes<br>
> type: chebyshev<br>
> Chebyshev: eigenvalue estimates: min = 0.158979, max = 1.74876<br>
> Chebyshev: eigenvalues estimated using gmres with translations [0. 0.1; 0. 1.1]<br>
> KSP Object: (solver_fieldsplit_1_mg_levels_4_esteig_) 1 MPI processes<br>
> type: gmres<br>
> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> GMRES: happy breakdown tolerance 1e-30<br>
> maximum iterations=10, initial guess is zero<br>
> tolerances: relative=1e-12, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using PRECONDITIONED norm type for convergence test<br>
> maximum iterations=2<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using nonzero initial guess<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_1_mg_levels_4_) 1 MPI processes<br>
> type: sor<br>
> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.<br>
> linear system matrix followed by preconditioner matrix:<br>
> Mat Object: (solver_fieldsplit_1_) 1 MPI processes<br>
> type: schurcomplement<br>
> rows=384000, cols=384000<br>
> Schur complement A11 - A10 inv(A00) A01<br>
> A11<br>
> Mat Object: (solver_fieldsplit_1_) 1 MPI processes<br>
> type: seqaij<br>
> rows=384000, cols=384000<br>
> total: nonzeros=384000, allocated nonzeros=384000<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> A10<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=384000, cols=777600<br>
> total: nonzeros=1919999, allocated nonzeros=1919999<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> KSP of A00<br>
> KSP Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: bjacobi<br>
> block Jacobi: number of blocks = 1<br>
> Local solve is same for all blocks, in the following KSP and PC objects:<br>
> KSP Object: (solver_fieldsplit_0_sub_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_0_sub_) 1 MPI processes<br>
> type: ilu<br>
> ILU: out-of-place factorization<br>
> 0 levels of fill<br>
> tolerance for zero pivot 2.22045e-14<br>
> matrix ordering: natural<br>
> factor fill ratio given 1., needed 1.<br>
> Factored matrix follows:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> package used to perform factorization: petsc<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> A01<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=384000<br>
> total: nonzeros=1919999, allocated nonzeros=1919999<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=384000, cols=384000<br>
> total: nonzeros=3416452, allocated nonzeros=3416452<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> Up solver (post-smoother) same as down solver (pre-smoother)<br>
> linear system matrix followed by preconditioner matrix:<br>
> Mat Object: (solver_fieldsplit_1_) 1 MPI processes<br>
> type: schurcomplement<br>
> rows=384000, cols=384000<br>
> Schur complement A11 - A10 inv(A00) A01<br>
> A11<br>
> Mat Object: (solver_fieldsplit_1_) 1 MPI processes<br>
> type: seqaij<br>
> rows=384000, cols=384000<br>
> total: nonzeros=384000, allocated nonzeros=384000<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> A10<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=384000, cols=777600<br>
> total: nonzeros=1919999, allocated nonzeros=1919999<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> KSP of A00<br>
> KSP Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: bjacobi<br>
> block Jacobi: number of blocks = 1<br>
> Local solve is same for all blocks, in the following KSP and PC objects:<br>
> KSP Object: (solver_fieldsplit_0_sub_) 1 MPI processes<br>
> type: preonly<br>
> maximum iterations=10000, initial guess is zero<br>
> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> left preconditioning<br>
> using NONE norm type for convergence test<br>
> PC Object: (solver_fieldsplit_0_sub_) 1 MPI processes<br>
> type: ilu<br>
> ILU: out-of-place factorization<br>
> 0 levels of fill<br>
> tolerance for zero pivot 2.22045e-14<br>
> matrix ordering: natural<br>
> factor fill ratio given 1., needed 1.<br>
> Factored matrix follows:<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> package used to perform factorization: petsc<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: (solver_fieldsplit_0_) 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=777600<br>
> total: nonzeros=5385600, allocated nonzeros=5385600<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> A01<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=777600, cols=384000<br>
> total: nonzeros=1919999, allocated nonzeros=1919999<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> Mat Object: 1 MPI processes<br>
> type: seqaij<br>
> rows=384000, cols=384000<br>
> total: nonzeros=3416452, allocated nonzeros=3416452<br>
> total number of mallocs used during MatSetValues calls =0<br>
> not using I-node routines<br>
> linear system matrix = precond matrix:<br>
> Mat Object: 1 MPI processes<br>
> type: nest<br>
> rows=1161600, cols=1161600<br>
> Matrix object:<br>
> type=nest, rows=2, cols=2<br>
> MatNest structure:<br>
> (0,0) : prefix="solver_fieldsplit_0_", type=seqaij, rows=777600, cols=777600<br>
> (0,1) : type=seqaij, rows=777600, cols=384000<br>
> (1,0) : type=seqaij, rows=384000, cols=777600<br>
> (1,1) : prefix="solver_fieldsplit_1_", type=seqaij, rows=384000, cols=384000<br>
><br>
> Any insight into what's happening? By the way, this firedrake/petsc-mapdes build is from way back in October 2015 (yes, much has changed since, but reinstalling/updating Firedrake and PETSc on LANL's firewalled HPC machines is a big pain in the ass).<br>
><br>
> Thanks,<br>
> Justin<br>
<br>
<br>
-- <br>
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>