On Wed, Dec 14, 2022 at 1:11 PM Blaise Bourdin <bourdin@mcmaster.ca> wrote:

Hi Mark,
On Dec 14, 2022, at 11:07 AM, Mark Adams <mfadams@lbl.gov> wrote:

On Wed, Dec 14, 2022 at 9:38 AM Blaise Bourdin <bourdin@mcmaster.ca> wrote:
Hi Jed,

Thanks for pointing us in the right direction. We were using MatNullSpaceCreateRigidBody, which does not know anything about the discretization, hence our issues with quadratic elements. DMPlexCreateRigidBody does not work out of the box for us since we do not use PetscFE at the moment, but we can easily build the near null space by hand.
Oh, MatNullSpaceCreateRigidBody should work because it takes the coordinates; you just need to get the coordinates for all the points/vertices. Or you can build it by hand.

I don't know if DMPlexCreateRigidBody does the right thing. This would take a little code and I'm not sure if Matt did this (I kinda doubt it). It should error out if not, but you don't use it anyway.

Did you call MatNullSpaceCreateRigidBody with a vector of coordinates that only has the corner points? (In that case it should have thrown an error.)
Yes. I need to figure out why it did not throw an error.

I see that MatNullSpaceCreateRigidBody is not a Mat method! It is really just a utility that creates the RBMs for each "node"; it is not tied to a matrix, so nothing checks that the two match. You must then call MatSetNearNullSpace(A, matnull).

I see the problem now. MatSetNearNullSpace simply attaches matnull to A, and GAMG then grabs that and copies the data in. GAMG does not check the sizes, so the null space GAMG used was garbage: it read past the end of the provided null space vectors.

I'll add a check.

Thanks,
Mark
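For the record, the correct sequence, with that size check done by hand, looks roughly like the sketch below. This is not mef90's actual code: A and coords stand in for the assembled Jacobian and a coordinate vector with the same layout as the solution vector (so, for P2, coordinates of the mid-edge nodes too).

  Vec          coords;   /* hypothetical: coordinates of every dof-carrying node, block size 3 */
  Mat          A;        /* hypothetical: assembled elasticity Jacobian */
  MatNullSpace matnull;
  PetscInt     nloc, cloc;

  /* the check GAMG currently skips: the coordinate vector must have the
     same local layout as the matrix rows, otherwise the RBMs are garbage */
  PetscCall(MatGetLocalSize(A, &nloc, NULL));
  PetscCall(VecGetLocalSize(coords, &cloc));
  PetscCheck(nloc == cloc, PETSC_COMM_SELF, PETSC_ERR_ARG_SIZ,
             "coordinate vector local size %" PetscInt_FMT " != matrix local rows %" PetscInt_FMT, cloc, nloc);

  PetscCall(MatNullSpaceCreateRigidBody(coords, &matnull)); /* builds the 6 RBMs */
  PetscCall(MatSetNearNullSpace(A, matnull));               /* GAMG copies these in PCSetUp */
  PetscCall(MatNullSpaceDestroy(&matnull));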
FWIW, removing the wrong null space brought the GAMG iteration number down to something more reasonable.
Good. I'm not sure what happened, but MatNullSpaceCreateRigidBody should work unless you have a non-standard element, and you can always test it by calling MatMult on the RBMs and verifying that they are a null space, away from the BCs.
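A sketch of that test (assuming A is the stiffness matrix and matnull holds the candidate RBMs; away from the Dirichlet rows each product should be at machine precision):

  const Vec *rbm;
  Vec        Av;
  PetscInt   i, n;

  PetscCall(MatNullSpaceGetVecs(matnull, NULL, &n, &rbm));
  PetscCall(MatCreateVecs(A, NULL, &Av));
  for (i = 0; i < n; i++) {
    PetscReal nrm;
    PetscCall(MatMult(A, rbm[i], Av));  /* Av should vanish for a true null space */
    PetscCall(VecNorm(Av, NORM_2, &nrm));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "|| A * rbm[%" PetscInt_FMT "] || = %g\n", i, (double)nrm));
  }
  PetscCall(VecDestroy(&Av));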
Will do.

All in all, the easiest for me is to rebuild the null space by hand. This way, I am absolutely certain that it will work, regardless of my FE space.

Blaise
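For reference, a minimal sketch of building the six 3D rigid body modes by hand, independent of the FE space. The layout (3 interleaved dofs per node), nlocal, and the coordinate array xyz are assumptions, not mef90's actual data structures:

  Vec          mode[6];
  MatNullSpace matnull;
  PetscInt     i, p, rstart;
  /* assumed given: Mat A (assembled Jacobian), nlocal owned nodes,
     and xyz[3*nlocal] holding their coordinates */

  for (i = 0; i < 6; i++) {
    PetscCall(MatCreateVecs(A, &mode[i], NULL));
    PetscCall(VecSet(mode[i], 0.0));
  }
  PetscCall(VecGetOwnershipRange(mode[0], &rstart, NULL));
  for (p = 0; p < nlocal; p++) {
    const PetscScalar x = xyz[3 * p], y = xyz[3 * p + 1], z = xyz[3 * p + 2];
    const PetscInt    r = rstart + 3 * p;       /* first global dof of this node */
    PetscCall(VecSetValue(mode[0], r + 0, 1.0, INSERT_VALUES)); /* translations */
    PetscCall(VecSetValue(mode[1], r + 1, 1.0, INSERT_VALUES));
    PetscCall(VecSetValue(mode[2], r + 2, 1.0, INSERT_VALUES));
    PetscCall(VecSetValue(mode[3], r + 1, -z, INSERT_VALUES));  /* rotation about x */
    PetscCall(VecSetValue(mode[3], r + 2,  y, INSERT_VALUES));
    PetscCall(VecSetValue(mode[4], r + 0,  z, INSERT_VALUES));  /* rotation about y */
    PetscCall(VecSetValue(mode[4], r + 2, -x, INSERT_VALUES));
    PetscCall(VecSetValue(mode[5], r + 0, -y, INSERT_VALUES));  /* rotation about z */
    PetscCall(VecSetValue(mode[5], r + 1,  x, INSERT_VALUES));
  }
  for (i = 0; i < 6; i++) {
    PetscCall(VecAssemblyBegin(mode[i]));
    PetscCall(VecAssemblyEnd(mode[i]));
  }
  /* MatNullSpaceCreate() expects an orthonormal set: Gram-Schmidt */
  PetscCall(VecNormalize(mode[0], NULL));
  for (i = 1; i < 6; i++) {
    PetscScalar dots[6];
    PetscCall(VecMDot(mode[i], i, mode, dots));
    for (PetscInt j = 0; j < i; j++) dots[j] = -dots[j];
    PetscCall(VecMAXPY(mode[i], i, dots, mode));
    PetscCall(VecNormalize(mode[i], NULL));
  }
  PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 6, mode, &matnull));
  PetscCall(MatSetNearNullSpace(A, matnull));
  for (i = 0; i < 6; i++) PetscCall(VecDestroy(&mode[i]));
  PetscCall(MatNullSpaceDestroy(&matnull));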
Thanks a million,
Blaise

On Dec 13, 2022, at 10:37 PM, Jed Brown <jed@jedbrown.org> wrote:
Do you have slip/symmetry boundary conditions, where some components are constrained? In that case, there is no uniform block size and I think you'll need DMPlexCreateRigidBody() and MatSetNearNullSpace().

The PCSetCoordinates() code won't work for non-constant block size.

-pc_type gamg should work okay out of the box for elasticity. For hypre, I've had good luck with this options suite, which also runs on GPU.

-pc_type hypre -pc_hypre_boomeramg_coarsen_type pmis -pc_hypre_boomeramg_interp_type ext+i -pc_hypre_boomeramg_no_CF -pc_hypre_boomeramg_P_max 6 -pc_hypre_boomeramg_relax_type_down Chebyshev -pc_hypre_boomeramg_relax_type_up Chebyshev -pc_hypre_boomeramg_strong_threshold 0.5
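For the DMPlex route, the usage is just a few lines. A sketch, assuming a DMPlex dm whose field 0 is the displacement and an assembled Jacobian A:

  MatNullSpace rbm;

  PetscCall(DMPlexCreateRigidBody(dm, 0, &rbm)); /* uses the discretization, so P2 is handled */
  PetscCall(MatSetNearNullSpace(A, rbm));
  PetscCall(MatNullSpaceDestroy(&rbm));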
Blaise Bourdin <bourdin@mcmaster.ca> writes:
Hi,

I am getting close to finishing porting a code from petsc 3.3 / sieve to main / dmplex, but am now encountering difficulties.

I am reasonably sure that the Jacobian and residual are correct. The codes handle boundary conditions differently (MatZeroRowsColumns vs dmplex constraints) so it is not trivial to compare them. Running with -snes_type ksponly and -pc_type jacobi or hypre gives me the same results in roughly the same number of iterations.

In my old code, gamg would work out of the box. When using petsc-main, -pc_type gamg -pc_gamg_type agg works for _some_ problems using P1-Lagrange elements, but never for P2-Lagrange. The typical error message is in gamg_agg.txt.

When using -pc_gamg_type classical, a problem where the KSP would converge in 47 iterations in 3.3 now takes 1400. ksp_view_3.3.txt and ksp_view_main.txt show the output of -ksp_view for both versions. I don't notice anything obvious.

Strangely, removing the call to PCSetCoordinates does not have any impact on the convergence.

I am sure that I am missing something, or not passing the right options. What's a good starting point for 3D elasticity?

Regards,
Blaise

—
Canada Research Chair in Mathematical and Computational Aspects of Solid Mechanics (Tier 1)
Professor, Department of Mathematics & Statistics
Hamilton Hall room 409A, McMaster University
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
[gamg_agg.txt]
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: Computed maximum singular value as zero
[0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc!
[0]PETSC ERROR: Option left: name:-displacement_ksp_converged_reason value: ascii source: file
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.18.2-341-g16200351da0 GIT Date: 2022-12-12 23:42:20 +0000
[0]PETSC ERROR: /home/bourdinb/Development/mef90/mef90-dmplex/bbserv-gcc11.2.1-mvapich2-2.3.7-O/bin/ThermoElasticity on a bbserv-gcc11.2.1-mvapich2-2.3.7-O named bb01 by bourdinb Tue Dec 13 17:02:19 2022
[0]PETSC ERROR: Configure options --CFLAGS=-Wunused --FFLAGS="-ffree-line-length-none -fallow-argument-mismatch -Wunused" --COPTFLAGS="-O2 -march=znver2" --CXXOPTFLAGS="-O2 -march=znver2" --FOPTFLAGS="-O2 -march=znver2" --download-chaco=1 --download-exodusii=1 --download-fblaslapack=1 --download-hdf5=1 --download-hypre=1 --download-metis=1 --download-ml=1 --download-mumps=1 --download-netcdf=1 --download-p4est=1 --download-parmetis=1 --download-pnetcdf=1 --download-scalapack=1 --download-sowing=1 --download-sowing-cc=/opt/rh/devtoolset-9/root/usr/bin/gcc --download-sowing-cxx=/opt/rh/devtoolset-9/root/usr/bin/g++ --download-sowing-cpp=/opt/rh/devtoolset-9/root/usr/bin/cpp --download-sowing-cxxcpp=/opt/rh/devtoolset-9/root/usr/bin/cpp --download-superlu=1 --download-triangle=1 --download-yaml=1 --download-zlib=1 --with-debugging=0 --with-mpi-dir=/opt/HPC/mvapich2/2.3.7-gcc11.2.1 --with-pic --with-shared-libraries=1 --with-mpiexec=srun --with-x11=0
[0]PETSC ERROR: #1 PCGAMGOptProlongator_AGG() at /1/HPC/petsc/main/src/ksp/pc/impls/gamg/agg.c:779
[0]PETSC ERROR: #2 PCSetUp_GAMG() at /1/HPC/petsc/main/src/ksp/pc/impls/gamg/gamg.c:639
[0]PETSC ERROR: #3 PCSetUp() at /1/HPC/petsc/main/src/ksp/pc/interface/precon.c:994
[0]PETSC ERROR: #4 KSPSetUp() at /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:405
[0]PETSC ERROR: #5 KSPSolve_Private() at /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:824
[0]PETSC ERROR: #6 KSPSolve() at /1/HPC/petsc/main/src/ksp/ksp/interface/itfunc.c:1070
[0]PETSC ERROR: #7 SNESSolve_KSPONLY() at /1/HPC/petsc/main/src/snes/impls/ksponly/ksponly.c:48
[0]PETSC ERROR: #8 SNESSolve() at /1/HPC/petsc/main/src/snes/interface/snes.c:4693
[0]PETSC ERROR: #9 /home/bourdinb/Development/mef90/mef90-dmplex/ThermoElasticity/ThermoElasticity.F90:228

[ksp_view_3.3.txt]
Linear solve converged due to CONVERGED_RTOL iterations 46
KSP Object:(Disp_) 32 MPI processes
  type: cg
  maximum iterations=10000
  tolerances: relative=1e-05, absolute=1e-08, divergence=1e+10
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(Disp_) 32 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (Disp_mg_coarse_) 32 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (Disp_mg_coarse_) 32 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 32
        Local solve info for each block is in the following KSP and PC objects:
      [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object: (Disp_mg_coarse_sub_) 1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (Disp_mg_coarse_sub_) 1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            matrix ordering: nd
            factor fill ratio given 5, needed 1.06061
              Factored matrix follows:
                Matrix Object: 1 MPI processes
                  type: seqaij
                  rows=54, cols=54, bs=6
                  package used to perform factorization: petsc
                  total: nonzeros=1260, allocated nonzeros=1260
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 16 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=54, cols=54, bs=6
            total: nonzeros=1188, allocated nonzeros=1188
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 17 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
        [sub-solver output for blocks [1] through [31] omitted: the raw log interleaves the 32 ranks' output and repeats the same preonly/LU block many times; every remaining local block is an empty solve on a rows=0, cols=0 seqaij matrix]
      linear system matrix = precond matrix:
      Matrix Object: 32 MPI processes
        type: mpiaij
        rows=54, cols=54, bs=6
        total: nonzeros=1188, allocated nonzeros=1188
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 17 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (Disp_mg_levels_1_) 32 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.101023, max = 2.13327
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (Disp_mg_levels_1_) 32 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object: 32 MPI processes
        type: mpiaij
        rows=1086, cols=1086, bs=6
        total: nonzeros=67356, allocated nonzeros=67356
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 362 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (Disp_mg_levels_2_) 32 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.0996526, max = 2.29388
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (Disp_mg_levels_2_) 32 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object: 32 MPI processes
        type: mpiaij
        rows=23808, cols=23808, bs=6
        total: nonzeros=1976256, allocated nonzeros=1976256
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 7936 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (Disp_mg_levels_3_) 32 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates: min = 0.165968, max = 2.13065
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (Disp_mg_levels_3_) 32 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object: (Disp_) 32 MPI processes
        type: mpiaij
        rows=291087, cols=291087
        total: nonzeros=12323691, allocated nonzeros=12336696
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 3419 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object: (Disp_) 32 MPI processes
    type: mpiaij
    rows=291087, cols=291087
    total: nonzeros=12323691, allocated nonzeros=12336696
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 3419 nodes, limit used is 5
SNESConvergedReason returned 5

[ksp_view_main.txt]
KSP Object: (Displacement_) 32 MPI processes
  type: cg
  maximum iterations=10000, nonzero initial guess
  tolerances: relative=1e-05, absolute=1e-08, divergence=1e+10
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: (Displacement_) 32 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1. -1. -1.
        Threshold scaling factor for each level not specified = 1.
        Complexity: grid = 1.02128 operator = 1.05534
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (Displacement_mg_coarse_) 32 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (Displacement_mg_coarse_) 32 MPI processes
      type: bjacobi
        number of blocks = 32
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -Displacement_mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (Displacement_mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (Displacement_mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.08081
            Factored matrix follows:
              Mat Object: (Displacement_mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=20, cols=20
                package used to perform factorization: petsc
                total: nonzeros=214, allocated nonzeros=214
                  using I-node routines: found 8 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: (Displacement_mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=20, cols=20
          total: nonzeros=198, allocated nonzeros=198
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 13 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=20, cols=20
        total: nonzeros=198, allocated nonzeros=198
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 13 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (Displacement_mg_levels_1_) 32 MPI processes
      type: chebyshev
        eigenvalue targets used: min 0.81922, max 9.01143
        eigenvalues estimated via gmres: min 0.186278, max 8.1922
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (Displacement_mg_levels_1_esteig_) 32 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (Displacement_mg_levels_1_) 32 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=799, cols=799
        total: nonzeros=83159, allocated nonzeros=83159
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 23 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (Displacement_mg_levels_2_) 32 MPI processes
      type: chebyshev
        eigenvalue targets used: min 1.16291, max 12.792
        eigenvalues estimated via gmres: min 0.27961, max 11.6291
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (Displacement_mg_levels_2_esteig_) 32 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (Displacement_mg_levels_2_) 32 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 32 MPI processes
        type: mpiaij
        rows=45721, cols=45721
        total: nonzeros=9969661, allocated nonzeros=9969661
        total number of mallocs used during MatSetValues calls=0
        using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (Displacement_mg_levels_3_) 32 MPI processes
      type: chebyshev
        eigenvalue targets used: min 0.281318, max 3.0945
        eigenvalues estimated via gmres: min 0.0522027, max 2.81318
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (Displacement_mg_levels_3_esteig_) 32 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (Displacement_mg_levels_3_) 32 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: (Displacement_) 32 MPI processes
        type: mpiaij
        rows=2186610, cols=2186610, bs=3
        total: nonzeros=181659996, allocated nonzeros=181659996
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 21368 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: (Displacement_) 32 MPI processes
    type: mpiaij
    rows=2186610, cols=2186610, bs=3
    total: nonzeros=181659996, allocated nonzeros=181659996
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 21368 nodes, limit used is 5
cell set 1 elastic energy: 9.32425E-02 work: 1.86485E-01 total: -9.32425E-02
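The fine-grid operator in this run now reports "has attached near null space", which is what GAMG needs for elasticity. For anyone reproducing this setup, here is a minimal sketch of attaching a rigid-body near null space by hand; A and coords are placeholder names (not taken from this thread), with coords assumed to hold the vertex coordinates of the displacement space in the same parallel layout and block size (bs=3) as A:

#include <petscmat.h>

/* Sketch only: build a rigid-body near null space from nodal coordinates
   and attach it to the elasticity operator A. coords must contain the
   (x,y,z) coordinates of every owned vertex, blocked like A's row space. */
MatNullSpace rbm;
PetscCall(MatNullSpaceCreateRigidBody(coords, &rbm)); /* 6 modes in 3D */
PetscCall(MatSetNearNullSpace(A, rbm));
PetscCall(MatNullSpaceDestroy(&rbm)); /* A keeps its own reference */

/* Optional sanity check: each rigid-body mode v should satisfy A v ~ 0,
   except at rows modified by Dirichlet boundary conditions. */
{
  MatNullSpace sp;
  PetscInt     nv;
  const Vec   *v;
  PetscCall(MatGetNearNullSpace(A, &sp));
  PetscCall(MatNullSpaceGetVecs(sp, NULL, &nv, &v));
  for (PetscInt i = 0; i < nv; i++) {
    Vec       Av;
    PetscReal nrm;
    PetscCall(VecDuplicate(v[i], &Av));
    PetscCall(MatMult(A, v[i], Av));
    PetscCall(VecNorm(Av, NORM_2, &nrm));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "|A v_%" PetscInt_FMT "| = %g\n", i, (double)nrm));
    PetscCall(VecDestroy(&Av));
  }
}

MatSetNearNullSpace takes its own reference to the null space, so the local reference can be destroyed immediately after attaching it; the norms printed by the check loop are only meaningful away from constrained degrees of freedom.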
—
Canada Research Chair in Mathematical and Computational Aspects of Solid Mechanics (Tier 1)
Professor, Department of Mathematics & Statistics
Hamilton Hall room 409A, McMaster University
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243