Hi all,

I'm simulating a problem involving small fluxes, using the asm preconditioner with lu as the sub-preconditioner. The simulation runs fine on 2 cores, but as soon as I use more cores the fluxes disappear, and the desired effect goes with them.
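
In PETSc options terms, that preconditioner setup corresponds roughly to the following (a minimal sketch; shown here through dolfin's PETScOptions, which is only one of several ways to pass options to the underlying solver):

    from dolfin import PETScOptions

    # GMRES with an additive Schwarz preconditioner, overlap of 5
    PETScOptions.set("ksp_type", "gmres")
    PETScOptions.set("pc_type", "asm")
    PETScOptions.set("pc_asm_overlap", 5)

    # exact LU factorisation on each subdomain block
    PETScOptions.set("sub_ksp_type", "preonly")
    PETScOptions.set("sub_pc_type", "lu")

The overlap of 5 and the per-block LU match what the solver output below reports.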

Does anyone have an idea of a suitable tolerance or parameter I should adjust? I am using the SNES solver via the FEniCS package.
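
To be concrete about which knobs I can reach from the FEniCS side, these are the kinds of options I mean (again just a sketch; the option names are standard PETSc ones, and the values are placeholders rather than what I currently use):

    from dolfin import PETScOptions

    # nonlinear (SNES) and linear (KSP) tolerances
    PETScOptions.set("snes_rtol", 1e-8)
    PETScOptions.set("snes_atol", 1e-10)
    PETScOptions.set("ksp_rtol", 1e-8)

    # monitoring, to check whether the inner solves still converge
    # when the number of subdomains goes up
    PETScOptions.set("snes_monitor")
    PETScOptions.set("ksp_monitor_true_residual")
    PETScOptions.set("ksp_converged_reason")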

Thanks,
Mike

I attach the SNES terminal output for reference:

SNES Object: 16 MPI processes
  type: newtonls
  maximum iterations=30, maximum function evaluations=2000
  tolerances: relative=0.99, absolute=1e-05, solution=1e-10
  total number of linear solver iterations=59
  total number of function evaluations=2
  SNESLineSearch Object: 16 MPI processes
    type: basic
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=1
  KSP Object: 16 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 16 MPI processes
    type: asm
      Additive Schwarz: total subdomain blocks = 16, amount of overlap = 5
      Additive Schwarz: restriction/interpolation type - NONE
      Local solve is same for all blocks, in the following KSP and PC objects:
    KSP Object: (sub_) 1 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (sub_) 1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: nd
        factor fill ratio given 5, needed 5.25151
        Factored matrix follows:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=4412, cols=4412
            package used to perform factorization: petsc
            total: nonzeros=626736, allocated nonzeros=626736
            total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 1103 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object: 1 MPI processes
        type: seqaij
        rows=4412, cols=4412
        total: nonzeros=119344, allocated nonzeros=119344
        total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 1103 nodes, limit used is 5
  linear system matrix = precond matrix:
  Matrix Object: 16 MPI processes
    type: mpiaij
    rows=41820, cols=41820, bs=4
    total: nonzeros=1161136, allocated nonzeros=1161136
    total number of mallocs used during MatSetValues calls =0
    using I-node (on process 0) routines: found 638 nodes, limit used is 5