[petsc-users] Losing a flux when using the asm preconditioner for >2 cores
Welland, Michael J.
mwelland at anl.gov
Wed Sep 3 13:32:44 CDT 2014
Hi all,
I'm simulating a problem with small fluxes, using the asm preconditioner with lu as the sub-preconditioner. The simulation runs fine on 2 cores, but when I use more, the fluxes disappear and the desired effect goes with them.
Does anyone have an idea of a suitable tolerance or parameter I should adjust? I am using the SNES solver via the FEniCS package.
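In case it helps, this is roughly how I imagine tightening the tolerances from the FEniCS side (a minimal sketch, assuming the legacy dolfin PETScOptions wrapper and that the unprefixed options are picked up by the solver, as the view output below suggests; the values are only placeholders, not recommendations):

# Minimal sketch, assuming legacy dolfin/FEniCS and its PETScOptions wrapper;
# the option values are placeholders to experiment with.
from dolfin import PETScOptions

PETScOptions.set("snes_atol", 1e-10)   # absolute tolerance on the nonlinear residual
PETScOptions.set("snes_rtol", 1e-8)    # relative tolerance on the nonlinear residual
PETScOptions.set("ksp_rtol", 1e-10)    # tighter relative tolerance for the inner GMRES solve
PETScOptions.set("snes_monitor")       # print the nonlinear residual at each Newton step
PETScOptions.set("ksp_monitor_true_residual")  # print the true (unpreconditioned) linear residual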
Thanks,
Mike
I attach the SNES terminal output for reference:
SNES Object: 16 MPI processes
  type: newtonls
  maximum iterations=30, maximum function evaluations=2000
  tolerances: relative=0.99, absolute=1e-05, solution=1e-10
  total number of linear solver iterations=59
  total number of function evaluations=2
  SNESLineSearch Object: 16 MPI processes
    type: basic
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=1
  KSP Object: 16 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 16 MPI processes
    type: asm
      Additive Schwarz: total subdomain blocks = 16, amount of overlap = 5
      Additive Schwarz: restriction/interpolation type - NONE
    Local solve is same for all blocks, in the following KSP and PC objects:
    KSP Object: (sub_) 1 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (sub_) 1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        matrix ordering: nd
        factor fill ratio given 5, needed 5.25151
        Factored matrix follows:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=4412, cols=4412
            package used to perform factorization: petsc
            total: nonzeros=626736, allocated nonzeros=626736
            total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 1103 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object: 1 MPI processes
        type: seqaij
        rows=4412, cols=4412
        total: nonzeros=119344, allocated nonzeros=119344
        total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 1103 nodes, limit used is 5
    linear system matrix = precond matrix:
    Matrix Object: 16 MPI processes
      type: mpiaij
      rows=41820, cols=41820, bs=4
      total: nonzeros=1161136, allocated nonzeros=1161136
      total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 638 nodes, limit used is 5
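For reference, the configuration shown above corresponds roughly to the following PETSc options (a sketch inferred from the view output, again assuming the options are forwarded through dolfin's PETScOptions):

# Rough equivalent of the solver configuration in the view output above,
# inferred from that output; assumes options are set via dolfin's PETScOptions.
from dolfin import PETScOptions

PETScOptions.set("snes_type", "newtonls")
PETScOptions.set("ksp_type", "gmres")
PETScOptions.set("pc_type", "asm")
PETScOptions.set("pc_asm_overlap", 5)    # "amount of overlap = 5"
PETScOptions.set("pc_asm_type", "none")  # "restriction/interpolation type - NONE"
PETScOptions.set("sub_ksp_type", "preonly")
PETScOptions.set("sub_pc_type", "lu")    # direct LU solve on each of the 16 subdomain blocks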