<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">The two-processor case does not get far enough for -ksp_view to produce any output. The one-processor case gives the output below.<div>I also added -mg_levels_pc_factor_shift_type nonzero, but still get the same error.</div><div><br></div><div>Benjamin</div><div><br><div><br></div><div><div>KSP Object: 1 MPI processes</div><div> type: cg</div><div> maximum iterations=500</div><div> tolerances: relative=1e-10, absolute=1e-10, divergence=10000</div><div> left preconditioning</div><div> has attached null space</div><div> using nonzero initial guess</div><div> using PRECONDITIONED norm type for convergence test</div><div>PC Object: 1 MPI processes</div><div> type: ml</div><div> MG: type is MULTIPLICATIVE, levels=4 cycles=v</div><div> Cycles per PCApply=1</div><div> Using Galerkin computed coarse grid matrices</div><div> Coarse grid solver -- level -------------------------------</div><div> KSP Object: (mg_coarse_) 1 MPI processes</div><div> type: preonly</div><div> maximum iterations=1, initial guess is zero</div><div> tolerances: relative=1e-05, absolute=1e-50, divergence=10000</div><div> left preconditioning</div><div> using NONE norm type for convergence test</div><div> PC Object: (mg_coarse_) 1 MPI processes</div><div> type: lu</div><div> LU: out-of-place factorization</div><div> tolerance for zero pivot 2.22045e-14</div><div><b> using diagonal shift to prevent zero pivot</b></div><div> matrix ordering: nd</div><div> factor fill ratio given 5, needed 1</div><div> Factored matrix follows:</div><div> Matrix Object: 1 MPI processes</div><div> type: seqaij</div><div> rows=1, cols=1</div><div> package used to perform factorization: petsc</div><div> total: nonzeros=1, allocated nonzeros=1</div><div> total number of mallocs used during MatSetValues calls =0</div><div> not using I-node routines</div><div> linear system matrix = precond 
matrix:</div><div> Matrix Object: 1 MPI processes</div><div> type: seqaij</div><div> rows=1, cols=1</div><div> total: nonzeros=1, allocated nonzeros=1</div><div> total number of mallocs used during MatSetValues calls =0</div><div> not using I-node routines</div><div> Down solver (pre-smoother) on level 1 -------------------------------</div><div> KSP Object: (mg_levels_1_) 1 MPI processes</div><div> type: richardson</div><div> Richardson: damping factor=1</div><div> maximum iterations=2</div><div> tolerances: relative=1e-05, absolute=1e-50, divergence=10000</div><div> left preconditioning</div><div> using nonzero initial guess</div><div> using NONE norm type for convergence test</div><div> PC Object: (mg_levels_1_) 1 MPI processes</div><div> type: sor</div><div> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1</div><div> linear system matrix = precond matrix:</div><div> Matrix Object: 1 MPI processes</div><div> type: seqaij</div><div> rows=6, cols=6</div><div> total: nonzeros=36, allocated nonzeros=36</div><div> total number of mallocs used during MatSetValues calls =0</div><div> using I-node routines: found 2 nodes, limit used is 5</div><div> Up solver (post-smoother) same as down solver (pre-smoother)</div><div> Down solver (pre-smoother) on level 2 -------------------------------</div><div> KSP Object: (mg_levels_2_) 1 MPI processes</div><div> type: richardson</div><div> Richardson: damping factor=1</div><div> maximum iterations=2</div><div> tolerances: relative=1e-05, absolute=1e-50, divergence=10000</div><div> left preconditioning</div><div> using nonzero initial guess</div><div> using NONE norm type for convergence test</div><div> PC Object: (mg_levels_2_) 1 MPI processes</div><div> type: sor</div><div> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1</div><div> linear system matrix = precond matrix:</div><div> Matrix Object: 1 MPI processes</div><div> type: seqaij</div><div> rows=130, 
cols=130</div><div> total: nonzeros=2704, allocated nonzeros=2704</div><div> total number of mallocs used during MatSetValues calls =0</div><div> not using I-node routines</div><div> Up solver (post-smoother) same as down solver (pre-smoother)</div><div> Down solver (pre-smoother) on level 3 -------------------------------</div><div> KSP Object: (mg_levels_3_) 1 MPI processes</div><div> type: richardson</div><div> Richardson: damping factor=1</div><div> maximum iterations=2</div><div> tolerances: relative=1e-05, absolute=1e-50, divergence=10000</div><div> left preconditioning</div><div> using nonzero initial guess</div><div> using NONE norm type for convergence test</div><div> PC Object: (mg_levels_3_) 1 MPI processes</div><div> type: sor</div><div> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1</div><div> linear system matrix = precond matrix:</div><div> Matrix Object: 1 MPI processes</div><div> type: seqaij</div><div> rows=1000, cols=1000</div><div> total: nonzeros=6400, allocated nonzeros=6400</div><div> total number of mallocs used during MatSetValues calls =0</div><div> not using I-node routines</div><div> Up solver (post-smoother) same as down solver (pre-smoother)</div><div> linear system matrix = precond matrix:</div><div> Matrix Object: 1 MPI processes</div><div> type: seqaij</div><div> rows=1000, cols=1000</div><div> total: nonzeros=6400, allocated nonzeros=6400</div><div> total number of mallocs used during MatSetValues calls =0</div><div> not using I-node routines</div><div><br></div><div><br></div><div><div>On 17 Jul 2012, at 12:58, Matthew Knepley wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite">On Tue, Jul 17, 2012 at 3:23 AM, Benjamin Sanderse <span dir="ltr"><<a href="mailto:B.Sanderse@cwi.nl" target="_blank">B.Sanderse@cwi.nl</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc 
solid;padding-left:1ex">
Hello all,<br>
<br>
I am trying to use ML to solve a Poisson equation with Neumann BC (singular matrix) and found the thread below to avoid a zero pivot error. I used the option that Barry suggests, -mg_coarse_pc_factor_shift_type nonzero, which works, but only when I run on a single processor. For two or more processors I still get a zero pivot error. Are there more options to be set for the parallel case?<br>
</blockquote><div><br></div><div>Should still work. Can you check that the prefix for the coarse solver is correct with -ksp_view?</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Benjamin<br>
<br>
<br>
[1]PETSC ERROR: --------------------- Error Message ------------------------------------<br>
[1]PETSC ERROR: Detected zero pivot in LU factorization:<br>
see <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot</a>!<br>
[1]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14!<br>
[1]PETSC ERROR: ------------------------------------------------------------------------<br>
[1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012<br>
[1]PETSC ERROR: See docs/changes/index.html for recent updates.<br>
[1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message ------------------------------------<br>
[0]PETSC ERROR: Detected zero pivot in LU factorization:<br>
see <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot</a>!<br>
[0]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14!<br>
[0]PETSC ERROR: ------------------------------------------------------------------------<br>
[0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012<br>
[0]PETSC ERROR: See docs/changes/index.html for recent updates.<br>
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.<br>
[0]PETSC ERROR: See docs/index.html for manual pages.<br>
[0]PETSC ERROR: ------------------------------------------------------------------------<br>
[0]PETSC ERROR: bin/navier-stokes on a linux-gnu named <a href="http://gb-r3n32.irc.sara.nl/" target="_blank">gb-r3n32.irc.sara.nl</a> by sanderse Tue Jul 17 10:04:05 2012<br>
[0]PETSC ERROR: Libraries linked from /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib<br>
[0]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012<br>
[0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara/sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 --download-ml<br>
[0]PETSC ERROR: ------------------------------------------------------------------------<br>
[0]PETSC ERROR: MatPivotCheck_none() line 583 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h<br>
[0]PETSC ERROR: MatPivotCheck() line 602 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h<br>
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.<br>
[1]PETSC ERROR: See docs/index.html for manual pages.<br>
[1]PETSC ERROR: ------------------------------------------------------------------------<br>
[1]PETSC ERROR: bin/navier-stokes on a linux-gnu named <a href="http://gb-r3n32.irc.sara.nl/" target="_blank">gb-r3n32.irc.sara.nl</a> by sanderse Tue Jul 17 10:04:05 2012<br>
[1]PETSC ERROR: Libraries linked from /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib<br>
[1]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012<br>
[1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara/sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 --download-ml<br>
[1]PETSC ERROR: ------------------------------------------------------------------------<br>
[1]PETSC ERROR: MatPivotCheck_none() line 583 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h<br>
MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c<br>
[0]PETSC ERROR: MatLUFactorNumeric() line 2790 in /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c<br>
[0]PETSC ERROR: PCSetUp_LU() line 160 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c<br>
[0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c<br>
[0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c<br>
[0]PETSC ERROR: PCSetUp_Redundant() line 176 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c<br>
[0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c<br>
[0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c<br>
[0]PETSC ERROR: PCSetUp_MG() line 729 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c<br>
[0]PETSC ERROR: [1]PETSC ERROR: MatPivotCheck() line 602 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h<br>
[1]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c<br>
[1]PETSC ERROR: MatLUFactorNumeric() line 2790 in /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c<br>
[1]PETSC ERROR: PCSetUp_LU() line 160 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c<br>
[1]PETSC ERROR: PCSetUp_ML() line 820 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c<br>
[0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c<br>
[0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c<br>
PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c<br>
[1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c<br>
[1]PETSC ERROR: PCSetUp_Redundant() line 176 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c<br>
[1]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c<br>
[1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c<br>
[1]PETSC ERROR: PCSetUp_MG() line 729 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c<br>
[1]PETSC ERROR: PCSetUp_ML() line 820 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c<br>
[1]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c<br>
[1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c<br>
<br>
<br>
On 3 Jun 2011, at 14:26, Barry Smith wrote:<br>
<br>
><br>
> It is the direct solver on the coarse grid that is finding the zero pivot (since the coarse grid problem, like all the levels, has a null space).<br>
><br>
> You can use the option -mg_coarse_pc_factor_shift_type nonzero (in petsc-3.1 or petsc-dev). Also keep the KSPSetNullSpace() call you are using.<br>
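A toy illustration of why the shift Barry suggests helps (pure Python, not PETSc; the 3x3 matrix is a hypothetical 1-D Laplacian with Neumann BCs, chosen only for demonstration): every row sums to zero, so the constant vector is in the null space, plain LU produces an exactly zero pivot on the last step, and a small diagonal shift, which is roughly what -mg_coarse_pc_factor_shift_type nonzero applies to the coarse factorization, makes every pivot nonzero.

```python
def lu_pivots(A):
    """Pivots produced by Gaussian elimination without row pivoting."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    pivots = []
    for k in range(n):
        pivots.append(A[k][k])
        if A[k][k] == 0.0:
            break  # zero pivot: the factorization fails here
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return pivots

# 1-D Laplacian with Neumann BCs: rows sum to zero -> singular matrix.
A = [[ 1.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  1.0]]

print(lu_pivots(A))          # last pivot is exactly zero

shift = 1e-8                 # small diagonal shift, as in the shifted LU
A_shifted = [[A[i][j] + (shift if i == j else 0.0) for j in range(3)]
             for i in range(3)]
print(lu_pivots(A_shifted))  # all three pivots are now nonzero
```

The last pivot of the shifted matrix is tiny (of order the shift), which is why the shifted coarse solve is only accurate up to the null-space component; that component is what the attached null space removes.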
><br>
><br>
> Barry<br>
><br>
> On Jun 3, 2011, at 3:54 AM, Stijn A. M. Vantieghem wrote:<br>
><br>
>> Dear all,<br>
>><br>
>> I am using PETSc (Fortran interface) to solve a Poisson equation with Neumann boundary conditions. Up till now, I managed to do this with Hypre's BoomerAMG. Now, I am investigating whether I can improve the performance of my code by using ML. However, at execution I receive a zero pivot error; I tried to remove the (constant) null space with KSPSetNullSpace, but this didn't solve my problem. Do you have an idea of what I'm doing wrong? Thanks.<br>
>><br>
>> The relevant portions of my code are as follows:<br>
>><br>
>> !****************************************************<br>
>> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)<br>
>> call KSPSetOperators(ksp,M,M,DIFFERENT_NONZERO_PATTERN,ierr)<br>
>><br>
>> call MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,sp,ierr)<br>
>> call KSPSetNullSpace(ksp,sp,ierr)<br>
>><br>
>> call KSPGetPC(ksp,pc,ierr)<br>
>> call PCSetType(pc,PCML,ierr)<br>
>><br>
>> call KSPSetFromOptions(ksp,ierr)<br>
>> ...<br>
>> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr)<br>
>> call KSPSolve(ksp,petsc_rhs,petsc_pressure,ierr)<br>
>> !****************************************************<br>
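As a toy check of what attaching the null space buys in the setup above (pure Python, not PETSc; the 3x3 Neumann Laplacian is a hypothetical stand-in for the matrix M): A x always sums to zero for any x, so the singular system is consistent only when the right-hand side has zero mean, and removing the constant component of b, which is what the attached null space lets the Krylov solver do, restores consistency.

```python
# 1-D Laplacian with Neumann BCs: both rows and columns sum to zero.
A = [[ 1.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  1.0]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x = [0.3, -1.7, 2.5]           # arbitrary vector
print(sum(matvec(A, x)))       # zero up to round-off: A x _|_ constants

b = [1.0, 2.0, 0.0]            # inconsistent rhs (nonzero mean)
mean = sum(b) / len(b)
b_proj = [bi - mean for bi in b]  # project out the constant null space
print(sum(b_proj))             # zero: the projected rhs is consistent
```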
>><br>
>> and the error message is:<br>
>><br>
>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------<br>
>> [0]PETSC ERROR: Detected zero pivot in LU factorization<br>
>> see <a href="http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot" target="_blank">http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot</a>!<br>
>> [0]PETSC ERROR: Zero pivot row 0 value 3.57045e-20 tolerance 1e-12!<br>
>> [0]PETSC ERROR: ------------------------------------------------------------------------<br>
>> ...<br>
>> [0]PETSC ERROR: ------------------------------------------------------------------------<br>
>> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ() line 574 in src/mat/impls/aij/seq/aijfact.c<br>
>> [0]PETSC ERROR: MatLUFactorNumeric() line 2587 in src/mat/interface/matrix.c<br>
>> [0]PETSC ERROR: PCSetUp_LU() line 158 in src/ksp/pc/impls/factor/lu/lu.c<br>
>> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c<br>
>> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c<br>
>> [0]PETSC ERROR: PCSetUp_MG() line 602 in src/ksp/pc/impls/mg/mg.c<br>
>> [0]PETSC ERROR: PCSetUp_ML() line 668 in src/ksp/pc/impls/ml/ml.c<br>
>> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c<br>
>> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c<br>
>> [0]PETSC ERROR: KSPSolve() line 353 in src/ksp/ksp/interface/itfunc.c<br>
>><br>
>> Regards<br>
>> Stijn<br>
>><br>
>> Stijn A.M. Vantieghem<br>
>> Earth and Planetary Magnetism<br>
>> Institute for Geophysics<br>
>> ETH Zürich<br>
>> Sonneggstrasse 5 - CH 8092 Zürich<br>
>> tel: +41 44 632 39 90<br>
>> e-mail: <a href="mailto:stijn.vantieghem@erdw.ethz.ch">stijn.vantieghem@erdw.ethz.ch</a><br>
>><br>
>><br>
>><br>
>><br>
><br>
<br>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>
</blockquote></div><br><div>
<div><div>-- </div><div>Ir. B. Sanderse</div><div> </div><div>Centrum Wiskunde en Informatica</div><div>Science Park 123</div><div>1098 XG Amsterdam</div><div><br></div><div>t: +31 20 592 4161</div><div>e: <a href="mailto:sanderse@cwi.nl">sanderse@cwi.nl</a></div></div>
</div>
<br></div></div></body></html>