Thank you very much Barry!

I hand-crafted this for learning the PETSc APIs. I thought I could get the solution, since the solution exists mathematically.

The output of -ksp_view shows that it uses "ILU: out-of-place factorization", I think:

KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: ilu
    ILU: out-of-place factorization
    0 levels of fill
    tolerance for zero pivot 2.22045e-14
    matrix ordering: natural
    factor fill ratio given 1., needed 1.

Will test with real matrices when I learn enough PETSc.
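One sanity check I plan to add for each computed column x: verify that A*x reproduces the corresponding right-hand side b. A minimal sketch (the residual vector r is new here; everything else is from the code below):

Vec       r;
PetscReal nrm;
VecDuplicate ( b, &r );        // residual workspace with the same layout as b
MatMult ( A, x, r );           // r = A*x
VecAXPY ( r, -1.0, b );        // r = A*x - b
VecNorm ( r, NORM_2, &nrm );   // should be ~0 if the column solve succeeded
PetscPrintf ( PETSC_COMM_WORLD, "column residual norm = %g\n", (double) nrm );
VecDestroy ( &r );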
Thanks!
lx

On Sunday, 19 February 2017, 1:07, Barry Smith <bsmith@mcs.anl.gov> wrote:

Because of the zeros on the diagonal, the sparse (I)LU factorization failed (sparse (I)LU factorizations generally don't use partial pivoting and hence can fail on nonsingular matrices, unlike dense factorizations). Did you run with -pc_type lu, or the defaults? It is always useful to run with -ksp_view to see what solver options are being used.

Where did this matrix come from? Usually PETSc users have a specific class of problems they are solving and so don't hit upon these matrices.

SuperLU (not superlu_dist) does do some partial pivoting and so will usually factor such matrices. You can use ./configure --download-superlu and then run the example with -pc_type lu -pc_factor_mat_solver_package superlu. But for large matrices SuperLU is pretty slow.

Barry
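(For reference: the command-line options above correspond roughly to the following calls; a minimal sketch assuming the PETSc 3.x API, in which PCFactorSetMatSolverPackage selects the external factorization package:)

PC pc;
KSPGetPC ( solver, &pc );                              // "solver" is the KSP object from the code below
PCSetType ( pc, PCLU );                                // full LU instead of the default ILU
PCFactorSetMatSolverPackage ( pc, MATSOLVERSUPERLU );  // requires ./configure --download-superlu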
> On Feb 18, 2017, at 5:06 AM, lixin chu <lixin_chu@yahoo.com> wrote:
> 
> Hello,
> 
> trying my first PETSc program; some matrices are OK, but this one shows an error:
> 
> Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0
>                PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
> 
> Not sure what I can try (I tried CG and GMRES, both no good)?
> 
> /*
>  * Matrix A
>  *  1  0  0  1  1  1
>  *  0  1  1  1  0  1
>  *  1  0  0  0  1  1
>  *  0  0 -1  0  1  1
>  *  1  1  0  1  1  1
>  *  0  1  1  1  0  0
>  *
>  * Expected inverse result
>  *  0  0  0 -1  1 -1
>  * -1  0  0  0  1  0
>  *  0  0  1  0 -1  1
>  *  1  0 -1  0  0  0
>  *  0 -1  1  1 -1  2
>  *  0  1  0  0  0 -1
>  */
> 
> Here is my code:
> 
> #include <petscksp.h>
> 
> Vec x, b;
> Mat A;
> KSP solver;
> PC  pc;
> 
> PetscMPIInt size;
> 
> int       mpi_rank;
> PetscBool is_mpi_root = PETSC_FALSE;
> 
> /*
>  * initialize PETSc and MPI
>  */
> PetscInitialize ( &argc, &argv, (char*)0, USAGE );
> 
> int col_index [6][6] = {
>     {0, 1, 2, 3, 4, 5},
>     {0, 1, 2, 3, 4, 5},
>     {0, 1, 2, 3, 4, 5},
>     {0, 1, 2, 3, 4, 5},
>     {0, 1, 2, 3, 4, 5},
>     {0, 1, 2, 3, 4, 5}
> };
> 
> double v [6][6] = {
>     {1., 0., 0., 1., 1., 1.},
>     {0., 1., 1., 1., 0., 1.},
>     {1., 0., 0., 0., 1., 1.},
>     {0., 0., -1., 0., 1., 1.},
>     {1., 1., 0., 1., 1., 1.},
>     {0., 1., 1., 1., 0., 0.}
> };
> 
> int nz_rows [] = {6, 6, 6, 6, 6, 6};
> 
> int N = 6;
> int row, col;
> double value = 1.;
> 
> int b_start, b_end;
> 
> // create x
> VecCreate ( PETSC_COMM_WORLD, &x );
> VecSetSizes ( x, PETSC_DECIDE, N );
> VecSetType ( x, VECMPI );
> 
> // create b
> VecCreate ( PETSC_COMM_WORLD, &b );
> VecSetSizes ( b, PETSC_DECIDE, N );
> VecSetType ( b, VECMPI );
> 
> // create A
> MatCreate ( PETSC_COMM_WORLD, &A );
> MatSetSizes ( A, PETSC_DECIDE, PETSC_DECIDE, N, N );
> MatSetUp ( A );
> 
> for ( row = 0; row < N; row++ ) {
>     MatSetValues ( A, 1, &row, nz_rows[row], col_index[row], v[row], INSERT_VALUES );
> }
> MatAssemblyBegin ( A, MAT_FINAL_ASSEMBLY );
> MatAssemblyEnd ( A, MAT_FINAL_ASSEMBLY );
> 
> MatView ( A, PETSC_VIEWER_STDOUT_WORLD );
> 
> /*
>  * set up KSP
>  */
> KSPCreate ( PETSC_COMM_WORLD, &solver );
> KSPSetOperators ( solver, A, A );
> // KSPSetType ( solver, KSPCG );
> KSPSetType ( solver, KSPGMRES );
> 
> // solve A x = e_col for each column, building the inverse column by column
> for ( col = 0; col < N; col++ ) {
> 
>     // each process owns a block of b : [b_start, b_end); index starts from 0
>     VecGetOwnershipRange ( b, &b_start, &b_end );
> 
>     // clear all values, then set 1 in the owning process
>     VecSet ( b, 0 );
>     if ( col >= b_start && col < b_end )
>         VecSetValues ( b, 1, &col, &value, INSERT_VALUES );
> 
>     // distribute b
>     VecAssemblyBegin ( b );
>     VecAssemblyEnd ( b );
> 
>     // must be called from all processes
>     // VecView ( b, PETSC_VIEWER_STDOUT_WORLD );
> 
>     KSPSolve ( solver, b, x );
> 
>     VecView ( x, PETSC_VIEWER_STDOUT_WORLD );
> }
> 
> /*
>  * destroy KSP and the vectors/matrix
>  */
> VecDestroy ( &b );
> VecDestroy ( &x );
> MatDestroy ( &A );
> KSPDestroy ( &solver );
> 
> /*
>  * finalize PETSc
>  */
> PetscFinalize ();
> 
> return 0;
> 
> I am running this with one MPI process, and the output seems to indicate that the A matrix is set correctly:
> 
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 1.) (1, 0.) (2, 0.) (3, 1.) (4, 1.) (5, 1.)
> row 1: (0, 0.) (1, 1.) (2, 1.) (3, 1.) (4, 0.) (5, 1.)
> row 2: (0, 1.) (1, 0.) (2, 0.) (3, 0.) (4, 1.) (5, 1.)
> row 3: (0, 0.) (1, 0.) (2, -1.) (3, 0.) (4, 1.) (5, 1.)
> row 4: (0, 1.) (1, 1.) (2, 0.) (3, 1.) (4, 1.) (5, 1.)
> row 5: (0, 0.) (1, 1.) (2, 1.) (3, 1.) (4, 0.) (5, 0.)
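To catch failures like the DIVERGED_PCSETUP_FAILED above directly in the program, the solve can be followed by a status check; a minimal sketch:

KSPConvergedReason reason;
KSPSolve ( solver, b, x );
KSPGetConvergedReason ( solver, &reason );
if ( reason < 0 )   // negative reasons indicate divergence or a setup failure
    PetscPrintf ( PETSC_COMM_WORLD, "solve failed, reason = %d\n", (int) reason );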
> 
> Thanks for any help!
> 
> rgds
> LX