[petsc-users] [petsc-maint] Iterative Solver Problem

Foad Hassaninejadfarahani umhassa5 at cc.umanitoba.ca
Mon Apr 28 14:42:35 CDT 2014


Hello;

I added all of those options. It does not work with
-ksp_monitor_singular_value; here is the output:

  0 KSP preconditioned resid norm 2.622210477042e+04 true resid norm 1.860478790525e+07 ||r(i)||/||b|| 1.000000000000e+00
  0 KSP Residual norm 2.622210477042e+04 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00
  1 KSP preconditioned resid norm 5.998205227155e+03 true resid norm 4.223014979562e+06 ||r(i)||/||b|| 2.269853868300e-01
  1 KSP Residual norm 5.998205227155e+03 % max 8.773486916458e-01 min 8.773486916458e-01 max/min 1.000000000000e+00
  2 KSP preconditioned resid norm 1.879862239084e+03 true resid norm 3.444600162270e+06 ||r(i)||/||b|| 1.851458979169e-01
  2 KSP Residual norm 1.879862239084e+03 % max 1.229948994481e+00 min 7.933593275578e-01 max/min 1.550305078365e+00
  3 KSP preconditioned resid norm 8.529038157181e+02 true resid norm 1.311707893098e+06 ||r(i)||/||b|| 7.050378105779e-02
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind
[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[3]PETSC ERROR: to get more information on the crash.
[3]PETSC ERROR: --------------------- Error Message ------------------------------------
[3]PETSC ERROR: Signal received!
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011
[3]PETSC ERROR: See docs/changes/index.html for recent updates.
[3]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[3]PETSC ERROR: See docs/index.html for manual pages.
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: /home/u14/umhassa5/mecfd/gas-code/TLEC2CCP/ctf/bin/Linux_p/ctf_Linux on a arch-linu named mecfd02 by umhassa5 Mon Apr 28 14:39:08 2014
[3]PETSC ERROR: Libraries linked from /home/mecfd/common/sw/petsc-3.2-p5-pgi/arch-linux2-c-opt/lib
[3]PETSC ERROR: Configure run at Sat Dec 31 07:53:05 2011
[3]PETSC ERROR: Configure options --with-mpi-dir=/home/mecfd/common/openmpi-p --PETSC_DIR=/home/mecfd/common/sw/petsc-3.2-p5-pgi --with-debugging=0 --with-shared-libraries=1 --download-f-blas-lapack=1 --download-superlu_dist=yes --download-parmetis=yes --download-mumps=yes --download-scalapack=yes --download-spooles=yes --download-blacs=yes --download-hypre=yes
[3]PETSC ERROR: ------------------------------------------------------------------------
[The remaining ranks (0, 1, 2, 4, 5, 6 and 7) print the same FPE traceback as rank 3, interleaved with one another, each ending with "User provided function() line 0 in unknown directory unknown file".]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpiexec has exited due to process rank 3 with PID 19683 on
node mecfd02 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
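
Should I rerun the same case with -fp_trap added, for example

mpiexec -n 8 ./ctf_Linux -ksp_max_it 500 -ksp_gmres_restart 500 -ksp_monitor_true_residual -ksp_monitor_singular_value -fp_trap

(I am guessing at the exact invocation here, and assuming -fp_trap is available in our 3.2 build), or try -start_in_debugger / -on_error_attach_debugger as the traceback suggests, or relink against a --with-debugging=yes build of PETSc so the stack trace shows where the floating-point exception occurs?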


-- 
With Best Regards;
Foad


Quoting Barry Smith <bsmith at mcs.anl.gov>:

>
>   Please run with the additional options -ksp_max_it 500  
> -ksp_gmres_restart 500 -ksp_monitor_true_residual  
> -ksp_monitor_singular_value and send back all the output (that would  
> include the 500 residual norms as it tries to converge.)
>
>   Barry
>
> On Apr 28, 2014, at 1:21 PM, Foad Hassaninejadfarahani  
> <umhassa5 at cc.umanitoba.ca> wrote:
>
>> Hello Again;
>>
>> I used -ksp_rtol 1.e-12; it took far longer to get the result for one
>> iteration, and it still did not converge:
>>
>> Linear solve did not converge due to DIVERGED_ITS iterations 10000
>> KSP Object: 8 MPI processes
>>  type: gmres
>>    GMRES: restart=300, using Classical (unmodified) Gram-Schmidt  
>> Orthogonalization with no iterative refinement
>>    GMRES: happy breakdown tolerance 1e-30
>>  maximum iterations=10000, initial guess is zero
>>  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000
>>  left preconditioning
>>  using PRECONDITIONED norm type for convergence test
>> PC Object: 8 MPI processes
>>  type: asm
>>    Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
>>    Additive Schwarz: restriction/interpolation type - RESTRICT
>>    Local solve is same for all blocks, in the following KSP and PC objects:
>>  KSP Object:  (sub_)   1 MPI processes
>>    type: preonly
>>    maximum iterations=10000, initial guess is zero
>>    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>    left preconditioning
>>    using NONE norm type for convergence test
>>  PC Object:  (sub_)   1 MPI processes
>>    type: lu
>>      LU: out-of-place factorization
>>      tolerance for zero pivot 1e-12
>>      matrix ordering: nd
>>      factor fill ratio given 5, needed 3.70575
>>        Factored matrix follows:
>>          Matrix Object:           1 MPI processes
>>            type: seqaij
>>            rows=5630, cols=5630
>>            package used to perform factorization: petsc
>>            total: nonzeros=877150, allocated nonzeros=877150
>>            total number of mallocs used during MatSetValues calls =0
>>              using I-node routines: found 1126 nodes, limit used is 5
>>    linear system matrix = precond matrix:
>>    Matrix Object:     1 MPI processes
>>      type: seqaij
>>      rows=5630, cols=5630
>>      total: nonzeros=236700, allocated nonzeros=236700
>>      total number of mallocs used during MatSetValues calls =0
>>        using I-node routines: found 1126 nodes, limit used is 5
>>  linear system matrix = precond matrix:
>>  Matrix Object:   8 MPI processes
>>    type: mpiaij
>>    rows=41000, cols=41000
>>    total: nonzeros=1817800, allocated nonzeros=2555700
>>    total number of mallocs used during MatSetValues calls =121180
>>      using I-node (on process 0) routines: found 1025 nodes, limit used is 5
>>
>>
>> Well, let me clarify everything. I am solving the whole system (air
>> and water) coupled at once. The original system is nonlinear, but I
>> linearized the equations, so I have some lagged terms. In addition,
>> the interface location (between the two phases) is wrong at the
>> beginning and must be corrected at each iteration after the solution
>> is obtained. Therefore, I solve the whole domain, move the interface,
>> and solve the whole domain again (roughly as in the sketch below).
>> This continues until the interface movement is on the order of 1E-12.
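
(To make that structure concrete, here is a schematic sketch of the outer loop written as PETSc-style C calls. AssembleSystem and MoveInterface are placeholders for my own routines, not PETSc functions, and this is not my actual code.)

/* Schematic outer loop: assemble the linearized coupled system, solve it,
   move the interface, and repeat until the interface movement is ~1e-12.
   AssembleSystem() and MoveInterface() are placeholders. */
PetscReal movement = 1.0;
while (movement > 1.0e-12) {
  AssembleSystem(A, b);                               /* lagged/linearized coefficients   */
  KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);   /* petsc-3.2 calling sequence       */
  KSPSolve(ksp, b, x);                                /* u, v, p and two scalars, coupled */
  MoveInterface(x, &movement);                        /* returns max interface change     */
}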
>>
>> My problem comes after getting the converged solution. Restarting
>> from the converged solution, if I use SuperLU it gives me back the
>> converged solution and stops after one iteration. But if I use any
>> iterative solver, it does not give me back the converged solution and
>> starts moving the interface, because the wrong solution asks for a new
>> interface location. This leads to oscillation forever and, in some
>> cases, divergence.
>>
>> --
>> With Best Regards;
>> Foad
>>
>>
>> Quoting Barry Smith <bsmith at mcs.anl.gov>:
>>
>>>
>>> On Apr 28, 2014, at 12:59 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>
>>>>
>>>> First try a much tighter tolerance on the linear solver. Use  
>>>> -ksp_rtol 1.e-12
>>>>
>>>> I don't fully understand. Is the coupled system nonlinear? Are
>>>> you solving a nonlinear system, how are you doing that since you  
>>>> seem to be only solving a single linear system? Does the linear  
>>>> system involve all unknowns in the fluid and air?
>>>>
>>>> Barry
>>>>
>>>>
>>>>
>>>> On Apr 28, 2014, at 11:19 AM, Foad Hassaninejadfarahani  
>>>> <umhassa5 at cc.umanitoba.ca> wrote:
>>>>
>>>>> Hello PETSc team;
>>>>>
>>>>> The PETSc setup in my code is working now. I have issues with
>>>>> using an iterative solver instead of the direct solver.
>>>>>
>>>>> I am solving a 2D, two-phase flow. Two fluids (air and water)
>>>>> flow into a channel, and there is interaction between the two
>>>>> phases. I am solving for the velocities in the x and y directions,
>>>>> the pressure, and two scalars, all coupled together. I am looking
>>>>> for the steady-state solution. Since there is an interface between
>>>>> the phases that needs updating, many iterations are required to
>>>>> reach the steady-state solution. "A" is a nine-banded,
>>>>> non-symmetric matrix and each node has five unknowns. I am storing
>>>>> the non-zero coefficients and their locations in three separate
>>>>> vectors (see the sketch below).
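
(Schematically, those three vectors go into the parallel matrix as in the sketch below; row, col, val and nnz are placeholders for my arrays and their length, and the per-row preallocation numbers are only rough guesses for illustration.)

/* Insert COO-style triplets (row[], col[], val[]) into a parallel AIJ matrix.
   MatCreateMPIAIJ is the petsc-3.2 name; the d_nz/o_nz arguments are rough
   per-row preallocation guesses (a 9-point stencil with 5 unknowns per node
   gives up to about 45 nonzeros per row). */
Mat      A;
PetscInt k;
MatCreateMPIAIJ(PETSC_COMM_WORLD, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE,
                45, PETSC_NULL, 20, PETSC_NULL, &A);
for (k = 0; k < nnz; k++)
  MatSetValues(A, 1, &row[k], 1, &col[k], &val[k], INSERT_VALUES);
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);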
>>>>>
>>>>> I started using the direct solver. SuperLU works fine and gives
>>>>> me good results compared to previous work. However, it is neither
>>>>> cheap nor practical for fine grids. The iterative solvers, though,
>>>>> did not work, and here is what I did:
>>>>>
>>>>> I got the converged solution by using SuperLU. After that I
>>>>> restarted from the converged solution and did one iteration using
>>>>> -pc_type lu -pc_factor_mat_solver_package superlu_dist
>>>>> -log_summary. Again, it gave me back the same converged solution.
>>>>>
>>>>> After that I started from the converged solution once more, and
>>>>> this time I tried different combinations of iterative solvers and
>>>>> preconditioners, like the following:
>>>>> -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type  
>>>>> lu ksp_monitor_true_residual -ksp_converged_reason -ksp_view  
>>>>> -log_summary
>>>>>
>>>>> and here is the report:
>>>>> Linear solve converged due to CONVERGED_RTOL iterations 41
>>>>> KSP Object: 8 MPI processes
>>>>> type: gmres
>>>>>  GMRES: restart=300, using Classical (unmodified) Gram-Schmidt  
>>>>> Orthogonalization with no iterative refinement
>>>>>  GMRES: happy breakdown tolerance 1e-30
>>>>> maximum iterations=10000, initial guess is zero
>>>>> tolerances:  relative=1e-06, absolute=1e-50, divergence=10000
>>>>> left preconditioning
>>>>> using PRECONDITIONED norm type for convergence test
>>>>> PC Object: 8 MPI processes
>>>>> type: asm
>>>>>  Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
>>>>>  Additive Schwarz: restriction/interpolation type - RESTRICT
>>>>>  Local solve is same for all blocks, in the following KSP and PC objects:
>>>>> KSP Object:  (sub_)   1 MPI processes
>>>>>  type: preonly
>>>>>  maximum iterations=10000, initial guess is zero
>>>>>  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>>>>  left preconditioning
>>>>>  using NONE norm type for convergence test
>>>>> PC Object:  (sub_)   1 MPI processes
>>>>>  type: lu
>>>>>    LU: out-of-place factorization
>>>>>    tolerance for zero pivot 1e-12
>>>>>    matrix ordering: nd
>>>>>    factor fill ratio given 5, needed 3.70575
>>>>>      Factored matrix follows:
>>>>>        Matrix Object:           1 MPI processes
>>>>>          type: seqaij
>>>>>          rows=5630, cols=5630
>>>>>          package used to perform factorization: petsc
>>>>>          total: nonzeros=877150, allocated nonzeros=877150
>>>>>          total number of mallocs used during MatSetValues calls =0
>>>>>            using I-node routines: found 1126 nodes, limit used is 5
>>>>>  linear system matrix = precond matrix:
>>>>>  Matrix Object:     1 MPI processes
>>>>>    type: seqaij
>>>>>    rows=5630, cols=5630
>>>>>    total: nonzeros=236700, allocated nonzeros=236700
>>>>>    total number of mallocs used during MatSetValues calls =0
>>>>>      using I-node routines: found 1126 nodes, limit used is 5
>>>>> linear system matrix = precond matrix:
>>>>> Matrix Object:   8 MPI processes
>>>>>  type: mpiaij
>>>>>  rows=41000, cols=41000
>>>>>  total: nonzeros=1817800, allocated nonzeros=2555700
>>>>>  total number of mallocs used during MatSetValues calls =121180
>>>>>    using I-node (on process 0) routines: found 1025 nodes, limit  
>>>>> used is 5
>>>>>
>>>>> But the results are far from the converged solution. For example,
>>>>> the pressure at two reference nodes is compared:
>>>>>
>>>>> Based on SuperLU
>>>>> Channel Inlet pressure (MIXTURE):      0.38890D-01
>>>>> Channel Inlet pressure (LIQUID):       0.38416D-01
>>>>>
>>>>> Based on GMRES
>>>>> Channel Inlet pressure (MIXTURE):     -0.87214D+00
>>>>> Channel Inlet pressure (LIQUID):      -0.87301D+00
>>>>>
>>>>>
>>>>> I also tried this:
>>>>> -ksp_type gcr -pc_type asm -ksp_diagonal_scale  
>>>>> -ksp_diagonal_scale_fix -ksp_monitor_true_residual  
>>>>> -ksp_converged_reason -ksp_view -log_summary
>>>>>
>>>>> and here is the report:
>>>>> 0 KSP unpreconditioned resid norm 2.248340888101e+05 true resid  
>>>>> norm 2.248340888101e+05 ||r(i)||/||b|| 1.000000000000e+00
>>>>> 1 KSP unpreconditioned resid norm 4.900010460179e+04 true resid  
>>>>> norm 4.900010460179e+04 ||r(i)||/||b|| 2.179389471637e-01
>>>>> 2 KSP unpreconditioned resid norm 4.267761572746e+04 true resid  
>>>>> norm 4.267761572746e+04 ||r(i)||/||b|| 1.898182608933e-01
>>>>> 3 KSP unpreconditioned resid norm 2.041242251471e+03 true resid  
>>>>> norm 2.041242251471e+03 ||r(i)||/||b|| 9.078882398457e-03
>>>>> 4 KSP unpreconditioned resid norm 1.852885420564e+03 true resid  
>>>>> norm 1.852885420564e+03 ||r(i)||/||b|| 8.241123178296e-03
>>>>> 5 KSP unpreconditioned resid norm 1.748965594395e+02 true resid  
>>>>> norm 1.748965594395e+02 ||r(i)||/||b|| 7.778916460804e-04
>>>>> 6 KSP unpreconditioned resid norm 5.664539353996e+01 true resid  
>>>>> norm 5.664539353996e+01 ||r(i)||/||b|| 2.519430831852e-04
>>>>> 7 KSP unpreconditioned resid norm 3.607535692806e+01 true resid  
>>>>> norm 3.607535692806e+01 ||r(i)||/||b|| 1.604532351788e-04
>>>>> 8 KSP unpreconditioned resid norm 1.041501303366e+01 true resid  
>>>>> norm 1.041501303366e+01 ||r(i)||/||b|| 4.632310468924e-05
>>>>> 9 KSP unpreconditioned resid norm 3.089920380322e+00 true resid  
>>>>> norm 3.089920380322e+00 ||r(i)||/||b|| 1.374311340720e-05
>>>>> 10 KSP unpreconditioned resid norm 1.456883209806e+00 true resid  
>>>>> norm 1.456883209806e+00 ||r(i)||/||b|| 6.479814593583e-06
>>>>> 11 KSP unpreconditioned resid norm 5.566902714391e-01 true resid  
>>>>> norm 5.566902714391e-01 ||r(i)||/||b|| 2.476004748147e-06
>>>>> 12 KSP unpreconditioned resid norm 2.403913756663e-01 true resid  
>>>>> norm 2.403913756663e-01 ||r(i)||/||b|| 1.069194520006e-06
>>>>> 13 KSP unpreconditioned resid norm 1.650435118839e-01 true resid  
>>>>> norm 1.650435118839e-01 ||r(i)||/||b|| 7.340680088032e-07
>>>>> Linear solve converged due to CONVERGED_RTOL iterations 13
>>>>> KSP Object: 8 MPI processes
>>>>> type: gcr
>>>>>  GCR: restart = 30
>>>>>  GCR: restarts performed = 1
>>>>> maximum iterations=10000, initial guess is zero
>>>>> tolerances:  relative=1e-06, absolute=1e-50, divergence=10000
>>>>> right preconditioning
>>>>> diagonally scaled system
>>>>> using UNPRECONDITIONED norm type for convergence test
>>>>> PC Object: 8 MPI processes
>>>>> type: asm
>>>>>  Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
>>>>>  Additive Schwarz: restriction/interpolation type - RESTRICT
>>>>>  Local solve is same for all blocks, in the following KSP and PC objects:
>>>>> KSP Object:  (sub_)   1 MPI processes
>>>>>  type: preonly
>>>>>  maximum iterations=10000, initial guess is zero
>>>>>  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>>>>>  left preconditioning
>>>>>  using NONE norm type for convergence test
>>>>> PC Object:  (sub_)   1 MPI processes
>>>>>  type: ilu
>>>>>    ILU: out-of-place factorization
>>>>>    0 levels of fill
>>>>>    tolerance for zero pivot 1e-12
>>>>>    using diagonal shift to prevent zero pivot
>>>>>    matrix ordering: natural
>>>>>    factor fill ratio given 1, needed 1
>>>>>      Factored matrix follows:
>>>>>        Matrix Object:           1 MPI processes
>>>>>          type: seqaij
>>>>>          rows=5630, cols=5630
>>>>>          package used to perform factorization: petsc
>>>>>          total: nonzeros=236700, allocated nonzeros=236700
>>>>>          total number of mallocs used during MatSetValues calls =0
>>>>>            using I-node routines: found 1126 nodes, limit used is 5
>>>>>  linear system matrix = precond matrix:
>>>>>  Matrix Object:     1 MPI processes
>>>>>    type: seqaij
>>>>>    rows=5630, cols=5630
>>>>>    total: nonzeros=236700, allocated nonzeros=236700
>>>>>    total number of mallocs used during MatSetValues calls =0
>>>>>      using I-node routines: found 1126 nodes, limit used is 5
>>>>> linear system matrix = precond matrix:
>>>>> Matrix Object:   8 MPI processes
>>>>>  type: mpiaij
>>>>>  rows=41000, cols=41000
>>>>>  total: nonzeros=1817800, allocated nonzeros=2555700
>>>>>  total number of mallocs used during MatSetValues calls =121180
>>>>>    using I-node (on process 0) routines: found 1025 nodes, limit  
>>>>> used is 5
>>>>>
>>>>> Channel Inlet pressure (MIXTURE):      -0.90733D+00
>>>>> Channel Inlet pressure (LIQUID):      -0.10118D+01
>>>>>
>>>>>
>>>>> As you can see, these results are completely different and not
>>>>> close to the converged solution.
>>>>>
>>>>> Since I want to use fine grids, I need an iterative solver. I
>>>>> wonder if I am missing something or using the wrong
>>>>> solver/preconditioner/options. I would appreciate it if you could
>>>>> help me (as always).
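
(For completeness: everything above is set purely from the command line. My understanding is that the programmatic equivalent of the gmres/asm case would look roughly like the sketch below, with the subdomain LU still selected through the options database by -sub_pc_type lu; this is only a sketch, not code I have run.)

/* Rough code equivalent of
   -ksp_type gmres -ksp_gmres_restart 300 -pc_type asm -sub_pc_type lu */
PC pc;
KSPSetType(ksp, KSPGMRES);
KSPGMRESSetRestart(ksp, 300);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCASM);
KSPSetTolerances(ksp, 1.0e-6, 1.0e-50, 1.0e4, 10000);  /* rtol, atol, dtol, maxits */
KSPSetFromOptions(ksp);  /* so -sub_pc_type lu and the monitor options still apply */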
>>>>>
>>>>> --
>>>>> With Best Regards;
>>>>> Foad
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>>
>
>
>



