[petsc-users] Questions on dense and sparse matrix block

Danyang Su danyang.su at gmail.com
Wed Jan 22 19:15:14 CST 2014


On 22/01/2014 4:32 PM, Barry Smith wrote:
>    What ODE integrator are you using? Your own? One in PETSc?
I use my own integrator, and PETSc is used as the KSP solver. I may need
to test more cases, especially 2D cases, to see the behavior of the
dense and sparse formats.
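For context, each Newton step does a standalone KSP solve on the current
matrix. A minimal sketch of the pattern (illustrative only, using the
current PETSc C API; my actual code differs, and A, b, x are assembled
elsewhere):

  #include <petscksp.h>

  /* One linear solve inside a user-written Newton/time-stepping loop. */
  PetscErrorCode SolveStep(Mat A, Vec b, Vec x)
  {
    KSP ksp;

    PetscFunctionBeginUser;
    PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetFromOptions(ksp)); /* honors -pc_type lu, -ksp_monitor_true_residual, ... */
    PetscCall(KSPSolve(ksp, b, x));
    PetscCall(KSPDestroy(&ksp));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

In practice the KSP would be created once and reused across steps; it is
recreated here only to keep the sketch self-contained.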
>
>     You could just use -pc_type lu and the sparse format, right? Or is that too slow?
>
>     Barry
>
>    Jed and Emil,  do we have any integrators that keep the time-step small due to slow convergence of Newton in TS?
>
>
> On Jan 22, 2014, at 5:05 PM, Danyang Su <danyang.su at gmail.com> wrote:
>
>> On 22/01/2014 1:42 PM, Barry Smith wrote:
>>>     Linear solvers based on ILU will behave very differently when extra zeros are kept in the matrix; they generally converge much faster because the ILU behaves more like an LU (especially in 1D). So first run on 1 process with the dense and sparse formats and -pc_type lu; do they behave differently? (They pretty much shouldn't.)
>> It does not behave differently with the option '-pc_type lu'.
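(For reference, a hypothetical sketch of the two assembly modes being
compared; this is illustrative, not my production code, and uses the
current PETSc C API. The dense format inserts the full block, explicit
zeros included, so ILU keeps those locations; the sparse format inserts
only the structural nonzeros.)

  #include <petscmat.h>

  /* Insert a 2x2 diagonal block at block row i, block column j, either
     as a full block with stored zeros ("dense") or entry by entry
     ("sparse"). a11 and a22 are placeholder values; a BAIJ matrix with
     block size 2 is assumed. */
  static PetscErrorCode InsertBlock(Mat A, PetscInt i, PetscInt j,
                                    PetscScalar a11, PetscScalar a22,
                                    PetscBool dense)
  {
    PetscFunctionBeginUser;
    if (dense) {
      PetscScalar blk[4] = {a11, 0.0, 0.0, a22}; /* zeros stored explicitly */
      PetscCall(MatSetValuesBlocked(A, 1, &i, 1, &j, blk, INSERT_VALUES));
    } else {
      PetscInt row = 2 * i, col = 2 * j;
      PetscCall(MatSetValues(A, 1, &row, 1, &col, &a11, INSERT_VALUES));
      row = 2 * i + 1; col = 2 * j + 1;
      PetscCall(MatSetValues(A, 1, &row, 1, &col, &a22, INSERT_VALUES));
    }
    PetscFunctionReturn(PETSC_SUCCESS);
  }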
>>>    Next run the sparse format version with -ksp_monitor_true_residual and see how the solver is converging compared to the dense format version.
>> The convergence is quite different:
>> ---Sparse format
>>   0 KSP preconditioned resid norm 5.242977405897e-004 true resid norm 3.927123490361e-006 ||r(i)||/||b|| 1.000000000000e+000
>>   1 KSP preconditioned resid norm 6.118580743238e-006 true resid norm 2.917502282301e-008 ||r(i)||/||b|| 7.429107562985e-003
>>   2 KSP preconditioned resid norm 9.271130393116e-007 true resid norm 9.649272009380e-009 ||r(i)||/||b|| 2.457083927476e-003
>>   3 KSP preconditioned resid norm 5.812641606714e-009 true resid norm 4.073757311146e-011 ||r(i)||/||b|| 1.037338734355e-005
>>   4 KSP preconditioned resid norm 1.992914862465e-010 true resid norm 1.261636843933e-012 ||r(i)||/||b|| 3.212623303112e-007
>>   5 KSP preconditioned resid norm 1.422122839379e-012 true resid norm 2.393761421284e-014 ||r(i)||/||b|| 6.095457469466e-009
>> ---Dense format
>>   0 KSP preconditioned resid norm 2.948306125658e+000 true resid norm 2.436662454678e-004 ||r(i)||/||b|| 1.000000000000e+000
>>   1 KSP preconditioned resid norm 9.798852520841e-015 true resid norm 1.267168968393e-018 ||r(i)||/||b|| 5.200428832315e-015
>>
>> Convergence seems difficult for the sparse format, and the number of outer Newton iterations is usually much larger than with the dense format.
>> As a result, the time step cannot be increased.
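One experiment suggested by the ILU-vs-LU observation above (I have not
tried it yet): give the sparse-format ILU more fill, e.g. ILU(k) with
k > 0, so it behaves closer to LU without storing the zero entries. A
minimal sketch; the option values are examples only, and these options
would normally be passed on the command line rather than set in code:

  PetscCall(PetscOptionsInsertString(NULL,
            "-pc_type ilu -pc_factor_levels 2 -ksp_monitor_true_residual"));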
>>
>> Thanks,
>>
>> Danyang
>>>     Barry
>>>
>>> On Jan 22, 2014, at 2:11 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>>
>>>> Dear All,
>>>>
>>>> I have a reactive transport problem that uses block matrices. Each block can be stored in a dense format (keeping many zero entries) or in a sparse format (storing no zero entries). The model has been tested on a 1D reactive transport problem. When dense blocks are used, it works well and the time step can be increased gradually to reach a maximum; when sparse blocks are used, the time step remains small. I checked the entries of the matrix (A) for both the dense-block and sparse-block versions, and they have the same non-zero entries. With the same RHS (b), the solution (X) is a little different but seems acceptable. The first matrix, in both dense-block and sparse-block form, is attached, exported in Matrix Market exchange format (.mtp).
>>>>
>>>> I wonder if this is caused by the outer Newton iteration or by the solver; it is not a general problem and only occurs in some cases.
>>>>
>>>> Has anyone run into this problem before?
>>>>
>>>> Thanks and regards,
>>>>
>>>> Danyang
>>>> <a_reactran_rt_1_dense.mtp><a_reactran_rt_1_dense.PNG><b_reactran_rt_1_dense.txt><x_reactran_rt_1_dense.txt><a_reactran_rt_1_sparse.mtp><a_reactran_rt_1_sparse.PNG><b_reactran_rt_1_sparse.txt><x_reactran_rt_1_sparse.txt>


