[petsc-users] [KSP] Different behaviour between

Norihiro Watanabe norihiro.w at gmail.com
Tue Jan 7 06:16:03 CST 2014


Solved. Thank you.


On Tue, Jan 7, 2014 at 12:15 PM, Norihiro Watanabe <norihiro.w at gmail.com> wrote:

> It seems the problem is solved if I call KSPSetOperators() before
> KSPSolve().
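>
> Roughly, our time loop now does the following (a minimal sketch; ksp, A,
> b, and x are the objects our code already holds, using the PETSc 3.4 form
> of KSPSetOperators()):
>
>   #include <petscksp.h>
>
>   /* per-time-step solve (sketch): set the operators on the KSP right
>      before solving with the freshly reassembled matrix */
>   PetscErrorCode SolveStep(KSP ksp, Mat A, Vec b, Vec x)
>   {
>     PetscErrorCode ierr;
>     PetscFunctionBeginUser;
>     ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
>     ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }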
>
> I have one more question. How is the norm of the RHS calculated when
> checking the relative tolerance in the iterative solvers? Is it the
> 2-norm or the infinity norm?
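>
> (To compare against the residual norms in the -ksp_monitor output by
> hand, I print both norms of our RHS vector; a small sketch:)
>
>   #include <petscvec.h>
>
>   /* print both candidate norms of the RHS so they can be compared
>      against the residual norms reported by -ksp_monitor */
>   PetscErrorCode PrintRhsNorms(Vec b)
>   {
>     PetscErrorCode ierr;
>     PetscReal      n2, ninf;
>     PetscFunctionBeginUser;
>     ierr = VecNorm(b, NORM_2, &n2);CHKERRQ(ierr);
>     ierr = VecNorm(b, NORM_INFINITY, &ninf);CHKERRQ(ierr);
>     ierr = PetscPrintf(PETSC_COMM_WORLD, "||b||_2 = %g  ||b||_inf = %g\n",
>                        (double)n2, (double)ninf);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }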
>
> Best,
> Nori
>
>
>
> On Tue, Jan 7, 2014 at 11:39 AM, Norihiro Watanabe <norihiro.w at gmail.com> wrote:
>
>> Matthew, thank you for your quick response. I followed your suggestion
>> and confirmed that the matrices are identical (I checked the 1-norm of the
>> matrix resulting from MatAXPY()).
>>
>> The next step is to check whether the PETSc objects are correctly reset
>> before reassembling the equations. One question I have: do I have to reset
>> or recreate the KSP object? The coefficient matrix, RHS, and solution
>> vectors are all set to zero by calling MatZeroEntries() and VecSet().
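>>
>> Concretely, the reset we do before reassembly is roughly the following
>> (a sketch; A, b, and x are our matrix, RHS, and solution, and nothing is
>> done to the KSP itself):
>>
>>   #include <petscksp.h>
>>
>>   /* reset the assembled objects before the next time step (sketch);
>>      the KSP object is left untouched */
>>   PetscErrorCode ResetSystem(Mat A, Vec b, Vec x)
>>   {
>>     PetscErrorCode ierr;
>>     PetscFunctionBeginUser;
>>     ierr = MatZeroEntries(A);CHKERRQ(ierr); /* keep pattern, zero values */
>>     ierr = VecSet(b, 0.0);CHKERRQ(ierr);    /* zero the RHS */
>>     ierr = VecSet(x, 0.0);CHKERRQ(ierr);    /* zero the solution/initial guess */
>>     PetscFunctionReturn(0);
>>   }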
>>
>>
>> Best,
>> Nori
>>
>>
>>
>> On Mon, Jan 6, 2014 at 7:36 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Mon, Jan 6, 2014 at 12:06 PM, Norihiro Watanabe <norihiro.w at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have run into a strange problem with PETSc 3.4: linear equations
>>>> assembled in our FEM codes cannot be solved within 5000 iterations,
>>>> whereas the same equations loaded from binary files are solved in only
>>>> 24 iterations using ksp/examples/tutorials/ex10.c. The binary files were
>>>> created by the FEM codes using MatView() and VecView() right before
>>>> calling KSPSolve(). The linear solver options are -ksp_type bcgs
>>>> -pc_type bjacobi, and I set the same tolerance for both programs.
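>>>>
>>>> (Roughly, the dump looks like the following sketch; the file name is
>>>> only an example, and ex10 is then pointed at that file with -f0.)
>>>>
>>>>   #include <petscksp.h>
>>>>
>>>>   /* write the assembled system to a PETSc binary file right before
>>>>      KSPSolve(), so it can be loaded again by ex10 (sketch) */
>>>>   PetscErrorCode DumpSystem(Mat A, Vec b)
>>>>   {
>>>>     PetscErrorCode ierr;
>>>>     PetscViewer    viewer;
>>>>     PetscFunctionBeginUser;
>>>>     ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "linear_system.bin",
>>>>                                  FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
>>>>     ierr = MatView(A, viewer);CHKERRQ(ierr);
>>>>     ierr = VecView(b, viewer);CHKERRQ(ierr);
>>>>     ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
>>>>     PetscFunctionReturn(0);
>>>>   }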
>>>>
>>>> As you can see from the attached log files, both programs calculate the
>>>> same "true resid norm" at the beginning but different "preconditioned
>>>> resid norm" values. Does this mean that both programs are actually solving
>>>> the same problem but with a somewhat different preconditioner? It would be
>>>> very helpful if you have any clue about this problem.
>>>>
>>>
>>> Yes, that would be the likely conclusion. It appears that the parallel
>>> partition is the same, that the number of nonzeros in both
>>> matrices is the same, and that the solvers are identical. Thus I would
>>> make sure that the matrices are indeed identical. For example,
>>> you can MatLoad() the matrix you used in ex10 and call MatAXPY() to get the
>>> difference. It is a common error to forget to call MatZeroEntries()
>>> before reassembling the FEM operator, which can produce the convergence
>>> degradation you see.
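>>>
>>> A sketch of what I mean (untested; pass it the same binary file you
>>> gave ex10):
>>>
>>>   #include <petscksp.h>
>>>
>>>   /* load the matrix ex10 used and subtract the freshly assembled one;
>>>      the norm of the difference should be zero if they are identical */
>>>   PetscErrorCode CompareWithDump(Mat A, const char *filename)
>>>   {
>>>     PetscErrorCode ierr;
>>>     PetscViewer    viewer;
>>>     Mat            B;
>>>     PetscReal      nrm;
>>>     PetscFunctionBeginUser;
>>>     ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, filename,
>>>                                  FILE_MODE_READ, &viewer);CHKERRQ(ierr);
>>>     ierr = MatCreate(PETSC_COMM_WORLD, &B);CHKERRQ(ierr);
>>>     ierr = MatLoad(B, viewer);CHKERRQ(ierr);
>>>     ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
>>>     ierr = MatAXPY(B, -1.0, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* B -= A */
>>>     ierr = MatNorm(B, NORM_1, &nrm);CHKERRQ(ierr);
>>>     ierr = PetscPrintf(PETSC_COMM_WORLD, "||B - A||_1 = %g\n", (double)nrm);CHKERRQ(ierr);
>>>     ierr = MatDestroy(&B);CHKERRQ(ierr);
>>>     PetscFunctionReturn(0);
>>>   }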
>>>
>>>   Thanks,
>>>
>>>       Matt
>>>
>>>
>>>>  Just to give you the background of this: currently I am facing a
>>>> convergence problem in the linear solvers when solving transient heat
>>>> transport problems using FEM. At early time steps, PETSc converges quickly
>>>> (<20 iterations). Later, the iteration counts grow as the time steps
>>>> advance (>5000 after 19 time steps). I am in the middle of checking where
>>>> the cause of the slow convergence lies. Because I don't get such slow
>>>> convergence with other linear solvers (BiCGSTAB+Jacobi), I suspect the FEM
>>>> codes are missing some PETSc functions or options. As I wrote above, if I
>>>> use ex10.c with the binary files, the convergence problem is gone, which
>>>> means something is going wrong in the FEM codes.
>>>>
>>>>
>>>> Thank you in advance,
>>>> Nori
>>>>
>>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>>
>> --
>> Norihiro Watanabe
>>
>
>
>
> --
> Norihiro Watanabe
>



-- 
Norihiro Watanabe