[petsc-users] Differences between jacobi and bjacobi preconditioner for cg method 1-processor/block

Matthew Knepley knepley at gmail.com
Mon May 19 15:20:05 CDT 2014


On Mon, May 19, 2014 at 3:16 PM, Jonathan Wong <jon.the.wong at gmail.com> wrote:

> Matthew: Thanks for clarifying about block-jacobi.
>
> Paul: I'm only using bjacobi with PETSc to show that the problem is
> solvable, and to provide a rough estimate of the jacobi preconditioner's
> performance. On the GPU, I am using CUSP to do cg+jacobi, which works
> fine for this 50k node mesh.
>

It's possible the GPU CG code is just ignoring breakdown and continuing the
solve. This may work sometimes, but could give incorrect answers.
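
To illustrate what "ignoring breakdown" means in PETSc terms, a minimal
sketch (ksp, b, and x stand for a solver and vectors assumed to be set up
already):

    /* After the solve, ask the KSP why it stopped; breakdown or
       divergence is reported as a negative converged reason. */
    PetscErrorCode     ierr;
    KSPConvergedReason reason;
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
    if (reason < 0) {
      ierr = PetscPrintf(PETSC_COMM_WORLD, "Solve failed: %s\n",
                         KSPConvergedReasons[reason]);CHKERRQ(ierr);
    }

A custom CG loop that never makes a check like this will keep iterating
(or silently return its current iterate) after a breakdown.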

Also, it seems simply beyond belief that CG+Jacobi could solve any FEM
problem other than the identity. For example,
the Laplacian has a condition number that is proportional to h^{-2}, so it
grows like N for linear finite elements in 2D.
Are you trying to solve something with an extremely small timestep so that
it looks like the identity?
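
Spelling out that scaling (a sketch of the standard estimate, assuming
quasi-uniform linear elements):

    \kappa(A) = O(h^{-2}), \quad h \sim N^{-1/2} \text{ in 2D}
      \;\Longrightarrow\; \kappa(A) = O(N), \quad
      \text{CG iterations} = O\big(\sqrt{\kappa(A)}\big) = O(\sqrt{N})

So a 50k-node mesh can plausibly need hundreds of iterations even when
nothing goes wrong, and Jacobi typically improves the constant, not the
h-dependence.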

   Matt


> On Mon, May 19, 2014 at 12:07 PM, Paul Mullowney <paulmullowney at gmail.com> wrote:
>
>> I don't think bjacobi is working on GPUs. I know Dominic made a pull
>> request a few months ago, but I don't know if it's been integrated into next.
>> -Paul
>>
>>
>> On Mon, May 19, 2014 at 12:45 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Mon, May 19, 2014 at 1:42 PM, Jonathan Wong <jon.the.wong at gmail.com> wrote:
>>>
>>>> Thanks for the input. To clarify, I'm trying to compare GPU algorithms
>>>> to PETSc, and they only have cg/jacobi for what I'm comparing at the
>>>> moment. This is why I'm not using gmres (which also works well).
>>>>
>>>> I can solve the problem with the GPU (custom code) using CG + jacobi
>>>> for all the meshes. On the CPU side, I can solve everything with cg/bjacobi,
>>>> and almost all of my meshes with cg/jacobi, except for my 50k node mesh. I
>>>> can solve the problem with my finite element code's built-in direct solver
>>>> (it just takes a while) on one processor. I've been reading that by default
>>>> the bjacobi pc uses one block per processor, so I had assumed that for one
>>>> processor block-jacobi and jacobi would give similar results. cg+bjacobi
>>>> works fine. cg+jacobi does not.
>>>>
>>>
>>> "Jacobi" means preconditioning by the inverse of the diagonal of the
>>> matrix. Block-Jacobi means using a preconditioner
>>> formed from each of the blocks, in this case 1 block. By default the
>>> inner preconditioner is ILU(0), not jacobi. You can
>>> make them equivalent using -sub_pc_type jacobi.
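
For concreteness, the two setups being compared, as command-line sketches
(./app is a placeholder for the actual executable; the options themselves
are standard PETSc runtime options):

    # one block per process (here just one), Jacobi inside the block
    ./app -ksp_type cg -pc_type bjacobi -sub_pc_type jacobi

    # plain point Jacobi; on one process this should now match the above
    ./app -ksp_type cg -pc_type jacobi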
>>>
>>>    Matt
>>>
>>>
>>>>  I'll just look into the preconditioner code and use KSPView to try to
>>>> figure out what the differences are for one processor. I'm not sure why
>>>> the GPU can consistently solve the problem with cg/jacobi. I'm assuming
>>>> this is due to round-off or order-of-operations differences between the
>>>> two.
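
The same information is also available at runtime without reading the
source; a sketch, again with ./app as a placeholder executable:

    ./app -ksp_type cg -pc_type jacobi \
          -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

-ksp_view prints the solver configuration actually used, and the monitors
show where the Jacobi run stagnates or fails.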
>>>>
>>>>
>>>> On Mon, May 19, 2014 at 6:35 AM, Jed Brown <jed at jedbrown.org> wrote:
>>>>
>>>>> Matthew Knepley <knepley at gmail.com> writes:
>>>>> > No, Block-Jacobi and Jacobi are completely different. If you are not
>>>>> > positive definite, you should be using MINRES.
>>>>>
>>>>> MINRES requires an SPD preconditioner.
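
(Briefly, the standard reason: preconditioned MINRES minimizes the residual
in the inner product induced by the preconditioner, so the preconditioner
itself must be symmetric positive definite even when the operator is only
symmetric indefinite. Point Jacobi qualifies only when every diagonal entry
is positive.)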
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener