[petsc-users] Most efficient way to set a matrix representing the linearized operator

Matthew Knepley knepley at gmail.com
Fri Sep 4 19:35:48 CDT 2015


On Fri, Sep 4, 2015 at 8:20 PM, Timothée Nicolas
<timothee.nicolas at gmail.com> wrote:

> I see, good. Then there should be no problem. Do you also have a
> suggestion for questions 2 and 3?
>

2. Makes little difference. Get the solver converging and scaling first.
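
If you do (i), it is just identity rows on the velocity components,
along the lines of this untested sketch (it assumes the velocity
occupies components 0-2 of your 8):

    /* Untested sketch: 1.0 on the diagonal for the ignored velocity
       components, assumed here to be dof components 0,1,2 of the 8. */
    DMDALocalInfo info;
    MatStencil    row;
    PetscScalar   one = 1.0;
    PetscInt      i, j, k, c;

    DMDAGetLocalInfo(da, &info);
    for (k = info.zs; k < info.zs + info.zm; k++)
      for (j = info.ys; j < info.ys + info.ym; j++)
        for (i = info.xs; i < info.xs + info.xm; i++)
          for (c = 0; c < 3; c++) {          /* velocity components only */
            row.i = i; row.j = j; row.k = k; row.c = c;
            MatSetValuesStencil(A, 1, &row, 1, &row, &one, INSERT_VALUES);
          }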

3. The DMDA allocates the matrix correctly for the stencil you specify.
You should not have the error if your insertions match the specification.
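
For example, with dof 1 and a width-1 star stencil, every insertion has
to stay inside the 7-point pattern that DMCreateMatrix preallocated.
An untested sketch (interior points only, boundary rows omitted):

    /* Untested sketch: stamp one 7-point stencil row per interior grid
       point of a dof-1 DMDA; boundary rows are omitted for brevity. */
    Mat           A;
    DMDALocalInfo info;
    MatStencil    row, cols[7];
    PetscScalar   v[7] = {6.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0};
    PetscInt      i, j, k;

    DMCreateMatrix(da, &A);   /* preallocated to match the DMDA stencil */
    DMDAGetLocalInfo(da, &info);
    for (k = info.zs; k < info.zs + info.zm; k++)
      for (j = info.ys; j < info.ys + info.ym; j++)
        for (i = info.xs; i < info.xs + info.xm; i++) {
          if (i == 0 || i == info.mx - 1 || j == 0 || j == info.my - 1 ||
              k == 0 || k == info.mz - 1) continue;  /* skip boundary */
          row.i = i; row.j = j; row.k = k;
          cols[0] = row;
          cols[1].i = i - 1; cols[1].j = j;     cols[1].k = k;
          cols[2].i = i + 1; cols[2].j = j;     cols[2].k = k;
          cols[3].i = i;     cols[3].j = j - 1; cols[3].k = k;
          cols[4].i = i;     cols[4].j = j + 1; cols[4].k = k;
          cols[5].i = i;     cols[5].j = j;     cols[5].k = k - 1;
          cols[6].i = i;     cols[6].j = j;     cols[6].k = k + 1;
          MatSetValuesStencil(A, 1, &row, 7, cols, v, INSERT_VALUES);
        }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);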

  Thanks,

    Matt


> Timothée
>
> 2015-09-05 3:42 GMT+09:00 Matthew Knepley <knepley at gmail.com>:
>
>> On Fri, Sep 4, 2015 at 11:20 AM, Timothée Nicolas
>> <timothee.nicolas at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> (short version at the end)
>>>
>>> I am starting to write the matrices for the preconditioner of my
>>> application. It is a physics-based preconditioner (L. Chacon, J.
>>> Comput. Phys. 2002, 2003, and Phys. Plasmas 2010, if you want to know
>>> more), so I must compute the linearized operators as matrices. I then
>>> plan to use KSP multigrid to invert the resulting linear systems.
>>>
>>> My question is: what is the most efficient way to set the matrices
>>> for linearized operators in the context of a 3D grid handled with a
>>> DMDA and several degrees of freedom (8 in my case)? Take for instance
>>> a term like curl(v x B) in the induction equation, which, linearized,
>>> gives curl(v0 x .) for the magnetic part. I think the logical approach
>>> is to use vectors of the form (0,...,0,1,0,...,0), compute the result
>>> of the operator on them, and store it in the matrix by columns using
>>> MatSetValuesStencil. The matrix is created with DMCreateMatrix.
>>>
>>> Since all these vectors are mostly zeros, by writing out by hand
>>> which parts need to be computed, the number of evaluations per vector
>>> is only the number of nonzeros in a column, typically 21 in my case
>>> (7-point stencil * 3 directions), so the total number of operator
>>> evaluations should be reasonable and efficient.
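>>>
>>> For one basis vector centered at (i,j,k), component c, what I have in
>>> mind is something like this untested sketch:
>>>
>>>   /* Untested sketch: insert one column of the operator, obtained by
>>>      applying it to a basis vector with a 1 at (i,j,k), component c.
>>>      The nonzeros of that column live on the stencil neighbors. */
>>>   MatStencil  col, rows[21];
>>>   PetscScalar vals[21];
>>>   PetscInt    n = 0;
>>>
>>>   col.i = i; col.j = j; col.k = k; col.c = c;
>>>   /* fill rows[0..n) with the stencil neighbors of (i,j,k) and vals[]
>>>      with the hand-computed action of the linearized operator */
>>>   MatSetValuesStencil(A, n, rows, 1, &col, vals, INSERT_VALUES);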
>>>
>>> This gives promising results, but I am having trouble handling the
>>> regions at the interfaces between processors. If a vector has its only
>>> non-zero value at a grid point on the boundary between two processors,
>>> then one of the non-zero values in the result vector will live on a
>>> ghost point. And to compute the operator at that ghost point, you need
>>> an *extra* ghost point, which makes the whole thing quite cumbersome.
>>> You could of course increase the stencil_width of the DMDA by one, but
>>> that does not seem like the right approach.
>>>
>>
>> Why not? The only reason the stencil width exists is to specify how
>> much data from other processors is necessary to compute the
>> residual/Jacobian. It sounds like two ghost points are needed.
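>>
>> Something like this, for example (untested; grid sizes and boundary
>> types are placeholders):
>>
>>   /* Untested sketch: the same DMDA, but with stencil width 2, so one
>>      extra layer of ghost points is available for the evaluation. */
>>   DM da;
>>   DMDACreate3d(PETSC_COMM_WORLD,
>>                DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
>>                DMDA_STENCIL_STAR,
>>                64, 64, 64,                    /* global grid size */
>>                PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
>>                8,                             /* dof */
>>                2,                             /* stencil width */
>>                NULL, NULL, NULL, &da);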
>>
>>   Thanks,
>>
>>     Matt
>>
>>
>>> Instead, I was wondering whether it would be possible to write the
>>> matrix by column vectors. I would still create vectors of the form
>>> (0,...,0,1,0,...,0), but as PETSc Vecs, then compute the result
>>> directly as a PETSc Vec, and put that vector into the matrix as a
>>> column. But somehow this does not seem very efficient, unless there is
>>> a way for PETSc to know that the input vector is mostly zeros, as well
>>> as the result vector.
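>>>
>>> Something like the following untested sketch, where
>>> apply_linearized_op is a placeholder for my operator and col is the
>>> global index of the 1 in the basis vector:
>>>
>>>   /* Untested sketch: apply the operator to a basis vector e and put
>>>      the (mostly zero) result y into A as column `col`.
>>>      apply_linearized_op() is a placeholder, not a PETSc routine;
>>>      e and y come from DMCreateGlobalVector(). */
>>>   const PetscScalar *a;
>>>   PetscInt          row, rstart, rend;
>>>
>>>   VecSet(e, 0.0);
>>>   VecSetValue(e, col, 1.0, INSERT_VALUES);
>>>   VecAssemblyBegin(e); VecAssemblyEnd(e);
>>>   apply_linearized_op(e, y);
>>>   VecGetOwnershipRange(y, &rstart, &rend);
>>>   VecGetArrayRead(y, &a);
>>>   for (row = rstart; row < rend; row++)
>>>     if (a[row - rstart] != 0.0)
>>>       MatSetValues(A, 1, &row, 1, &col, &a[row - rstart], INSERT_VALUES);
>>>   VecRestoreArrayRead(y, &a);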
>>>
>>> So, in short:
>>>
>>> 1. How would you compute a matrix representing an operator in a 3D
>>> system with several degrees of freedom in the most efficient way?
>>>
>>> And two other questions related to this matter:
>>>
>>> 2. When I do the multigrid inversion, I will temporarily ignore 3
>>> degrees of freedom (the velocity). Which is more efficient: (i)
>>> putting 1 everywhere on the diagonal in the velocity part, or (ii)
>>> somehow creating a new DMDA with 8 - 3 = 5 degrees of freedom to
>>> handle the problem separately?
>>>
>>> 3. When I use DMCreateMatrix, do I have to preallocate the matrix? I
>>> have tried, but got errors like
>>> [11]PETSC ERROR: New nonzero at (296,296) caused a malloc
>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
>>> off this check
>>>
>>> which could not be prevented by using the advised MatSetOption(A,
>>> MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE).
>>>
>>> Sorry for the long question
>>>
>>> Best
>>>
>>> Timothée
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener