[petsc-users] MATNEST with shell matrices

Matthew Knepley knepley at gmail.com
Thu Feb 21 17:29:32 CST 2013


On Thu, Feb 21, 2013 at 6:16 PM, Amneet Bhalla <mail2amneet at gmail.com> wrote:

> Jed, the thing is that I have nonnative PETSc data to work with. It's from
> the SAMRAI library. We have already defined MatShell operators that act on
> nonnative vectors, so writing explicit matrices in PETSc would be very
> difficult (at least for me) and time consuming. What we want is PETSc's
> algorithms (and NOT its data structures) to solve the problem.
>

So I would reiterate:

1) Solve your system with a standard KSP solver, like GMRES

2) Tell PCFIELDSPLIT about your fields using PCFieldSplitSetIS(),
http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/PC/PCFieldSplitSetIS.html

When running FieldSplit, start by cranking down on all the solvers
(-ksp_rtol 1.0e-10 on every split) and see if there is any hope of solving
it. This will take forever to run. If it works, start backing off the
tolerances. The tolerances that need to stay low identify the blocks you
need to fill in with an approximation that can be used to build a
preconditioner.
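
For concreteness, here is a minimal sketch of 1) and 2) (untested; the
index sets isVel/isPres and the split names "0"/"1" are placeholders, and
the calling sequences follow current petsc-dev, while older versions take
an extra MatStructure argument in KSPSetOperators):

#include <petscksp.h>

PetscErrorCode SolveWithFieldSplit(Mat A,Vec b,Vec x,IS isVel,IS isPres)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);   /* A can be your MatShell */
  ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc,"0",isVel);CHKERRQ(ierr);   /* rows of field 0 */
  ierr = PCFieldSplitSetIS(pc,"1",isPres);CHKERRQ(ierr);  /* rows of field 1 */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The tolerances are then runtime options, e.g. -fieldsplit_0_ksp_rtol
1.0e-10 -fieldsplit_1_ksp_rtol 1.0e-10.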

   Matt


> On Thu, Feb 21, 2013 at 5:06 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
>>
>> On Thu, Feb 21, 2013 at 4:19 PM, Amneet Bhalla <mail2amneet at gmail.com> wrote:
>>
>>> Can you guys comment on which example would be the best starting point
>>> for shell operators with FieldSplit? The examples I am looking at all
>>> start by creating native PETSc matrices and vectors.
>>>
>>
>>
>> Amneet, do you already have code that applies all the "blocks" of your
>> coupled system or are you starting something new? If it's something new,
>> please don't use MatNest/MatShell just yet. If you have tested existing
>> code, then wrapping it in MatShell/MatNest is fine. If you are working on
>> something new, I recommend the progression below. It will encourage better
>> program structure and better debuggability, and will ultimately be faster
>> than trying to "skip" steps.
>>
>> Step 1: Just write a residual and use -snes_mf to solve all matrix-free
>> without preconditioning.
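>>
>> A minimal sketch of Step 1 (untested; FormFunction, r, x, and the user
>> context are placeholders):
>>
>> PetscErrorCode FormFunction(SNES snes,Vec x,Vec f,void *ctx)
>> {
>>   PetscFunctionBeginUser;
>>   /* evaluate the residual f = F(x) using the SAMRAI data held in ctx */
>>   PetscFunctionReturn(0);
>> }
>>
>> /* in the driver, with snes, r, x, and user created/filled elsewhere: */
>> SNESCreate(PETSC_COMM_WORLD,&snes);
>> SNESSetFunction(snes,r,FormFunction,&user);
>> SNESSetFromOptions(snes);   /* then run with -snes_mf */
>> SNESSolve(snes,NULL,x);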
>>
>> Step 2: Assemble an approximate Jacobian and use -snes_mf_operator.
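>>
>> A sketch of Step 2 (untested; FormApproxJacobian and Japprox are
>> placeholders, and the callback signature is the current one, which
>> differs in older PETSc versions):
>>
>> PetscErrorCode FormApproxJacobian(SNES snes,Vec x,Mat Amat,Mat Pmat,void *ctx)
>> {
>>   PetscFunctionBeginUser;
>>   /* MatSetValues(...) only the terms you can afford to assemble into Pmat */
>>   MatAssemblyBegin(Pmat,MAT_FINAL_ASSEMBLY);
>>   MatAssemblyEnd(Pmat,MAT_FINAL_ASSEMBLY);
>>   PetscFunctionReturn(0);
>> }
>>
>> /* -snes_mf_operator replaces the first Mat with a matrix-free application
>>    of the true Jacobian; Japprox is used only to build the preconditioner */
>> SNESSetJacobian(snes,Japprox,Japprox,FormApproxJacobian,&user);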
>>
>> Step 3: Use finite-difference coloring (if possible) to see how much
>> solver benefit you could gain by implementing the full Jacobian. Also use
>> fieldsplit solvers to find out which blocks of the Jacobian are most
>> important to assemble.
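>>
>> For example (option names taken from current PETSc; check your version):
>>
>>   -snes_fd_color                   (finite-difference Jacobian with coloring)
>>   -pc_type fieldsplit
>>   -fieldsplit_0_ksp_rtol 1.0e-10   (tighten or loosen per block)
>>   -fieldsplit_1_ksp_rtol 1.0e-10
>>   -ksp_monitor_true_residual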
>>
>> Step 4: Implement those blocks of the Jacobian that you need for
>> effective preconditioning.
>>
>> Step 5: Profile, consider which preconditioners are most effective and
>> whether the discretization is suitable for matrix-free application. If
>> you spend a lot of time/memory in things like MatGetSubMatrix, then add
>> an option to use MatNest. If you have overly heavy matrices (in terms of
>> memory, bandwidth, or assembly time) that need not be completely
>> assembled for effective preconditioning, add an option to use MatShell
>> for those parts.
>>
>>
>
> --
> Amneet
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener