[petsc-users] discontinuous viscosity stokes equation 3D staggered grid
Matthew Knepley
knepley at gmail.com
Tue Aug 6 09:40:28 CDT 2013
On Tue, Aug 6, 2013 at 8:06 AM, Bishesh Khanal <bisheshkh at gmail.com> wrote:
>
> On Mon, Aug 5, 2013 at 4:14 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Mon, Aug 5, 2013 at 8:48 AM, Bishesh Khanal <bisheshkh at gmail.com> wrote:
>>
>>>
>>> On Mon, Aug 5, 2013 at 3:17 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>
>>>> On Mon, Aug 5, 2013 at 7:54 AM, Bishesh Khanal <bisheshkh at gmail.com> wrote:
>>>>
>>>>>
>>>>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>>>>
>>>>>> Bishesh Khanal <bisheshkh at gmail.com> writes:
>>>>>>
>>>>>> > Now, I implemented two different approaches, each for both 2D and
>>>>>> > 3D, in MATLAB. It works for the smaller sizes, but I have problems
>>>>>> > solving it for the problem size I need (250^3 grid size).
>>>>>> > I use a staggered grid with p on cell centers and the components of
>>>>>> > v on cell faces. (K is split up similarly between cell centers and
>>>>>> > faces to account for the variable-viscosity case.)
>>>>>>
>>>>>> Okay, you're using a staggered-grid finite difference discretization
>>>>>> of variable-viscosity Stokes. This is a common problem, and I
>>>>>> recommend starting with PCFieldSplit with Schur complement reduction
>>>>>> (make that work first, then switch to a block preconditioner). You
>>>>>> can use PCLSC or (probably better for you) assemble a preconditioning
>>>>>> matrix containing the inverse viscosity in the pressure-pressure
>>>>>> block. This diagonal matrix is a spectrally equivalent (or nearly so,
>>>>>> depending on discretization) approximation of the Schur complement.
>>>>>> The velocity block can be solved with algebraic multigrid. Read the
>>>>>> PCFieldSplit docs (follow papers as appropriate) and let us know if
>>>>>> you get stuck.
>>>>>>
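
For reference, Jed's suggestion translates to options along these lines (a
sketch only; "user" tells PCFieldSplit to precondition the Schur complement
with a matrix you supply via PCFieldSplitSetSchurPre()):

  -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_precondition user
  -fieldsplit_0_pc_type gamg
  -fieldsplit_1_ksp_rtol 1.0e-8
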
>>>>>
>>>>> I was trying to assemble the inverse-viscosity diagonal matrix to use
>>>>> as the preconditioner for the Schur complement solve step, as you
>>>>> suggested. I have a few questions about how to implement this in
>>>>> PETSc. A naive approach I can think of would be to create a vector
>>>>> whose components are the reciprocal viscosities at the cell centers
>>>>> corresponding to the pressure variables, and then create a diagonal
>>>>> matrix from this vector. However, I'm not sure how to make this
>>>>> matrix (say S_p) compatible with the PETSc distribution of the rows
>>>>> of the main system matrix over the different processors. The main
>>>>> matrix was created using a DMDA with 4 dofs, as explained before, but
>>>>> S_p would correspond to only the pressure space. Should the
>>>>> distribution of the rows of S_p among the processors not correspond
>>>>> to the distribution of the rhs vector, say h', when solving for p
>>>>> with S p = h', where S = A11 - A10 inv(A00) A01 ?
>>>>>
>>>>
>>>> PETSc distributes vertices, not dofs, so it never breaks blocks. The P
>>>> distribution is the same as the entire problem divided by 4.
>>>>
>>>
>>> Thanks Matt. So if I create a new DMDA with the same grid size but with
>>> dof=1 instead of 4, will the vertices of this new DMDA be distributed
>>> identically to those of the original DMDA? Or should I inform PETSc by
>>> calling a particular function to make the two DMDAs have an identical
>>> distribution of the vertices?
>>>
>>
>> Yes.
>>
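
For example, an untested sketch (petsc-3.4 signatures) of building the
matching dof=1 DMDA and the diagonal S_p, where "da" is your original 4-dof
DMDA and "invEta" is a hypothetical Vec on the new DMDA holding the
reciprocal cell-center viscosities:

  DM              daP;
  Mat             Sp;
  PetscInt        M, N, P, m, n, p;
  const PetscInt *lx, *ly, *lz;

  /* Reuse the global sizes and per-process ownership ranges of the
     original DMDA so the dof=1 DMDA is distributed identically */
  ierr = DMDAGetInfo(da, NULL, &M, &N, &P, &m, &n, &p, NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
  ierr = DMDAGetOwnershipRanges(da, &lx, &ly, &lz);CHKERRQ(ierr);
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
                      DMDA_BOUNDARY_NONE, DMDA_STENCIL_STAR, M, N, P, m, n, p,
                      1, 1, lx, ly, lz, &daP);CHKERRQ(ierr);
  /* S_p is just the diagonal matrix of 1/eta at the cell centers */
  ierr = DMCreateMatrix(daP, MATAIJ, &Sp);CHKERRQ(ierr);
  ierr = MatDiagonalSet(Sp, invEta, INSERT_VALUES);CHKERRQ(ierr);

You would then hand Sp to the solver with PCFieldSplitSetSchurPre(pc,
PC_FIELDSPLIT_SCHUR_PRE_USER, Sp).
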
>>
>>> Even then, I think there might be a problem due to the presence of
>>> "fictitious pressure vertices". The system matrix (A) contains an
>>> identity corresponding to these fictitious pressure nodes, so
>>> -pc_fieldsplit_detect_saddle_point will detect an A11 zero block whose
>>> size corresponds to only the non-fictitious P-nodes. The preconditioner
>>> S_p for the Schur complement outer solve with S p = h' will then also
>>> need to correspond to only the non-fictitious P-nodes, which means its
>>> size does not directly correspond to the DMDA grid defined for the
>>> original problem. Could you please suggest an efficient way of
>>> assembling this S_p matrix?
>>>
>>
>> Don't use detect_saddle; instead, split by fields:
>> -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3
>>
>
> How can I set this split in the code itself, without giving it as a
> command-line option, when the system matrix is assembled from the DMDA
> for the whole system with 4 dofs (i.e. *without* using DMComposite and
> *without* using nested block matrices to assemble the different blocks
> separately and then combine them)?
> I need the split to get access to the fieldsplit_1_ ksp in my code,
> because not using detect_saddle_point means I cannot use
> -fieldsplit_1_ksp_constant_null_space, due to the presence of the
> identity for the fictitious pressure nodes in the fieldsplit_1_ block. I
> need to use PCFieldSplitGetSubKSP() so that I can set a proper null-space
> basis.
>
This is currently a real problem with the DMDA. In the unstructured case,
where we always need specialized spaces, you can use something like:

  PetscObject  pressure;
  MatNullSpace nullSpacePres;

  /* Get the field object for the pressure (field 1) */
  ierr = DMGetField(dm, 1, &pressure);CHKERRQ(ierr);
  /* A null space of just the constants (PETSC_TRUE), no extra vectors */
  ierr = MatNullSpaceCreate(PetscObjectComm(pressure), PETSC_TRUE, 0, NULL, &nullSpacePres);CHKERRQ(ierr);
  /* Attach the null space to the field so the solver can retrieve it */
  ierr = PetscObjectCompose(pressure, "nullspace", (PetscObject) nullSpacePres);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nullSpacePres);CHKERRQ(ierr);

and then DMGetSubDM() uses this information to attach the null space to the
IS that is created using the information in the PetscSection.
If you use a PetscSection to set the data layout over the DMDA, I think
this works correctly, but this has not been tested at all and is very
new code. Eventually, I think we want all DMs to use this mechanism, but we
are still working it out.
Bottom line: For custom null spaces using the default layout in DMDA, you
need to take apart the PCFIELDSPLIT after it has been set up, which is
somewhat subtle. You need to call KSPSetUp() and then reach in and get the
PC and the subKSPs. I don't like this at all, but we have not reorganized
that code (which could be very simple and inflexible, since it's very
structured).
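
An untested sketch of that sequence (assuming "ksp" is the outer solver and
"pBasis" is a hypothetical normalized Vec that is constant on the real
P-nodes and zero on the fictitious ones):

  KSP          *subksp;
  PC            pc;
  PetscInt      nSplits;
  PetscInt      vFields[] = {0, 1, 2}, pFields[] = {3};
  MatNullSpace  nullsp;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  /* In-code equivalent of -pc_fieldsplit_0_fields 0,1,2
     -pc_fieldsplit_1_fields 3 */
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetFields(pc, "0", 3, vFields, vFields);CHKERRQ(ierr);
  ierr = PCFieldSplitSetFields(pc, "1", 1, pFields, pFields);CHKERRQ(ierr);
  /* The subKSPs exist only after setup */
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);
  ierr = PCFieldSplitGetSubKSP(pc, &nSplits, &subksp);CHKERRQ(ierr);
  /* subksp[1] is the pressure (Schur complement) solve; attach the
     explicit basis rather than the constant null space */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &pBasis, &nullsp);CHKERRQ(ierr);
  ierr = KSPSetNullSpace(subksp[1], nullsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
  ierr = PetscFree(subksp);CHKERRQ(ierr);

(KSPSetNullSpace() is the interface in this vintage of PETSc; newer
releases attach the null space to the Schur matrix with MatSetNullSpace().)
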
Matt
>
>> Matt
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener