[petsc-users] discontinuous viscosity stokes equation 3D staggered grid
Bishesh Khanal
bisheshkh at gmail.com
Wed Aug 7 07:07:06 CDT 2013
On Tue, Aug 6, 2013 at 11:34 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Tue, Aug 6, 2013 at 10:59 AM, Bishesh Khanal <bisheshkh at gmail.com>wrote:
>
>>
>>
>>
>> On Tue, Aug 6, 2013 at 4:40 PM, Matthew Knepley <knepley at gmail.com>wrote:
>>
>>> On Tue, Aug 6, 2013 at 8:06 AM, Bishesh Khanal <bisheshkh at gmail.com>wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Mon, Aug 5, 2013 at 4:14 PM, Matthew Knepley <knepley at gmail.com>wrote:
>>>>
>>>>> On Mon, Aug 5, 2013 at 8:48 AM, Bishesh Khanal <bisheshkh at gmail.com>wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Aug 5, 2013 at 3:17 PM, Matthew Knepley <knepley at gmail.com>wrote:
>>>>>>
>>>>>>> On Mon, Aug 5, 2013 at 7:54 AM, Bishesh Khanal <bisheshkh at gmail.com>wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown <jedbrown at mcs.anl.gov>wrote:
>>>>>>>>
>>>>>>>>> Bishesh Khanal <bisheshkh at gmail.com> writes:
>>>>>>>>>
>>>>>>>>> > Now, I implemented two different approaches, each for both 2D and 3D,
>>>>>>>>> > in MATLAB. It works for the smaller sizes, but I have problems solving
>>>>>>>>> > it for the problem size I need (250^3 grid size).
>>>>>>>>> > I use a staggered grid with p on cell centers and the components of v
>>>>>>>>> > on cell faces, with a similar split of K between cell centers and faces
>>>>>>>>> > to account for the variable-viscosity case.
>>>>>>>>>
>>>>>>>>> Okay, you're using a staggered-grid finite difference discretization of
>>>>>>>>> variable-viscosity Stokes. This is a common problem and I recommend
>>>>>>>>> starting with PCFieldSplit with Schur complement reduction (make that
>>>>>>>>> work first, then switch to a block preconditioner). You can use PCLSC or
>>>>>>>>> (probably better for you) assemble a preconditioning matrix containing
>>>>>>>>> the inverse viscosity in the pressure-pressure block. This diagonal
>>>>>>>>> matrix is a spectrally equivalent (or nearly so, depending on
>>>>>>>>> discretization) approximation of the Schur complement. The velocity
>>>>>>>>> block can be solved with algebraic multigrid. Read the PCFieldSplit
>>>>>>>>> docs (follow papers as appropriate) and let us know if you get stuck.
>>>>>>>>>
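>>>>>>>>> [A rough sketch of the kind of runtime options this recipe points at,
>>>>>>>>> assuming the two splits end up named fieldsplit_0_ (velocity) and
>>>>>>>>> fieldsplit_1_ (pressure) and that the user-provided pressure-pressure
>>>>>>>>> matrix is attached separately (e.g. with PCFieldSplitSchurPrecondition()):
>>>>>>>>>
>>>>>>>>>   -pc_type fieldsplit -pc_fieldsplit_type schur
>>>>>>>>>   -pc_fieldsplit_schur_precondition user
>>>>>>>>>   -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type gamg
>>>>>>>>>   -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type jacobi
>>>>>>>>>
>>>>>>>>> -fieldsplit_0_pc_type hypre is an alternative if PETSc was configured with
>>>>>>>>> hypre; check the exact option names against the PCFieldSplit documentation
>>>>>>>>> for your PETSc version.]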
>>>>>>>>
>>>>>>>> I was trying to assemble the inverse-viscosity diagonal matrix to use as
>>>>>>>> the preconditioner for the Schur complement solve step, as you suggested.
>>>>>>>> I have a few questions about the ways to implement this in PETSc:
>>>>>>>> A naive approach that I can think of would be to create a vector with its
>>>>>>>> components as the reciprocal viscosities of the cell centers corresponding
>>>>>>>> to the pressure variables, and then create a diagonal matrix from this
>>>>>>>> vector. However, I'm not sure about the following:
>>>>>>>> How can I make this matrix (say S_p) compatible with the PETSc
>>>>>>>> distribution of the rows of the main system matrix over the different
>>>>>>>> processors? The main matrix was created using the DMDA structure with 4
>>>>>>>> dof as explained before.
>>>>>>>> The main matrix corresponds to the DMDA with 4 dofs, but the S_p matrix
>>>>>>>> would correspond to only the pressure space. Should the distribution of
>>>>>>>> the rows of S_p among the processors not correspond to the distribution
>>>>>>>> of the rhs vector, say h', when solving S p = h', where
>>>>>>>> S = A11 - A10 inv(A00) A01 ?
>>>>>>>>
>>>>>>>
>>>>>>> PETSc distributes vertices, not dofs, so it never breaks blocks. The
>>>>>>> P distribution is the same as that of the entire problem divided by 4.
>>>>>>>
>>>>>>
>>>>>> Thanks Matt. So if I create a new DMDA with the same grid size but with
>>>>>> dof=1 instead of 4, will the vertices for this new DMDA be distributed
>>>>>> identically to those of the original DMDA? Or should I inform PETSc by
>>>>>> calling a particular function to make these two DMDAs have an identical
>>>>>> distribution of the vertices?
>>>>>>
>>>>>
>>>>> Yes.
>>>>>
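>>>>> [A minimal sketch of that, with illustrative names: mx, my, mz and the
>>>>> viscosity values are placeholders, and PETSc 3.4-style DMDACreate3d /
>>>>> DMCreateMatrix signatures are assumed:
>>>>>
>>>>>   DM  daP;     /* pressure-only DMDA: same grid, same default partitioning */
>>>>>   Mat Sp;      /* diagonal Schur preconditioning matrix: 1/viscosity per cell */
>>>>>   Vec invVisc;
>>>>>
>>>>>   ierr = DMDACreate3d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
>>>>>                       DMDA_BOUNDARY_NONE, DMDA_STENCIL_STAR, mx, my, mz,
>>>>>                       PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
>>>>>                       1 /* dof */, 1 /* stencil width */,
>>>>>                       NULL, NULL, NULL, &daP);CHKERRQ(ierr);
>>>>>   ierr = DMCreateMatrix(daP, MATAIJ, &Sp);CHKERRQ(ierr);
>>>>>   ierr = DMCreateGlobalVector(daP, &invVisc);CHKERRQ(ierr);
>>>>>   /* ... fill invVisc with the reciprocal cell-center viscosities ... */
>>>>>   ierr = MatDiagonalSet(Sp, invVisc, INSERT_VALUES);CHKERRQ(ierr);
>>>>>
>>>>> The rows of Sp then follow the same vertex distribution as the pressure dof
>>>>> of the full 4-dof DMDA.]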
>>>>>
>>>>>> Even then I think there might be a problem due to the presence of the
>>>>>> "fictitious pressure vertices". The system matrix (A) contains an identity
>>>>>> corresponding to these fictitious pressure nodes, so using
>>>>>> -pc_fieldsplit_detect_saddle_point will detect an A11 zero block whose size
>>>>>> corresponds to only the non-fictitious P-nodes. So the preconditioner S_p
>>>>>> for the Schur complement outer solve with S p = h' will also need to
>>>>>> correspond to only the non-fictitious P-nodes. This means its size does not
>>>>>> directly correspond to the DMDA grid defined for the original problem.
>>>>>> Could you please suggest an efficient way of assembling this S_p matrix?
>>>>>>
>>>>>
>>>>> Don't use detect_saddle_point, but split it by fields:
>>>>> -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3
>>>>>
>>>>
>>>> How can I set this split in the code itself, without giving it as a
>>>> command-line option, when the system matrix is assembled from the DMDA for
>>>> the whole system with 4 dofs (i.e. *without* using DMComposite and *without*
>>>> using nested block matrices to assemble the different blocks separately and
>>>> then combine them)?
>>>> I need the split to get access to the fieldsplit_1_ KSP in my code, because
>>>> not using detect_saddle_point means I cannot use
>>>> -fieldsplit_1_ksp_constant_null_space, due to the presence of the identity
>>>> for the fictitious pressure nodes in the fieldsplit_1_ block. I need to use
>>>> PCFieldSplitGetSubKSP() so that I can set a proper null-space basis.
>>>>
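>>>> [For illustration, one way to define such splits programmatically on the
>>>> blocksize-4 matrix, assuming the splits are not taken from the DM itself
>>>> (names are illustrative):
>>>>
>>>>   PC       pc;
>>>>   PetscInt velFields[3] = {0, 1, 2};
>>>>   PetscInt pField[1]    = {3};
>>>>
>>>>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>>>>   ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
>>>>   ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
>>>>   ierr = PCFieldSplitSetBlockSize(pc, 4);CHKERRQ(ierr);
>>>>   ierr = PCFieldSplitSetFields(pc, "0", 3, velFields, velFields);CHKERRQ(ierr);
>>>>   ierr = PCFieldSplitSetFields(pc, "1", 1, pField, pField);CHKERRQ(ierr);
>>>>
>>>> This mirrors -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3.]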
>>>
>>> This is currently a real problem with the DMDA. In the unstructured case,
>>> where we always need specialized spaces, you can use something like
>>>
>>>   PetscObject  pressure;
>>>   MatNullSpace nullSpacePres;
>>>
>>>   ierr = DMGetField(dm, 1, &pressure);CHKERRQ(ierr);
>>>   ierr = MatNullSpaceCreate(PetscObjectComm(pressure), PETSC_TRUE, 0, NULL, &nullSpacePres);CHKERRQ(ierr);
>>>   ierr = PetscObjectCompose(pressure, "nullspace", (PetscObject) nullSpacePres);CHKERRQ(ierr);
>>>   ierr = MatNullSpaceDestroy(&nullSpacePres);CHKERRQ(ierr);
>>>
>>> and then DMGetSubDM() uses this information to attach the null space to the
>>> IS that is created using the information in the PetscSection. If you use a
>>> PetscSection to set the data layout over the DMDA, I think this works
>>> correctly, but this has not been tested at all and is very new code.
>>> Eventually, I think we want all DMs to use this mechanism, but we are still
>>> working it out.
>>>
>>
>> Currently I do not use PetscSection. If this makes for a cleaner approach,
>> I'd like to try it too, but maybe a bit later (right now I'd like to test my
>> model with a quick fix even if it means slightly dirty code!)
>>
>>
>>>
>>> Bottom line: For custom null spaces using the default layout in DMDA, you
>>> need to take apart the PCFIELDSPLIT after it has been set up, which is
>>> somewhat subtle. You need to call KSPSetUp() and then reach in and get the
>>> PC and the subKSPs. I don't like this at all, but we have not reorganized
>>> that code (which could be very simple and inflexible since it's very
>>> structured).
>>>
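>>> [In skeleton form (illustrative names), the sequence just described is:
>>>
>>>   KSP      *subksp;
>>>   PetscInt  nsplits;
>>>
>>>   ierr = KSPSetUp(ksp);CHKERRQ(ierr);               /* builds the PCFIELDSPLIT */
>>>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>>>   ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);CHKERRQ(ierr);
>>>   ierr = KSPSetNullSpace(subksp[1], pressureNullSpace);CHKERRQ(ierr); /* Schur solve */
>>>   ierr = PetscFree(subksp);CHKERRQ(ierr);
>>>
>>> The snippet below follows this pattern.]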
>>
>> So I tried to get this approach working, but I could not succeed and
>> encountered some errors. Here is a code snippet:
>>
>>   // mDa is the DMDA that describes the whole grid with all 4 dofs
>>   // (3 velocity components and 1 pressure component).
>>   ierr = DMKSPSetComputeRHS(mDa,computeRHSTaras3D,this);CHKERRQ(ierr);
>>   ierr = DMKSPSetComputeOperators(mDa,computeMatrixTaras3D,this);CHKERRQ(ierr);
>>   ierr = KSPSetDM(mKsp,mDa);CHKERRQ(ierr);
>>   // mNullSpaceSystem is based on mDa and contains a null-space basis for the complete system.
>>   ierr = KSPSetNullSpace(mKsp,mNullSpaceSystem);CHKERRQ(ierr);
>>   // This I expect would register the options I give: -pc_type fieldsplit -pc_fieldsplit_type schur
>>   // -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3
>>   ierr = KSPSetFromOptions(mKsp);CHKERRQ(ierr);
>>
>>   ierr = KSPSetUp(mKsp);CHKERRQ(ierr);
>>
>>   // Now get the PC that was obtained from the options (fieldsplit).
>>   ierr = KSPGetPC(mKsp,&mPcOuter);CHKERRQ(ierr);
>>
>>   // I have created the matrix mPcForSc using a DMDA of identical size to mDa but with
>>   // dof=1, corresponding to the pressure nodes (say mDaPressure).
>>   ierr = PCFieldSplitSchurPrecondition(mPcOuter,PC_FIELDSPLIT_SCHUR_PRE_USER,mPcForSc);CHKERRQ(ierr);
>>
>>   ierr = PCSetUp(mPcOuter);CHKERRQ(ierr);
>>
>>   KSP *kspSchur;
>>   PetscInt kspSchurPos = 1;
>>   ierr = PCFieldSplitGetSubKSP(mPcOuter,&kspSchurPos,&kspSchur);CHKERRQ(ierr);
>>   // The null space is the one that corresponds to only the pressure nodes, created using mDaPressure.
>>   ierr = KSPSetNullSpace(kspSchur[1],mNullSpacePressure);CHKERRQ(ierr);
>>   ierr = PetscFree(kspSchur);CHKERRQ(ierr);
>>
>>   ierr = KSPSolve(mKsp,NULL,NULL);CHKERRQ(ierr);
>>
>
> Sorry, you need to return to the old DMDA behavior, so you want
>
> -pc_fieldsplit_dm_splits 0
>
Thanks, with this it seems I can attach the null space properly, but I have a
question regarding whether the Schur complement KSP solver is actually using
the preconditioner matrix I provide.
When using -ksp_view, the outer-level PC object of type fieldsplit does report
that: "Preconditioner for the Schur complement formed from user provided
matrix", but in the KSP solver for the Schur complement S, the PC object
(fieldsplit_1_) is of type ilu and doesn't say that it is using the matrix I
provide. Am I missing something here?
Below are the relevant commented code snippet and the output of -ksp_view.
(The options I used: -pc_type fieldsplit -pc_fieldsplit_type schur
-pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2
-pc_fieldsplit_1_fields 3 -ksp_converged_reason -ksp_view)
Code snippet:
  // The null space for the whole system.
  ierr = KSPSetNullSpace(mKsp,mNullSpaceSystem);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(mKsp);CHKERRQ(ierr);
  // Set up mKsp with the options provided for fieldsplit and the fields associated with the two splits.
  ierr = KSPSetUp(mKsp);CHKERRQ(ierr);
  // Get the fieldsplit pc set up from the options.
  ierr = KSPGetPC(mKsp,&mPcOuter);CHKERRQ(ierr);
  // Use mPcForSc as the preconditioner for the Schur complement.
  ierr = PCFieldSplitSchurPrecondition(mPcOuter,PC_FIELDSPLIT_SCHUR_PRE_USER,mPcForSc);CHKERRQ(ierr);
  KSP *kspSchur;
  PetscInt kspSchurPos = 1;
  ierr = PCFieldSplitGetSubKSP(mPcOuter,&kspSchurPos,&kspSchur);CHKERRQ(ierr);
  // Attach the null space to the Schur complement ksp solver.
  ierr = KSPSetNullSpace(kspSchur[1],mNullSpacePressure);CHKERRQ(ierr);
  ierr = PetscFree(kspSchur);CHKERRQ(ierr);
  ierr = KSPSolve(mKsp,NULL,NULL);CHKERRQ(ierr);
The output of -ksp_view:
KSP Object: 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: fieldsplit
FieldSplit with Schur preconditioner, blocksize = 4, factorization FULL
Preconditioner for the Schur complement formed from user provided matrix
Split info:
Split number 0 Fields 0, 1, 2
Split number 1 Fields 3
KSP solver for A00 block
KSP Object: (fieldsplit_0_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=2187, cols=2187
package used to perform factorization: petsc
total: nonzeros=140625, allocated nonzeros=140625
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 729 nodes, limit used is 5
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=2187, cols=2187
total: nonzeros=140625, allocated nonzeros=140625
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 729 nodes, limit used is 5
KSP solver for S = A11 - A10 inv(A00) A01
KSP Object: (fieldsplit_1_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using PRECONDITIONED norm type for convergence test
PC Object: (fieldsplit_1_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=729, cols=729
package used to perform factorization: petsc
total: nonzeros=15625, allocated nonzeros=15625
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix followed by preconditioner matrix:
Matrix Object: 1 MPI processes
type: schurcomplement
rows=729, cols=729
Schur complement A11 - A10 inv(A00) A01
A11
Matrix Object: 1 MPI processes
type: seqaij
rows=729, cols=729
total: nonzeros=15625, allocated nonzeros=15625
total number of mallocs used during MatSetValues calls =0
not using I-node routines
A10
Matrix Object: 1 MPI processes
type: seqaij
rows=729, cols=2187
total: nonzeros=46875, allocated nonzeros=46875
total number of mallocs used during MatSetValues calls =0
not using I-node routines
KSP of A00
KSP Object: (fieldsplit_0_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=2187, cols=2187
package used to perform factorization: petsc
total: nonzeros=140625, allocated nonzeros=140625
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 729 nodes, limit used is 5
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=2187, cols=2187
total: nonzeros=140625, allocated nonzeros=140625
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 729 nodes, limit used is 5
A01
Matrix Object: 1 MPI processes
type: seqaij
rows=2187, cols=729
total: nonzeros=46875, allocated nonzeros=46875
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 729 nodes, limit used is 5
Matrix Object: 1 MPI processes
type: seqaij
rows=729, cols=729
total: nonzeros=15625, allocated nonzeros=15625
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=2916, cols=2916, bs=4
total: nonzeros=250000, allocated nonzeros=250000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 729 nodes, limit used is 5
>
> or
>
> PCFieldSplitSetDMSplits(pc, PETSC_FALSE)
>
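> [If done in code rather than via the option, a sketch of the placement, before
> KSPSetUp()/KSPSolve():
>
>   ierr = KSPGetPC(mKsp, &pc);CHKERRQ(ierr);
>   ierr = PCFieldSplitSetDMSplits(pc, PETSC_FALSE);CHKERRQ(ierr);
>   ierr = KSPSetUp(mKsp);CHKERRQ(ierr);
>
> With DM splits disabled, the -pc_fieldsplit_*_fields options define the splits.]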
> Thanks,
>
> Matt
>
>
>> The errors I get when running with options: -pc_type fieldsplit
>> -pc_fieldsplit_type schur -pc_fieldsplit_0_fields 0,1,2
>> -pc_fieldsplit_1_fields 3
>> [0]PETSC ERROR: --------------------- Error Message
>> ------------------------------------
>> [0]PETSC ERROR: No support for this operation for this object type!
>> [0]PETSC ERROR: Support only implemented for 2d!
>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013
>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>> [0]PETSC ERROR: See docs/index.html for manual pages.
>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards by
>> bkhanal Tue Aug 6 17:35:30 2013
>> [0]PETSC ERROR: Libraries linked from
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib
>> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013
>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77
>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1
>> -with-clanguage=cxx --download-hypre=1
>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> [0]PETSC ERROR: DMCreateSubDM_DA() line 188 in
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/dm/impls/da/dacreate.c
>> [0]PETSC ERROR: DMCreateSubDM() line 1267 in
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/dm/interface/dm.c
>> [0]PETSC ERROR: PCFieldSplitSetDefaults() line 337 in
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>> [0]PETSC ERROR: PCSetUp_FieldSplit() line 458 in
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>> [0]PETSC ERROR: PCSetUp() line 890 in
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c
>> [0]PETSC ERROR: KSPSetUp() line 278 in
>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c
>> [0]PETSC ERROR: solveModel() line 181 in
>> "unknowndirectory/"/user/bkhanal/home/works/AdLemModel/src/PetscAdLemTaras3D.cxx
>> WARNING! There are options you set that were not used!
>> WARNING! could be spelling mistake, etc!
>> Option left: name:-pc_fieldsplit_1_fields value: 3
>>
>>>
>>> Matt
>>>
>>>
>>>>
>>>>> Matt
>>>>>
>>>>>
>>>>>>
>>>>>>> Matt
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>