[petsc-users] Setting up a PETSc section for field-split
Matthew Knepley
knepley at gmail.com
Thu Jan 30 09:05:56 CST 2014
On Thu, Jan 30, 2014 at 8:36 AM, Luc Berger-Vergiat <lb2653 at columbia.edu> wrote:
> Thank you so much, this helps a lot with my research. I hope that you can
> reuse that little example code for some documentation on DMShell. The
> features available through that object are really good; it is just a little
> hard (at least for me) to work through all the function calls to set it up.
>
> For instance, I understand that the DM can be used by the KSP to work on
> the Kmat operator, but what is the purpose of PCSetDM? Using the DM on
> the preconditioner operator passed to the KSP?
>
KSPSetDM() calls PCSetDM() internally. All the solver objects use the DM to
determine the sizes of Vec/Mat objects, and also to divide up the problem.
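In code, the hookup is just the following (a minimal sketch; the ksp, dm, and A names are illustrative and error checking is omitted):

```c
/* Sketch: attach a DM (e.g. a DMShell) to a KSP.
   KSPSetDM() also calls PCSetDM() on the KSP's preconditioner,
   so the fieldsplit PC can query the DM for the splits. */
KSPSetDM(ksp, dm);
/* Mark the DM inactive so the KSP uses the user-provided Mat/Vec
   instead of asking the DM to create them. */
KSPSetDMActive(ksp, PETSC_FALSE);
```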
Thanks,
Matt
> Thanks again for your help, I will try to translate this to Fortran!
>
> Cheers!
>
> Best,
> Luc
>
> On 01/30/2014 08:52 AM, Matthew Knepley wrote:
>
> On Wed, Jan 29, 2014 at 7:19 PM, Luc Berger-Vergiat <lb2653 at columbia.edu> wrote:
>
>> Hi Matt,
>> I thought I would spend the time writing a small C example of what I am
>> doing in my FORTRAN code so that you can more easily see what it is I am
>> doing wrong.
>>
>> I attached the code and its makefile. I run the code with the following
>> arguments:
>>
>> -ksp_type gmres -pc_type fieldsplit -fieldsplit_0_pc_type ilu
>> -fieldsplit_0_ksp_type preonly -fieldsplit_1_pc_type jacobi
>> -fieldsplit_1_ksp_type preonly -ksp_view -ksp_monitor
>>
>
> 1) You also need to call PetscSectionSetDof() in order to get the full
> sizes. I don't just add up the fields.
>
> 2) You need to set the DM into the KSP using KSPSetDM()
>
> 3) You need to make the DM inactive, using KSPSetDMActive(), if you want
> to provide your own Mat/Vec
>
> 4) Then you can get rid of the special code for FS
>
> 5) Your options were wrong because you named the fields
>
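Roughly, points 1-3 above amount to the following (a sketch; the section, dm, and ksp names are assumed to already exist, the dof counts are illustrative, and error checking is omitted):

```c
/* Point 1: for each point p, set the total dof as well as the
   per-field dof; the total is not inferred by summing the fields. */
PetscSectionSetDof(section, p, 2);
PetscSectionSetFieldDof(section, p, 0, 1);
PetscSectionSetFieldDof(section, p, 1, 1);

PetscSectionSetUp(section);
DMSetDefaultSection(dm, section);

KSPSetDM(ksp, dm);                /* point 2: the solver sees the DM  */
KSPSetDMActive(ksp, PETSC_FALSE); /* point 3: we provide our own Mat/Vec */
```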
> I include a revised code sample, and you should run with
>
> -ksp_type gmres -pc_type fieldsplit -fieldsplit_Field_0_pc_type ilu
> -fieldsplit_Field_0_ksp_type preonly -fieldsplit_Field_1_pc_type jacobi
> -fieldsplit_Field_1_ksp_type preonly -ksp_view -ksp_monitor
>
> Thanks,
>
> Matt
>
> and get the following output:
>>
>> start=1, End=9.
>> PetscSection with 2 fields
>> field 0 with 1 components
>> Process 0:
>> ( 1) dim 1 offset 0
>> ( 2) dim 1 offset 0
>> ( 3) dim 1 offset 0
>> ( 4) dim 1 offset 0
>> ( 5) dim 0 offset 0
>> ( 6) dim 0 offset 0
>> ( 7) dim 0 offset 0
>> ( 8) dim 0 offset 0
>> field 1 with 1 components
>> Process 0:
>> ( 1) dim 0 offset 1
>> ( 2) dim 0 offset 1
>> ( 3) dim 0 offset 1
>> ( 4) dim 0 offset 1
>> ( 5) dim 1 offset 0
>> ( 6) dim 1 offset 0
>> ( 7) dim 1 offset 0
>> ( 8) dim 1 offset 0
>> PC flag for GetDMSplits: true
>> 0 KSP Residual norm 0.000000000000e+00
>> KSP Object: 1 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: 1 MPI processes
>> type: fieldsplit
>> FieldSplit with MULTIPLICATIVE composition: total splits = 2
>> Solver info for each split is in the following KSP objects:
>> Split number 0 Defined by IS
>> KSP Object: (fieldsplit_Field_0_) 1 MPI processes
>> type: preonly
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using NONE norm type for convergence test
>> PC Object: (fieldsplit_Field_0_) 1 MPI processes
>> type: ilu
>> ILU: out-of-place factorization
>> 0 levels of fill
>> tolerance for zero pivot 2.22045e-14
>> using diagonal shift on blocks to prevent zero pivot
>> matrix ordering: natural
>> factor fill ratio given 1, needed 1
>> Factored matrix follows:
>> Matrix Object: 1 MPI processes
>> type: seqaij
>> rows=0, cols=0
>> package used to perform factorization: petsc
>> total: nonzeros=0, allocated nonzeros=0
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node routines
>> linear system matrix = precond matrix:
>> Matrix Object: 1 MPI processes
>> type: seqaij
>> rows=0, cols=0
>> total: nonzeros=0, allocated nonzeros=0
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node routines
>> Split number 1 Defined by IS
>> KSP Object: (fieldsplit_Field_1_) 1 MPI processes
>> type: preonly
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using NONE norm type for convergence test
>> PC Object: (fieldsplit_Field_1_) 1 MPI processes
>> type: ilu
>> ILU: out-of-place factorization
>> 0 levels of fill
>> tolerance for zero pivot 2.22045e-14
>> using diagonal shift on blocks to prevent zero pivot
>> matrix ordering: natural
>> factor fill ratio given 1, needed 1
>> Factored matrix follows:
>> Matrix Object: 1 MPI processes
>> type: seqaij
>> rows=0, cols=0
>> package used to perform factorization: petsc
>> total: nonzeros=0, allocated nonzeros=0
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node routines
>> linear system matrix = precond matrix:
>> Matrix Object: 1 MPI processes
>> type: seqaij
>> rows=0, cols=0
>> total: nonzeros=0, allocated nonzeros=0
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node routines
>> linear system matrix = precond matrix:
>> Matrix Object: 1 MPI processes
>> type: seqaij
>> rows=8, cols=8
>> total: nonzeros=64, allocated nonzeros=160
>> total number of mallocs used during MatSetValues calls =8
>> using I-node routines: found 2 nodes, limit used is 5
>> The solution to this problem is:
>> Vector Object: 1 MPI processes
>> type: seq
>> 0
>> 0
>> 0
>> 0
>> 0
>> 0
>> 0
>> 0
>> WARNING! There are options you set that were not used!
>> WARNING! could be spelling mistake, etc!
>> Option left: name:-fieldsplit_0_ksp_type value: preonly
>> Option left: name:-fieldsplit_0_pc_type value: ilu
>> Option left: name:-fieldsplit_1_ksp_type value: preonly
>> Option left: name:-fieldsplit_1_pc_type value: jacobi
>>
>> When I run it with the simpler arguments:
>>
>> -ksp_type gmres -pc_type jacobi -ksp_view -ksp_monitor
>>
>> I get the following output:
>>
>> start=1, End=9.
>> PetscSection with 2 fields
>> field 0 with 1 components
>> Process 0:
>> ( 1) dim 1 offset 0
>> ( 2) dim 1 offset 0
>> ( 3) dim 1 offset 0
>> ( 4) dim 1 offset 0
>> ( 5) dim 0 offset 0
>> ( 6) dim 0 offset 0
>> ( 7) dim 0 offset 0
>> ( 8) dim 0 offset 0
>> field 1 with 1 components
>> Process 0:
>> ( 1) dim 0 offset 1
>> ( 2) dim 0 offset 1
>> ( 3) dim 0 offset 1
>> ( 4) dim 0 offset 1
>> ( 5) dim 1 offset 0
>> ( 6) dim 1 offset 0
>> ( 7) dim 1 offset 0
>> ( 8) dim 1 offset 0
>> 0 KSP Residual norm 4.759858191165e+00
>> 1 KSP Residual norm 2.344421567248e+00
>> 2 KSP Residual norm 3.208390394507e-01
>> 3 KSP Residual norm 7.171256210359e-02
>> 4 KSP Residual norm 1.301032901980e-02
>> 5 KSP Residual norm 1.104121978197e-15
>> KSP Object: 1 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: 1 MPI processes
>> type: jacobi
>> linear system matrix = precond matrix:
>> Matrix Object: 1 MPI processes
>> type: seqaij
>> rows=8, cols=8
>> total: nonzeros=64, allocated nonzeros=160
>> total number of mallocs used during MatSetValues calls =8
>> using I-node routines: found 2 nodes, limit used is 5
>> The solution to this problem is:
>> Vector Object: 1 MPI processes
>> type: seq
>> -0.977778
>> -0.0444444
>> -0.844444
>> -0.327778
>> 3
>> 1
>> 1.5
>> 3
>>
>> It is obvious that something goes wrong in the fieldsplit; I'm just
>> clueless as to what it is. I think that everything is set up correctly,
>> but I'm missing something to get the KSP to recognize my splits...
>>
>> Thanks for your help.
>>
>> Best,
>> Luc
>>
>> On 01/28/2014 11:20 AM, Matthew Knepley wrote:
>>
>> On Tue, Jan 28, 2014 at 10:11 AM, Luc Berger-Vergiat <
>> lb2653 at columbia.edu> wrote:
>>
>>> What I don't really understand is why the sizes of all the subfields
>>> are zero.
>>> As you can see, all the matrix objects in my fieldsplit preconditioner
>>> have total: nonzeros=0, allocated nonzeros=0.
>>>
>>> This seems strange since I issued the following command:
>>> call DMSetDefaultSection(FSDM, FSSection, ierr)
>>>
>>
>> That section looks like you never called PetscSectionSetUp().
>>
>> Matt
>>
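For reference, the intended life cycle of the section is roughly the following (a sketch with illustrative names; error checking omitted):

```c
PetscSectionSetNumFields(section, 4);
PetscSectionSetChart(section, pStart, pEnd);
/* ... PetscSectionSetDof()/PetscSectionSetFieldDof() for each point ... */
PetscSectionSetUp(section);   /* computes the offsets from the dof counts;
                                 without this call the splits come out empty */
DMSetDefaultSection(dm, section);
```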
>>
>>> with the following FSSection:
>>> PetscSection with 4 fields
>>> field 0 with 1 components
>>> Process 0:
>>> ( 0) dim 0 offset 0
>>> ( 1) dim 1 offset 0
>>> ( 2) dim 0 offset 0
>>> ( 3) dim 0 offset 0
>>> ( 4) dim 1 offset 0
>>> ( 5) dim 0 offset 0
>>> ( 6) dim 0 offset 0
>>> ( 7) dim 0 offset 0
>>> ( 8) dim 0 offset 0
>>> ( 9) dim 0 offset 0
>>> ( 10) dim 0 offset 0
>>> ( 11) dim 0 offset 0
>>> ( 12) dim 0 offset 0
>>> ( 13) dim 0 offset 0
>>> ( 14) dim 0 offset 0
>>> ( 15) dim 0 offset 0
>>> field 1 with 1 components
>>> Process 0:
>>> ( 0) dim 1 offset 0
>>> ( 1) dim 0 offset 1
>>> ( 2) dim 1 offset 0
>>> ( 3) dim 1 offset 0
>>> ( 4) dim 0 offset 1
>>> ( 5) dim 1 offset 0
>>> ( 6) dim 0 offset 0
>>> ( 7) dim 0 offset 0
>>> ( 8) dim 0 offset 0
>>> ( 9) dim 0 offset 0
>>> ( 10) dim 0 offset 0
>>> ( 11) dim 0 offset 0
>>> ( 12) dim 0 offset 0
>>> ( 13) dim 0 offset 0
>>> ( 14) dim 0 offset 0
>>> ( 15) dim 0 offset 0
>>> field 2 with 1 components
>>> Process 0:
>>> ( 0) dim 0 offset 1
>>> ( 1) dim 0 offset 1
>>> ( 2) dim 0 offset 1
>>> ( 3) dim 0 offset 1
>>> ( 4) dim 0 offset 1
>>> ( 5) dim 0 offset 1
>>> ( 6) dim 1 offset 0
>>> ( 7) dim 1 offset 0
>>> ( 8) dim 1 offset 0
>>> ( 9) dim 1 offset 0
>>> ( 10) dim 1 offset 0
>>> ( 11) dim 1 offset 0
>>> ( 12) dim 0 offset 0
>>> ( 13) dim 0 offset 0
>>> ( 14) dim 0 offset 0
>>> ( 15) dim 0 offset 0
>>> field 3 with 1 components
>>> Process 0:
>>> ( 0) dim 0 offset 1
>>> ( 1) dim 0 offset 1
>>> ( 2) dim 0 offset 1
>>> ( 3) dim 0 offset 1
>>> ( 4) dim 0 offset 1
>>> ( 5) dim 0 offset 1
>>> ( 6) dim 0 offset 1
>>> ( 7) dim 0 offset 1
>>> ( 8) dim 0 offset 1
>>> ( 9) dim 0 offset 1
>>> ( 10) dim 0 offset 1
>>> ( 11) dim 0 offset 1
>>> ( 12) dim 1 offset 0
>>> ( 13) dim 1 offset 0
>>> ( 14) dim 1 offset 0
>>> ( 15) dim 1 offset 0
>>>
>>> I thought that by using DMSetDefaultSection() I would be done setting
>>> the fields, and that fieldsplit would detect that section and use it to
>>> construct the splits.
>>>
>>> Should I use another command to tell the PC to use the DM section?
>>>
>>> Best,
>>> Luc
>>>
>>> On 01/28/2014 10:25 AM, Matthew Knepley wrote:
>>>
>>> On Mon, Jan 27, 2014 at 1:35 PM, Luc Berger-Vergiat <
>>> lb2653 at columbia.edu> wrote:
>>>
>>>> Thanks Matt,
>>>> indeed, setting the number of fields earlier solved my issue!
>>>>
>>>> I have now made some progress getting my DM set up, but I am having
>>>> trouble setting my fieldsplit options.
>>>> I ran my problem with the -ksp_view option to see what is going on, and
>>>> I guess that somehow the section that I define in my DM is not used by the
>>>> preconditioner to split the fields.
>>>> Here is the output of PETSc:
>>>>
>>>> KSP Object: 1 MPI processes
>>>> type: gmres
>>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>>>> Orthogonalization with no iterative refinement
>>>> GMRES: happy breakdown tolerance 1e-30
>>>> maximum iterations=10000, initial guess is zero
>>>> tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16
>>>> left preconditioning
>>>> using PRECONDITIONED norm type for convergence test
>>>> PC Object: 1 MPI processes
>>>> type: fieldsplit
>>>> FieldSplit with MULTIPLICATIVE composition: total splits = 4
>>>> Solver info for each split is in the following KSP objects:
>>>> Split number 0 Defined by IS
>>>>
>>>
>>> There are 4 splits here and they are defined by an IS. Why do you
>>> think that is not what you asked for?
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>>
>>>> KSP Object: (fieldsplit_Field_0_) 1 MPI processes
>>>> type: preonly
>>>> maximum iterations=10000, initial guess is zero
>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>>>> left preconditioning
>>>> using NONE norm type for convergence test
>>>> PC Object: (fieldsplit_Field_0_) 1 MPI processes
>>>> type: ilu
>>>> ILU: out-of-place factorization
>>>> 0 levels of fill
>>>> tolerance for zero pivot 2.22045e-14
>>>> using diagonal shift on blocks to prevent zero pivot
>>>> matrix ordering: natural
>>>> factor fill ratio given 1, needed 1
>>>> Factored matrix follows:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> package used to perform factorization: petsc
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> linear system matrix = precond matrix:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> Split number 1 Defined by IS
>>>> KSP Object: (fieldsplit_Field_1_) 1 MPI processes
>>>> type: preonly
>>>> maximum iterations=10000, initial guess is zero
>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>>>> left preconditioning
>>>> using NONE norm type for convergence test
>>>> PC Object: (fieldsplit_Field_1_) 1 MPI processes
>>>> type: ilu
>>>> ILU: out-of-place factorization
>>>> 0 levels of fill
>>>> tolerance for zero pivot 2.22045e-14
>>>> using diagonal shift on blocks to prevent zero pivot
>>>> matrix ordering: natural
>>>> factor fill ratio given 1, needed 1
>>>> Factored matrix follows:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> package used to perform factorization: petsc
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> linear system matrix = precond matrix:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> Split number 2 Defined by IS
>>>> KSP Object: (fieldsplit_Field_2_) 1 MPI processes
>>>> type: preonly
>>>> maximum iterations=10000, initial guess is zero
>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>>>> left preconditioning
>>>> using NONE norm type for convergence test
>>>> PC Object: (fieldsplit_Field_2_) 1 MPI processes
>>>> type: ilu
>>>> ILU: out-of-place factorization
>>>> 0 levels of fill
>>>> tolerance for zero pivot 2.22045e-14
>>>> using diagonal shift on blocks to prevent zero pivot
>>>> matrix ordering: natural
>>>> factor fill ratio given 1, needed 1
>>>> Factored matrix follows:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> package used to perform factorization: petsc
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> linear system matrix = precond matrix:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> Split number 3 Defined by IS
>>>> KSP Object: (fieldsplit_Field_3_) 1 MPI processes
>>>> type: preonly
>>>> maximum iterations=10000, initial guess is zero
>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>>>> left preconditioning
>>>> using NONE norm type for convergence test
>>>> PC Object: (fieldsplit_Field_3_) 1 MPI processes
>>>> type: ilu
>>>> ILU: out-of-place factorization
>>>> 0 levels of fill
>>>> tolerance for zero pivot 2.22045e-14
>>>> using diagonal shift on blocks to prevent zero pivot
>>>> matrix ordering: natural
>>>> factor fill ratio given 1, needed 1
>>>> Factored matrix follows:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> package used to perform factorization: petsc
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> linear system matrix = precond matrix:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=0, cols=0
>>>> total: nonzeros=0, allocated nonzeros=0
>>>> total number of mallocs used during MatSetValues calls =0
>>>> not using I-node routines
>>>> linear system matrix = precond matrix:
>>>> Matrix Object: 1 MPI processes
>>>> type: seqaij
>>>> rows=16, cols=16
>>>> total: nonzeros=256, allocated nonzeros=256
>>>> total number of mallocs used during MatSetValues calls =0
>>>> using I-node routines: found 4 nodes, limit used is 5
>>>>
>>>> I am also attaching part of my code which I use to generate the DM, the
>>>> KSP and the PC objects.
>>>>
>>>> Best,
>>>> Luc
>>>>
>>>> On 01/25/2014 10:31 AM, Matthew Knepley wrote:
>>>>
>>>> On Fri, Jan 24, 2014 at 5:10 PM, Luc Berger-Vergiat <
>>>> lb2653 at columbia.edu> wrote:
>>>>
>>>>> Hi all,
>>>>> I want to use PETSc as a solver for an FEM problem: modeling of
>>>>> shear bands using a four-field mixed formulation.
>>>>>
>>>>> So far I am just trying to set up two fields in a PETSc section,
>>>>> assign dofs to each field, and then set up the section to pass it to a DM. I
>>>>> am taking this approach because, in general, I want fieldsplit to work
>>>>> purely at the algebraic level, without knowledge of boundary conditions or
>>>>> geometry.
>>>>>
>>>>> As of now I have issues when I try to assign a point and its
>>>>> associated degree of freedom to a field using PetscSectionSetFieldDof.
>>>>> Is this the correct way to associate a dof/point with a field?
>>>>>
>>>>
>>>> You have to set the number of fields before the chart. I am updating
>>>> the docs.
>>>>
>>>> Thanks,
>>>>
>>>> Matt
>>>>
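The ordering Matt means, sketched (illustrative names; error checking omitted):

```c
PetscSectionCreate(PETSC_COMM_WORLD, &section);
PetscSectionSetNumFields(section, 2);         /* must come before the chart */
PetscSectionSetChart(section, pStart, pEnd);
/* only now can per-field dofs be assigned */
PetscSectionSetFieldDof(section, p, 0, 1);
```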
>>>>
>>>>> I attached an example code and its makefile.
>>>>>
>>>>> --
>>>>> Best,
>>>>> Luc
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their
>>>> experiments is infinitely more interesting than any results to which their
>>>> experiments lead.
>>>> -- Norbert Wiener
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>