[petsc-users] Question regarding naming of fieldsplit splits

Matthew Knepley knepley at gmail.com
Mon Jul 1 09:30:11 CDT 2024


On Mon, Jul 1, 2024 at 9:48 AM Blauth, Sebastian <
sebastian.blauth at itwm.fraunhofer.de> wrote:

> Dear Matt,
>
>
>
> thanks a lot for your help. Unfortunately, these extra options do not have
> any effect for me; I still get the “u” and “p” fieldnames. They also would
> not help me get rid of the “c” fieldname – on that level of the fieldsplit I
> am basically using your approach already, and it still shows up. The output
> of -ksp_view is unchanged, so I do not attach it here again. Maybe I
> misunderstood you?
>

Oh, we make an exception for single fields, since we assume you would want
to use the name. I will have to add an extra option to turn off the naming.

   Thanks,

     Matt


> Thanks for the help and best regards,
>
> Sebastian
>
>
>
> --
>
> Dr. Sebastian Blauth
>
> Fraunhofer-Institut für
>
> Techno- und Wirtschaftsmathematik ITWM
>
> Abteilung Transportvorgänge
>
> Fraunhofer-Platz 1, 67663 Kaiserslautern
>
> Telefon: +49 631 31600-4968
>
> sebastian.blauth at itwm.fraunhofer.de
>
> https://www.itwm.fraunhofer.de
>
>
>
> *From:* Matthew Knepley <knepley at gmail.com>
> *Sent:* Monday, July 1, 2024 2:27 PM
> *To:* Blauth, Sebastian <sebastian.blauth at itwm.fraunhofer.de>
> *Cc:* petsc-users at mcs.anl.gov
> *Subject:* Re: [petsc-users] Question regarding naming of fieldsplit
> splits
>
>
>
> On Fri, Jun 28, 2024 at 4:05 AM Blauth, Sebastian <
> sebastian.blauth at itwm.fraunhofer.de> wrote:
>
> Hello everyone,
>
>
>
> I have a question regarding the naming convention using PETSc’s
> PCFieldsplit. I have been following
> https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html
> to create a DMShell with FEniCS in order to customize PCFieldsplit for my
> application.
>
> I am using the following options, which work nicely for me:
>
>
>
> -ksp_type fgmres
>
> -pc_type fieldsplit
>
> -pc_fieldsplit_0_fields 0, 1
>
> -pc_fieldsplit_1_fields 2
>
> -pc_fieldsplit_type additive
>
> -fieldsplit_0_ksp_type fgmres
>
> -fieldsplit_0_pc_type fieldsplit
>
> -fieldsplit_0_pc_fieldsplit_type schur
>
> -fieldsplit_0_pc_fieldsplit_schur_fact_type full
>
> -fieldsplit_0_pc_fieldsplit_schur_precondition selfp
>
> -fieldsplit_0_fieldsplit_u_ksp_type preonly
>
> -fieldsplit_0_fieldsplit_u_pc_type lu
>
> -fieldsplit_0_fieldsplit_p_ksp_type cg
>
> -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14
>
> -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30
>
> -fieldsplit_0_fieldsplit_p_pc_type icc
>
> -fieldsplit_0_ksp_rtol 1e-14
>
> -fieldsplit_0_ksp_atol 1e-30
>
> -fieldsplit_0_ksp_monitor_true_residual
>
> -fieldsplit_c_ksp_type preonly
>
> -fieldsplit_c_pc_type lu
>
> -ksp_view
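>
> (For reference – not part of the original setup – the same options can also
> be set from petsc4py before the call to ksp.setFromOptions(); a minimal
> sketch showing only a few of the entries from the list above:)
>
> from petsc4py import PETSc
>
> opts = PETSc.Options()
> opts["ksp_type"] = "fgmres"
> opts["pc_type"] = "fieldsplit"
> opts["pc_fieldsplit_0_fields"] = "0,1"
> opts["pc_fieldsplit_1_fields"] = "2"
> opts["fieldsplit_0_pc_fieldsplit_type"] = "schur"
> opts["fieldsplit_c_pc_type"] = "lu"
> # ...remaining options analogously; ksp.setFromOptions() picks them up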
>
>
>
> By default, we use the field names, but you can prevent this by specifying
> the fields by hand, so
>
>
>
> -fieldsplit_0_pc_fieldsplit_0_fields 0
> -fieldsplit_0_pc_fieldsplit_1_fields 1
>
>
>
> should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I think
> it is easier to remember than some extra option.
>
>
>
>   Thanks,
>
>
>
>      Matt
>
>
>
> Note that this is just an academic example (sorry for the low solver
> tolerances) to test the approach, consisting of a Stokes equation and some
> concentration equation (which is not even coupled to Stokes, just for
> testing).
>
> Completely analogous to
> https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html,
> I translate my ISes to a PETSc Section, which is then supplied to a DMShell
> and assigned to a KSP. I am not so familiar with the code or how/why this
> works, but it seems to do so perfectly. I name the fields of my section with
> petsc4py using
>
>
>
> section.setFieldName(0, "u")
>
> section.setFieldName(1, "p")
>
> section.setFieldName(2, "c")
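>
> (For reference, a rough petsc4py sketch of the Section/DMShell wiring
> described above – dofs_u, dofs_p, dofs_c are placeholders for the global dof
> index lists taken from the FEniCS dofmaps, A is the assembled PETSc matrix,
> and on older petsc4py versions setLocalSection is named setDefaultSection:)
>
> from petsc4py import PETSc
>
> # one chart point per dof, one dof per point, each tagged with its field
> section = PETSc.Section().create()
> section.setNumFields(3)
> section.setFieldName(0, "u")
> section.setFieldName(1, "p")
> section.setFieldName(2, "c")
> section.setChart(0, len(dofs_u) + len(dofs_p) + len(dofs_c))
> for field, dofs in enumerate((dofs_u, dofs_p, dofs_c)):
>     for p in dofs:
>         section.setDof(p, 1)
>         section.setFieldDof(p, field, 1)
> section.setUp()
>
> dm = PETSc.DMShell().create()
> dm.setLocalSection(section)   # setDefaultSection on older petsc4py
> dm.setUp()
>
> ksp = PETSc.KSP().create()
> ksp.setOperators(A)
> ksp.setDM(dm)
> ksp.setDMActive(False)        # the DM only describes the splits, not the operator
> ksp.setFromOptions()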
>
>
>
> However, this is also reflected in the way I access the fieldsplit options
> from the command line. My question is: Is there any way of not using the
> field names specified in Python, but instead using the index of the split as
> defined with “-pc_fieldsplit_0_fields 0, 1” and “-pc_fieldsplit_1_fields
> 2”? That is, instead of the prefix “fieldsplit_0_fieldsplit_u” I want to
> write “fieldsplit_0_fieldsplit_0”, instead of “fieldsplit_0_fieldsplit_p” I
> want to use “fieldsplit_0_fieldsplit_1”, and instead of “fieldsplit_c” I
> want to use “fieldsplit_1”. Just changing the names of the fields to
>
>
>
> section.setFieldName(0, "0")
>
> section.setFieldName(1, "1")
>
> section.setFieldName(2, "2")
>
>
>
> obviously does not work as expected: it works for velocity and pressure,
> but not for the concentration – the prefix there becomes “fieldsplit_2”
> rather than “fieldsplit_1”. In the docs, I have found
> https://petsc.org/main/manualpages/PC/PCFieldSplitSetFields/, which seems to
> suggest that the fieldname can be supplied, but I don’t see how to do so
> from the command line. Also, for the sake of completeness, I attach the
> output of the solve with “-ksp_view” below.
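>
> (As an aside – not the command-line route asked about here – petsc4py can
> also define the splits directly from ISes with names of your choosing,
> bypassing the Section field names entirely; a hypothetical sketch, with
> is_up and is_c being the ISes for the combined velocity/pressure block and
> the concentration block:)
>
> pc = ksp.getPC()
> pc.setType(PETSc.PC.Type.FIELDSPLIT)
> # the name in each tuple becomes the split's option prefix, so these two
> # splits are addressed as -fieldsplit_0_* and -fieldsplit_1_*
> pc.setFieldSplitIS(("0", is_up), ("1", is_c))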
>
>
>
> Thanks a lot in advance and best regards,
>
> Sebastian
>
>
>
>
>
> The output of -ksp_view is the following:
>
> KSP Object: 1 MPI processes
>
>   type: fgmres
>
>     restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>
>     happy breakdown tolerance 1e-30
>
>   maximum iterations=10000, initial guess is zero
>
>   tolerances:  relative=1e-05, absolute=1e-11, divergence=10000.
>
>   right preconditioning
>
>   using UNPRECONDITIONED norm type for convergence test
>
> PC Object: 1 MPI processes
>
>   type: fieldsplit
>
>     FieldSplit with ADDITIVE composition: total splits = 2
>
>     Solver info for each split is in the following KSP objects:
>
>   Split number 0 Defined by IS
>
>   KSP Object: (fieldsplit_0_) 1 MPI processes
>
>     type: fgmres
>
>       restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>
>       happy breakdown tolerance 1e-30
>
>     maximum iterations=10000, initial guess is zero
>
>     tolerances:  relative=1e-14, absolute=1e-30, divergence=10000.
>
>     right preconditioning
>
>     using UNPRECONDITIONED norm type for convergence test
>
>   PC Object: (fieldsplit_0_) 1 MPI processes
>
>     type: fieldsplit
>
>       FieldSplit with Schur preconditioner, factorization FULL
>
>       Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses A00's diagonal's inverse
>
>       Split info:
>
>       Split number 0 Defined by IS
>
>       Split number 1 Defined by IS
>
>       KSP solver for A00 block
>
>         KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes
>
>           type: preonly
>
>           maximum iterations=10000, initial guess is zero
>
>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>
>           left preconditioning
>
>           using NONE norm type for convergence test
>
>         PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes
>
>           type: lu
>
>             out-of-place factorization
>
>             tolerance for zero pivot 2.22045e-14
>
>             matrix ordering: nd
>
>             factor fill ratio given 5., needed 3.92639
>
>               Factored matrix follows:
>
>                 Mat Object: 1 MPI processes
>
>                   type: seqaij
>
>                   rows=4290, cols=4290
>
>                   package used to perform factorization: petsc
>
>                   total: nonzeros=375944, allocated nonzeros=375944
>
>                     using I-node routines: found 2548 nodes, limit used is 5
>
>           linear system matrix = precond matrix:
>
>           Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes
>
>             type: seqaij
>
>             rows=4290, cols=4290
>
>             total: nonzeros=95748, allocated nonzeros=95748
>
>             total number of mallocs used during MatSetValues calls=0
>
>               using I-node routines: found 3287 nodes, limit used is 5
>
>       KSP solver for S = A11 - A10 inv(A00) A01
>
>         KSP Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes
>
>           type: cg
>
>           maximum iterations=10000, initial guess is zero
>
>           tolerances:  relative=1e-14, absolute=1e-30, divergence=10000.
>
>           left preconditioning
>
>           using PRECONDITIONED norm type for convergence test
>
>         PC Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes
>
>           type: icc
>
>             out-of-place factorization
>
>             0 levels of fill
>
>             tolerance for zero pivot 2.22045e-14
>
>             using Manteuffel shift [POSITIVE_DEFINITE]
>
>             matrix ordering: natural
>
>             factor fill ratio given 1., needed 1.
>
>               Factored matrix follows:
>
>                 Mat Object: 1 MPI processes
>
>                   type: seqsbaij
>
>                   rows=561, cols=561
>
>                   package used to perform factorization: petsc
>
>                   total: nonzeros=5120, allocated nonzeros=5120
>
>                       block size is 1
>
>           linear system matrix followed by preconditioner matrix:
>
>           Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes
>
>             type: schurcomplement
>
>             rows=561, cols=561
>
>               Schur complement A11 - A10 inv(A00) A01
>
>               A11
>
>                 Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes
>
>                   type: seqaij
>
>                   rows=561, cols=561
>
>                   total: nonzeros=3729, allocated nonzeros=3729
>
>                   total number of mallocs used during MatSetValues calls=0
>
>                     not using I-node routines
>
>               A10
>
>                 Mat Object: 1 MPI processes
>
>                   type: seqaij
>
>                   rows=561, cols=4290
>
>                   total: nonzeros=19938, allocated nonzeros=19938
>
>                   total number of mallocs used during MatSetValues calls=0
>
>                     not using I-node routines
>
>               KSP of A00
>
>                 KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes
>
>                   type: preonly
>
>                   maximum iterations=10000, initial guess is zero
>
>                   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>
>                   left preconditioning
>
>                   using NONE norm type for convergence test
>
>                 PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes
>
>                   type: lu
>
>                     out-of-place factorization
>
>                     tolerance for zero pivot 2.22045e-14
>
>                     matrix ordering: nd
>
>                     factor fill ratio given 5., needed 3.92639
>
>                       Factored matrix follows:
>
>                         Mat Object: 1 MPI processes
>
>                           type: seqaij
>
>                           rows=4290, cols=4290
>
>                           package used to perform factorization: petsc
>
>                           total: nonzeros=375944, allocated nonzeros=375944
>
>                             using I-node routines: found 2548 nodes, limit used is 5
>
>                   linear system matrix = precond matrix:
>
>                   Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes
>
>                     type: seqaij
>
>                     rows=4290, cols=4290
>
>                     total: nonzeros=95748, allocated nonzeros=95748
>
>                     total number of mallocs used during MatSetValues calls=0
>
>                       using I-node routines: found 3287 nodes, limit used is 5
>
>               A01
>
>                 Mat Object: 1 MPI processes
>
>                   type: seqaij
>
>                   rows=4290, cols=561
>
>                   total: nonzeros=19938, allocated nonzeros=19938
>
>                   total number of mallocs used during MatSetValues calls=0
>
>                     using I-node routines: found 3287 nodes, limit used is 5
>
>           Mat Object: 1 MPI processes
>
>             type: seqaij
>
>             rows=561, cols=561
>
>             total: nonzeros=9679, allocated nonzeros=9679
>
>             total number of mallocs used during MatSetValues calls=0
>
>               not using I-node routines
>
>     linear system matrix = precond matrix:
>
>     Mat Object: (fieldsplit_0_) 1 MPI processes
>
>       type: seqaij
>
>       rows=4851, cols=4851
>
>       total: nonzeros=139353, allocated nonzeros=139353
>
>       total number of mallocs used during MatSetValues calls=0
>
>         using I-node routines: found 3830 nodes, limit used is 5
>
>   Split number 1 Defined by IS
>
>   KSP Object: (fieldsplit_c_) 1 MPI processes
>
>     type: preonly
>
>     maximum iterations=10000, initial guess is zero
>
>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>
>     left preconditioning
>
>     using NONE norm type for convergence test
>
>   PC Object: (fieldsplit_c_) 1 MPI processes
>
>     type: lu
>
>       out-of-place factorization
>
>       tolerance for zero pivot 2.22045e-14
>
>       matrix ordering: nd
>
>       factor fill ratio given 5., needed 4.24323
>
>         Factored matrix follows:
>
>           Mat Object: 1 MPI processes
>
>             type: seqaij
>
>             rows=561, cols=561
>
>             package used to perform factorization: petsc
>
>             total: nonzeros=15823, allocated nonzeros=15823
>
>               not using I-node routines
>
>     linear system matrix = precond matrix:
>
>     Mat Object: (fieldsplit_c_) 1 MPI processes
>
>       type: seqaij
>
>       rows=561, cols=561
>
>       total: nonzeros=3729, allocated nonzeros=3729
>
>       total number of mallocs used during MatSetValues calls=0
>
>         not using I-node routines
>
>   linear system matrix = precond matrix:
>
>   Mat Object: 1 MPI processes
>
>     type: seqaij
>
>     rows=5412, cols=5412
>
>     total: nonzeros=190416, allocated nonzeros=190416
>
>     total number of mallocs used during MatSetValues calls=0
>
>       using I-node routines: found 3833 nodes, limit used is 5
>
>
>
> --
>
> Dr. Sebastian Blauth
>
> Fraunhofer-Institut für
>
> Techno- und Wirtschaftsmathematik ITWM
>
> Abteilung Transportvorgänge
>
> Fraunhofer-Platz 1, 67663 Kaiserslautern
>
> Telefon: +49 631 31600-4968
>
> sebastian.blauth at itwm.fraunhofer.de
>
> https://www.itwm.fraunhofer.de
>
>
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>