[petsc-users] Setting up a matrix for Lagrange multiplier

Matthew Knepley knepley at gmail.com
Tue Apr 25 04:53:53 CDT 2023


On Tue, Apr 25, 2023 at 5:38 AM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalingam at stfc.ac.uk> wrote:

> Thank you for your response. I will come back to the error later (for now
> I have it working with -pc_fieldsplit_detect_saddle_point.)
>
>
>
> I would like to understand how I need to index while running in parallel.
>
>
>
> (Q1) Index set (IS) for the ISLocalToGlobalMapping
>
>
>
>
>
>     for (int i = 0; i < N; i++)
>         vec_col_index[i] = i;
>
>     for (int i = 0; i < C; i++)
>         vec_shifted_row_index[i] = i + N;
>
>
>
>
>
>     ISLocalToGlobalMapping phi_mapping, lamda_mapping;
>
>
>
>     ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, C,
> vec_shifted_row_index, PETSC_COPY_VALUES, &lamda_mapping);
>
>     ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, N, vec_col_index,
> PETSC_COPY_VALUES, &phi_mapping);
>
>
>
> (i)                  I believe the above index set is created by all
> parallel processes – why then would I have to offset it by the first
> index of this process?
>
> Since the mapping always holds, and we would only use a subset of this
> index set while populating the submatrices, I can see that I might have
> an over-allocated index set, but I can't see why it would be wrong.
>

Oh, N is the global size of your matrix. Then what your L2G map says above
is "replicate this matrix on every process", which is not a scalable
solution. A scalable solution has only part of the rows on each process,
which is the default in PETSc. In my example, I just
put the owned rows/cols on each process. You could choose to have an
overlap, so that some shared rows also map to the local
representation, but I did not do that in my example.
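
For instance, a minimal sketch of what I mean (the names n, c, phi_start,
and lambda_start are illustrative: the locally owned counts of phi and
lambda dofs and the first owned index of each field on this process):

    PetscInt *phi_idx, *lambda_idx;

    PetscMalloc1(n, &phi_idx);
    PetscMalloc1(c, &lambda_idx);
    /* each process maps only its owned dofs, not all N (or C) of them */
    for (PetscInt i = 0; i < n; i++) phi_idx[i] = phi_start + i;
    for (PetscInt i = 0; i < c; i++) lambda_idx[i] = N + lambda_start + i;
    ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, n, phi_idx,
                                 PETSC_COPY_VALUES, &phi_mapping);
    ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, c, lambda_idx,
                                 PETSC_COPY_VALUES, &lamda_mapping);
    PetscFree(phi_idx);
    PetscFree(lambda_idx);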


> (ii)                What would be the first index of this process anyway,
> since at this point I haven’t declared any parallel sub-matrix or
> sub-vector?
>

Either you know the parallel layout of your matrix, or if you let PETSc
decide for you, then this function determines it

  https://petsc.org/main/manualpages/Sys/PetscSplitOwnership/
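
For example (a sketch; n and c come out as the local sizes):

    PetscInt n = PETSC_DECIDE, c = PETSC_DECIDE;
    PetscInt phi_start;

    PetscSplitOwnership(PETSC_COMM_WORLD, &n, &N);
    PetscSplitOwnership(PETSC_COMM_WORLD, &c, &C);
    /* first owned index = sum of the local sizes on lower ranks */
    MPI_Scan(&n, &phi_start, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
    phi_start -= n;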


> (Q2) Index set (IS) while creating the submatrix
>
>
>
>    for (int i = 0; i < C; i++)
>        vec_row_index[i] = i;
>
>
>
>    Mat SubP;
>
>    IS row_isx, col_isx;
>
>
>
>    ISCreateGeneral(PETSC_COMM_WORLD, C, vec_row_index, PETSC_COPY_VALUES,
> &row_isx);
>
>    ISCreateGeneral(PETSC_COMM_WORLD, N, vec_col_index, PETSC_COPY_VALUES,
> &col_isx);
>
>
>
>    MatSetOption(A[level], MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);
>
>
>
>    MatSetLocalToGlobalMapping(A, lamda_mapping,  phi_mapping);
>
>
>
>    MatGetLocalSubMatrix(A, row_isx, col_isx, &SubP);
>
>
>
> I believe I will have to request a submatrix SubP of global size C x N on
> all processes – right? Or do you think I can still break row_isx and
> col_isx into parallel parts?
>

The submatrix is local to this process, "LocalSubmatrix", and thus should
have a local number of rows and columns. I have chosen the
number of owned rows/cols since I did not need an overlap.
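
In code, something like this (a sketch; c and n are the owned counts as
above, and the index sets are in local numbering):

    IS row_isx, col_isx;

    ISCreateStride(PETSC_COMM_SELF, c, 0, 1, &row_isx); /* local lambda rows */
    ISCreateStride(PETSC_COMM_SELF, n, 0, 1, &col_isx); /* local phi cols */
    MatGetLocalSubMatrix(A, row_isx, col_isx, &SubP);
    /* ... fill SubP with MatSetValuesLocal() ... */
    MatRestoreLocalSubMatrix(A, row_isx, col_isx, &SubP);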


> (Q3) Populating the submatrix
>
>
>
>     const PetscInt *nindices;
>     PetscInt count;
>     ISGetIndices(isout[level], &nindices);
>     // isout holds the global indices of the nodes to be constrained.
>
>
>
>     PetscScalar   val = 1;
>
>
>
>     for (int irow = 0; irow < n; irow++)
>
>     {
>
>         int icol = nindices[irow];
>
>         MatSetValuesLocal(SubP,1,&irow,1,&icol,&val,ADD_VALUES);
>
>     }
>
>
>
> I do think that if I have the MatGetOwnershipRange of SubP, I will be able
> to parallelize the loop. Is MatGetOwnershipRange also available for
> submatrices?
>

It is a local submatrix, so I would only run over local things
(parallelization is implicit).
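
That is, something like (a sketch; nlocal is the number of constrained
nodes this process owns, and nindices[] is assumed to already hold local
column indices):

    for (PetscInt irow = 0; irow < nlocal; irow++) {
      PetscInt icol = nindices[irow];
      MatSetValuesLocal(SubP, 1, &irow, 1, &icol, &val, ADD_VALUES);
    }

Each rank touches only its own rows, so no ownership-range query is needed.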

I think I have shown all these operations in the example.

  Thanks,

     Matt


> If I have to parallelize the index sets, maybe I have to reorder the above
> steps; can you please make a suggestion?
>
>
>
> Kind regards,
>
> Karthik.
>
>
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Monday, 24 April 2023 at 19:24
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalingam at stfc.ac.uk>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Setting up a matrix for Lagrange multiplier
>
> On Mon, Apr 24, 2023 at 1:58 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalingam at stfc.ac.uk> wrote:
>
> Changing the names to NULL produced the same error.
>
>
>
> This is just to make the code simpler.
>
>
>
> Don’t I have to give the fields a name?
>
>
>
> They get the default name, which is the numbering you used.
>
>
>
> The problem solution didn’t change from one time step to the next.
>
>
>
> [0]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
>
> [0]PETSC ERROR: No support for this operation for this object
> type
>
> [0]PETSC ERROR: Unsupported viewer gmres
>
>
>
> You are not completely catching the error, because it should print out the
> entire options database on termination.
>
>
>
> Here it says you gave "gmres" as a Viewer, which is incorrect, but I
> cannot see all the options you used.
>
>
>
>   Thanks,
>
>
>
>      Matt
>
>
>
> [0]PETSC ERROR: WARNING! There are option(s) set that were not used!
> Could be the program crashed before they were used or a spelling mistake,
> etc!
>
> [0]PETSC ERROR: Option left: name:-fieldsplit_1_ksp_type value: preonly
> source: code
>
> [0]PETSC ERROR: Option left: name:-fieldsplit_1_ksp_view value: gmres
> source: code
>
> [0]PETSC ERROR: Option left: name:-fieldsplit_1_pc_type value: lu source:
> code
>
> [0]PETSC ERROR: Option left: name:-options_left (no value) source: code
>
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>
> [0]PETSC ERROR: Petsc Development GIT revision: v3.18.4-529-g995ec06f92
> GIT Date: 2023-02-03 18:41:48 +0000
>
> [0]PETSC ERROR:
> /Users/karthikeyan.chockalingam/AMReX/amrFEM/build/Debug/amrFEM on a  named
> HC20210312 by karthikeyan.chockalingam Mon Apr 24 18:51:25 2023
>
> [0]PETSC ERROR: Configure options --with-debugging=0
> --prefix=/Users/karthikeyan.chockalingam/AMReX/petsc
> --download-fblaslapack=yes --download-scalapack=yes --download-mumps=yes
> --with-hypre-dir=/Users/karthikeyan.chockalingam/AMReX/hypre/src/hypre
>
> [0]PETSC ERROR: #1 PetscOptionsGetViewer() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/sys/classes/viewer/interface/viewreg.c:309
>
> [0]PETSC ERROR: #2 KSPSetFromOptions() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/ksp/ksp/interface/itcl.c:522
>
> [0]PETSC ERROR: #3 PCSetUp_FieldSplit() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c:1054
>
> [0]PETSC ERROR: #4 PCSetUp() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/ksp/pc/interface/precon.c:994
>
> [0]PETSC ERROR: #5 KSPSetUp() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/ksp/ksp/interface/itfunc.c:405
>
> [0]PETSC ERROR: #6 KSPSolve_Private() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/ksp/ksp/interface/itfunc.c:824
>
> [0]PETSC ERROR: #7 KSPSolve() at
> /Users/karthikeyan.chockalingam/AMReX/SRC_PKG/petsc/src/ksp/ksp/interface/itfunc.c:1070
>
> End of program
>
> solve time 0.03416619 seconds
>
>
>
>
>
> It will be a bit difficult for me to produce a stand-alone code.
>
> Thank you for your help.
>
>
> Best,
> Karthik.
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Monday, 24 April 2023 at 18:42
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalingam at stfc.ac.uk>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Setting up a matrix for Lagrange multiplier
>
> On Mon, Apr 24, 2023 at 1:37 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalingam at stfc.ac.uk> wrote:
>
> Great to know there is an example now.
>
>
>
> Yes, -pc_fieldsplit_detect_saddle_point worked.
>
>
>
> But I would like to understand what I did wrong (and why it didn’t work).
>
>
>
> Can you show the error? Or give something self-contained that I can run.
>
>
>
> After reading many posts, I gathered that I needed to call
> PCFieldSplitSetIS to split the fields.
>
>
>
> Yes. I would give NULL for the name here.
>
>
>
> I set the first N indices to the field phi and the next C indices
> (starting from N) to the field lambda.
>
>
>
> This looks fine. However, these are global indices, so this would not work
> in parallel. You would have to offset by
>
> the first index on this process.
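>
> For example (a sketch; with the default contiguous layout each process
> owns rows [rstart, rend) of K, so its phi indices are the owned ones below
> N and its lambda indices are the owned ones at or above N):
>
>     PetscInt rstart, rend;
>     MatGetOwnershipRange(A[level], &rstart, &rend);
>     ISCreateStride(PETSC_COMM_WORLD, PetscMax(PetscMin(rend, N) - rstart, 0),
>                    rstart, 1, &zero_field_isx);
>     ISCreateStride(PETSC_COMM_WORLD, PetscMax(rend - PetscMax(rstart, N), 0),
>                    PetscMax(rstart, N), 1, &first_field_isx);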
>
>
>
>   Thanks,
>
>
>
>      Matt
>
>
>
> Please tell me what I did wrong and how I can fix the lines that are now
> commented out.
>
>
>
>     PetscErrorCode ierr;
>
>     KSP           ksp;
>
>     PC            pc;
>
>
>
>     KSPCreate(PETSC_COMM_WORLD, &ksp);
>
>     KSPSetType(ksp, KSPGMRES);
>
>     KSPSetOperators(ksp, A[level], A[level]);
>
>     ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>
>     ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
>
>
>
>     ……..
>
>
>
>     for (int i = 0; i < N; i++)
>         vec_zero_field_index[i] = i;
>
>
>
>     PetscInt *vec_first_field_index = NULL;
>     PetscMalloc(C * sizeof(PetscInt), &vec_first_field_index);
>
>     for (int i = 0; i < C; i++)
>         vec_first_field_index[i] = N + i;
>
>
>
>     IS zero_field_isx, first_field_isx;
>
>     CHKERRQ(ISCreateGeneral(PETSC_COMM_WORLD, N, vec_zero_field_index,
> PETSC_COPY_VALUES, &zero_field_isx));
>
>     CHKERRQ(ISCreateGeneral(PETSC_COMM_WORLD, C, vec_first_field_index,
> PETSC_COPY_VALUES, &first_field_isx));
>
>
>
>     ierr = PCFieldSplitSetIS(pc,"0",zero_field_isx);  CHKERRQ(ierr);
>
>     ierr = PCFieldSplitSetIS(pc,"1",first_field_isx);  CHKERRQ(ierr);
>
>
>
>     ierr = PetscOptionsSetValue(NULL, "-ksp_type", "fgmres"); CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-pc_type", "fieldsplit"); CHKERRQ(ierr);
>
>
>
>     /*
>     ierr = PetscOptionsSetValue(NULL, "-fieldsplit_0_ksp_type", "gmres"); CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-fieldsplit_0_pc_type", "jacobi"); CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-fieldsplit_1_ksp_type", "preonly"); CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-fieldsplit_1_pc_type", "lu"); CHKERRQ(ierr);
>     */
>
>
>
>     ierr = PetscOptionsSetValue(NULL, "-pc_fieldsplit_detect_saddle_point", NULL); CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-options_left", NULL); CHKERRQ(ierr);
>
>     ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
>
>     KSPSolve(ksp, b, x);
>
>
>
> Best,
>
> Karthik.
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Monday, 24 April 2023 at 17:57
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalingam at stfc.ac.uk>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Setting up a matrix for Lagrange multiplier
>
> On Mon, Apr 24, 2023 at 10:22 AM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalingam at stfc.ac.uk> wrote:
>
> Hello,
>
>
>
> I was able to construct the K matrix below (using submatrices P and P^T),
> which is of type MATAIJ,
>
> K = [A P^T
>
>        P   0]
>
> and solved it using a direct solver.
>
>
>
> I modified your example to create either AIJ or Nest matrices, but use the
> same assembly code:
>
>
>
>   https://gitlab.com/petsc/petsc/-/merge_requests/6368
>
>
>
> However, I was reading online that this is a saddle point problem and I
> should be employing PCFIELDSPLIT.
>
> Since I have one monolithic matrix K, I was not sure how to split the
> fields.
>
>
>
> With this particular matrix, you can use
>
>
>
>   -pc_fieldsplit_detect_saddle_point
>
>
>
> and it will split it automatically.
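>
> For example (illustrative command line; the executable name is yours):
>
>   ./myapp -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point
>
> The second field is detected from the rows with zero diagonal, i.e. the
> multiplier block.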
>
>
>
>   Thanks,
>
>
>
>      Matt
>
>
>
> Best regards,
>
> Karthik.
>
>
>
>
>
> From: Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalingam at stfc.ac.uk>
> Date: Wednesday, 19 April 2023 at 17:52
> To: Matthew Knepley <knepley at gmail.com>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Setting up a matrix for Lagrange multiplier
>
>  I have declared the mapping
>
>
>
>   ISLocalToGlobalMapping mapping;
>
>     ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, n, nindices,
> PETSC_COPY_VALUES, &mapping);
>
>
>
> But when I use MatSetValuesLocal(), how do I know the above mapping is
> employed, since it is not one of the parameters passed to the function?
>
>
>
> Thank you.
>
>
>
> Kind regards,
>
> Karthik.
>
>
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Tuesday, 18 April 2023 at 16:21
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalingam at stfc.ac.uk>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Setting up a matrix for Lagrange multiplier
>
> On Tue, Apr 18, 2023 at 11:16 AM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalingam at stfc.ac.uk> wrote:
>
> Thank you for your response. I spent some time understanding how
>
> MatSetValuesLocal and ISLocalToGlobalMappingCreate work.
>
>
>
> You can look at SNES ex28 where we do this with DMCOMPOSITE.
>
>
>
> Q1) Will the matrix K be of type MATMPIAIJ or MATIS?
>
> K = [A P^T
>
>        P   0]
>
>
>
> I assume MPIAIJ since IS is only used for Neumann-Neumann decompositions.
>
>
>
> Q2) Can I use both MatSetValues() and MatSetValuesLocal() to populate K?
> Since I have already used MatSetValues() to construct A.
>
>
>
> You can, and there would be no changes in serial if A is exactly the upper
> left block, but in parallel the global indices would change.
>
>
>
> Q3) What are the advantages of using MatSetValuesLocal()? Is it that I can
> construct P directly using local indices and map the entries to the global
> index in K?
>
>
>
> You have a monolithic K, so that you can use sparse direct solvers to
> check things. This is impossible with separate storage.
>
>
>
> Q4) I probably don’t have to construct an independent P matrix
>
>
>
> You wouldn't in this case.
>
>
>
>   Thanks,
>
>
>
>      Matt
>
>
>
> Best regards,
>
> Karthik.
>
>
>
>
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Tuesday, 18 April 2023 at 11:08
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalingam at stfc.ac.uk>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Setting up a matrix for Lagrange multiplier
>
> On Tue, Apr 18, 2023 at 5:24 AM Karthikeyan Chockalingam - STFC UKRI via
> petsc-users <petsc-users at mcs.anl.gov> wrote:
>
> Hello,
>
>
>
> I'm solving a problem using a Lagrange multiplier; the matrix has the
> form
>
>
>
> K = [A P^T
>
>        P   0]
>
>
>
> I am familiar with constructing K using MATMPIAIJ. However, I would like
> to know: if I have [A], can I augment it with [P], [P^T], and [0], all of
> type MATMPIAIJ? Likewise for vectors as well.
>
>
>
> Can you please point me to the right resource, if it is a common operation
> in PETSc?
>
>
>
> You can do this at least 2 ways:
>
>
>
>   1) Assemble your submatrices directly into the larger matrix by
>      constructing local-to-global maps for the emplacement, so that you do
>      not change your assembly code, except to change MatSetValues() to
>      MatSetValuesLocal(). This is usually preferable.
>
>
>
>   2) Use MATNEST and VECNEST to put pointers to submatrices and subvectors
>      directly in (a sketch follows below).
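>
>      For option 2, a minimal sketch (assuming A, P, and its transpose Pt
>      are already assembled; a NULL entry stands for the zero block):
>
>        Mat blocks[4] = {A, Pt, P, NULL}; /* row-major 2x2 layout */
>        Mat K;
>        MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &K);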
>
>
>
>   Thanks,
>
>
>
>      Matt
>
>
>
> Many thanks.
>
>
>
> Kind regards,
>
> Karthik.
>
>
>
>
>
>
>
>
>
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/