[petsc-users] Clarification for use of MatMPIBAIJSetPreallocationCSR

Junchao Zhang junchao.zhang at gmail.com
Fri Mar 1 10:09:17 CST 2024


On Fri, Mar 1, 2024 at 9:28 AM Fabian Wermelinger <fab4100 at posteo.ch> wrote:

> Dear All,
>
> I am implementing a linear solver interface in a flow solver with support for
> PETSc.  My application uses a parallel CSR representation and it manages the
> memory for it.  I would like to wrap PETSc matrices (and vectors) around it such
> that I can use the PETSc solvers as well.  I plan to use
> MatMPIBAIJSetPreallocationCSR and VecCreateMPIWithArray for lightweight
> wrapping.  The matrix structure is static over the course of iterations.  I am
> using a derived context class to host the PETSc related context.  This context
> holds references to the PETSc matrix and vectors and KSP/PC required to call the
> solver API later in the iteration loop.  I would like to create as much as
> possible during creation of the context at the beginning of iterations (the
> context will live through iterations).
>
> My understanding is that MatMPIBAIJSetPreallocationCSR and VecCreateMPIWithArray
> DO NOT copy such that I can wrap the PETSc types around the memory managed by
> the hosting linear solver framework in the application.  The system matrix and
> RHS (the pointers to these arrays are passed to MatMPIBAIJSetPreallocationCSR
> and VecCreateMPIWithArray, respectively) are assembled by the application before
> any call to a linear solver.
>
No. MatMPIBAIJSetPreallocationCSR() copies the data, but VecCreateMPIWithArray()
does not copy; it only uses the pointers the user provided.
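A minimal sketch of the zero-copy vector wrapping could look like the following; the names n_local and rhs are hypothetical stand-ins for the application's local row count and its RHS storage:

```c
#include <petscvec.h>

/* Hedged sketch: wrap an application-owned RHS array as a PETSc Vec
 * without copying.  n_local and rhs are assumed to come from the
 * application's own parallel CSR framework. */
static PetscErrorCode WrapRHS(MPI_Comm comm, PetscInt n_local, PetscScalar rhs[], Vec *b)
{
  PetscFunctionBeginUser;
  /* block size 1; the global size is derived from the local sizes */
  PetscCall(VecCreateMPIWithArray(comm, 1, n_local, PETSC_DETERMINE, rhs, b));
  /* Values the application writes into rhs[] are seen by PETSc directly,
   * so no VecSetValues/VecAssembly calls are needed for this Vec. */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```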
PETSc's MATMPIAIJ and MATMPIBAIJ have complicated internal data structures; it
is not easy for users to get them right.
I think your intention is to avoid memory copies.  Don't worry about it too
much.  If it is MATMPIAIJ, we can do
    MatMPIAIJSetPreallocationCSR(A, i, j, v); // let PETSc copy i[], j[], v[]

    // To quickly update A when you have an updated v[]

    MatUpdateMPIAIJWithArray(A, v); // copy v[], but faster than MatSetValues()

But it seems we currently do not have a MatUpdateMPIBAIJWithArray()  :(
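Putting the pieces above together, the create-once / update-per-iteration pattern for MATMPIAIJ might be sketched as follows. The names (n_local, i, j, v, n_iter) are assumptions drawn from the discussion, not part of any API; the Vecs b and x are assumed to be created elsewhere, e.g. with VecCreateMPIWithArray:

```c
#include <petscksp.h>

/* Hedged sketch: build the matrix once from the application's local CSR
 * arrays i[], j[], v[], then refresh only the values each iteration.
 * The nonzero structure must stay fixed across iterations. */
static PetscErrorCode SolveLoop(MPI_Comm comm, PetscInt n_local,
                                const PetscInt i[], const PetscInt j[],
                                const PetscScalar v[], Vec b, Vec x,
                                PetscInt n_iter)
{
  Mat A;
  KSP ksp;

  PetscFunctionBeginUser;
  /* Setup, done once: PETSc copies i[], j[], v[] into its own storage. */
  PetscCall(MatCreate(comm, &A));
  PetscCall(MatSetSizes(A, n_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE));
  PetscCall(MatSetType(A, MATMPIAIJ));
  PetscCall(MatMPIAIJSetPreallocationCSR(A, i, j, v));

  PetscCall(KSPCreate(comm, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));

  for (PetscInt it = 0; it < n_iter; it++) {
    /* The application refreshes v[] (and the array wrapped by b) here. */
    PetscCall(MatUpdateMPIAIJWithArray(A, v)); /* copies v[]; no MatAssembly
                                                  or MatSetValues needed */
    PetscCall(KSPSolve(ksp, b, x));
  }

  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&A));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

Since MatUpdateMPIAIJWithArray() copies the values in place without changing the nonzero pattern, the KSP and PC set up once before the loop can be reused in every iteration.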

> Given this setting: for every iteration, my plan is to take the PETSc
> information from the context (Mat, Vec, KSP) and simply call KSPSolve without
> any other PETSc calls (still assuming the matrix structure is static during
> iteration).
>
> What is not clear to me:
>
> Are there any MatSetValues/VecSetValues calls followed by
> MatAssembly/VecAssembly(Begin/End) calls required for this setting?  The data in
> the arrays for which pointers have been passed to MatMPIBAIJSetPreallocationCSR
> and VecCreateMPIWithArray is computed prior to any solver call in an iteration,
> such that I am assuming no additional "set value" calls through PETSc are
> required -> am I missing something important by assuming this?
>
> Thank you for taking the time!
>
> --
> fabs
>
>