[petsc-users] Nonzero I-j locations

Matthew Knepley knepley at gmail.com
Wed May 29 18:43:14 CDT 2019


On Wed, May 29, 2019 at 7:30 PM Manav Bhatia via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> Thanks, Barry.
>
> I am working on an FE application (involving bifurcation behavior) with
> libMesh, where I need to solve the system of equations along with a few
> extra unknowns that are not directly related to the FE mesh. I am able to
> assemble the n x 1 residual (R_fe) and n x n Jacobian (J_fe) from my
> code, and libMesh provides me with the sparsity pattern for this.
>
> Next, the system of equations that I need to solve is:
>
>  [ J_fe  A ] { dX }   { R_fe  }
>  [ B     C ] { dV } = { R_ext }
>
>  where C is m x m (m << n), A is n x m, B is m x n, and R_ext is m x 1.
> A, B, and C are all dense matrices. This comes from the bordered system
> for my path-continuation solver.
>
>  I have implemented a solver using Schur factorization (this is outside
> of PETSc and does not use the FieldSplit construct). This works well for
> most cases, except when J_fe is close to singular.
>
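(For reference, I assume the Schur factorization here is the usual block
elimination on the bordered system:

  (C - B J_fe^{-1} A) dV = R_ext - B J_fe^{-1} R_fe
  J_fe dX = R_fe - A dV

which also shows why it degrades when J_fe is nearly singular, since every
step applies J_fe^{-1}.)
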
>  I am now attempting to create a monolithic matrix that solves the
> complete system.
>
>  Currently, the approach I am considering is to compute J_fe using my
> libMesh application, so that I don’t have to change that. I am defining a
> new matrix with the extra non-zero locations for A, B, C.
>
>   With J_fe computed, I am looking to copy its non-zero entries to this
> new matrix. This is where I am stumbling since I don’t know how best to get
> the non-zero locations in J_fe. Maybe there is a better approach to copy
> from J_fe to the new matrix?
>
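If you do build the monolithic matrix, one straightforward way to do the
copy is to loop over the locally owned rows of J_fe with MatGetRow() and
insert each row into the bigger matrix with MatSetValues(). A sketch, with
error checking omitted; J_big is a placeholder name for the preallocated
monolithic matrix, and I am assuming the FE dofs keep the same global
indices in it:

  PetscInt          rstart, rend, ncols;
  const PetscInt    *cols;
  const PetscScalar *vals;

  MatGetOwnershipRange(J_fe, &rstart, &rend);
  for (PetscInt row = rstart; row < rend; row++) {
    /* read one locally owned row of J_fe ... */
    MatGetRow(J_fe, row, &ncols, &cols, &vals);
    /* ... and insert it at the same global location in J_big */
    MatSetValues(J_big, 1, &row, ncols, cols, vals, INSERT_VALUES);
    MatRestoreRow(J_fe, row, &ncols, &cols, &vals);
  }
  MatAssemblyBegin(J_big, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(J_big, MAT_FINAL_ASSEMBLY);

If the extra dofs are interleaved with the FE dofs instead of appended,
the indices in cols would have to be remapped first.
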
>   I have looked through the nested matrix construct, but have not given
> it serious consideration. Maybe I should? Note that I don’t want to
> solve J_fe and C separately (not as separate systems), so the field-split
> approach will not be suitable here.
>

I would not choose Nest if you want to eventually run MUMPS, since that
will not work. I would try to build your matrix using

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSubMatrix.html

obtained from your bigger matrix. This is our interface for assembling into
portions of your matrix as if each portion were the entire matrix.
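
A sketch of what that can look like; the names J, map, and is_fe are
placeholders for your monolithic matrix, its local-to-global mapping, and
the index set selecting the FE dofs in local numbering:

  /* J is preallocated with the FE sparsity plus the dense border blocks */
  MatSetLocalToGlobalMapping(J, map, map);

  Mat J_fe_block;
  MatGetLocalSubMatrix(J, is_fe, is_fe, &J_fe_block);
  /* inside the element loop, insert exactly as if this were J_fe itself:
     MatSetValuesLocal(J_fe_block, nedof, edofs, nedof, edofs, Ke, ADD_VALUES); */
  MatRestoreLocalSubMatrix(J, is_fe, is_fe, &J_fe_block);

  MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);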

  Thanks,

    Matt


>   Also, I am currently using MUMPS for all my parallel solves.
>
>    I would appreciate any advice.
>
> Regards,
> Manav
>
>
> > On May 29, 2019, at 6:07 PM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> >
> >
> >  Manav,
> >
> >  For parallel sparse matrices using the standard PETSc formats, the
> > matrix is stored in two parts on each process (see the details in
> > MatCreateAIJ()), so there is no inexpensive way to directly access the
> > IJ locations as a single local matrix. What are you hoping to use the
> > information for? Perhaps we have other suggestions on how to achieve
> > the goal.
> >
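(Side note: if you ever do need the two pieces Barry mentions, they are
accessible with MatMPIAIJGetSeqAIJ(). A sketch, assuming A is an assembled
matrix of type MPIAIJ:

  Mat            Ad, Ao;   /* "diagonal" and "off-diagonal" local blocks */
  const PetscInt *colmap;  /* global column numbers for Ao's columns     */
  MatMPIAIJGetSeqAIJ(A, &Ad, &Ao, &colmap);
  /* Ad and Ao are SeqAIJ matrices, so MatGetRowIJ() works on each */

)
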
> >   Barry
> >
> >
> >> On May 29, 2019, at 2:27 PM, Manav Bhatia via petsc-users <
> >> petsc-users at mcs.anl.gov> wrote:
> >>
> >> Hi,
> >>
> >>   Once an MPI-AIJ matrix has been assembled, is there a method to get
> >> the nonzero I-J locations? I see one for sequential matrices here:
> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetRowIJ.html
> >> , but not for parallel matrices.
> >>
> >> Regards,
> >> Manav
> >>
> >>
> >
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

