[petsc-users] Tips on integrating MPI ksp petsc into my application?
Junchao Zhang
junchao.zhang at gmail.com
Tue Dec 7 23:32:08 CST 2021
On Tue, Dec 7, 2021 at 10:04 PM Faraz Hussain <faraz_hussain at yahoo.com>
wrote:
> The matrix in memory is in IJV format (Spooles) or CSR3 format (Pardiso).
> The application was written to use a variety of different direct solvers,
> but Spooles and Pardiso are the ones I am most familiar with.
>
I assume the CSR3 format consists of the a, i, j arrays used in PETSc's MATAIJ.
You can create an MPIAIJ matrix A with MatCreateMPIAIJWithArrays
<https://petsc.org/release/docs/manualpages/Mat/MatCreateMPIAIJWithArrays.html#MatCreateMPIAIJWithArrays>,
with only rank 0 providing data (i.e., the other ranks just pass m=n=0 and
i=j=a=NULL).
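To make this concrete, here is a minimal, untested sketch. The names nrows,
ncols, rowptr, colind, and vals are placeholders for whatever your application
already holds; error checking is omitted and PETSc is assumed to be
initialized:

#include <petscmat.h>

Mat                A;
PetscMPIInt        rank;
PetscInt           m = 0, n = 0;       /* zero local rows/cols off rank 0 */
const PetscInt    *ia = NULL, *ja = NULL;
const PetscScalar *va = NULL;

MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
if (rank == 0) {
  m  = nrows;   /* rank 0 provides every row ...          */
  n  = ncols;   /* ... and every column of the CSR matrix */
  ia = rowptr;  /* CSR row pointers, length nrows+1       */
  ja = colind;  /* CSR column indices                     */
  va = vals;    /* CSR values                             */
}
MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, m, n, PETSC_DETERMINE,
                          PETSC_DETERMINE, ia, ja, va, &A);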
Then you call MatCreateSubMatrix (named MatGetSubMatrix before PETSc 3.8)
<https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatGetSubMatrix.html>(A,isrow,iscol,reuse,&B)
to redistribute the imbalanced A into a balanced matrix B.
You can use PetscLayoutCreate() and friends to create a row map and a
column map (as if they were B's), use them to get the range of rows/columns
each rank wants to own, and then build isrow and iscol with
ISCreateStride(), as in the sketch below.
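Putting the last two steps together (the layouts and index sets have to be
built before the submatrix call), a rough sketch, again with error checking
omitted:

Mat         B;
PetscLayout rmap, cmap;
PetscInt    M, N, rstart, rend, cstart, cend;
IS          isrow, iscol;

MatGetSize(A, &M, &N);

/* Row layout as if it were B's: PetscLayoutSetUp splits the M rows
   nearly evenly across the ranks */
PetscLayoutCreate(PETSC_COMM_WORLD, &rmap);
PetscLayoutSetSize(rmap, M);
PetscLayoutSetUp(rmap);
PetscLayoutGetRange(rmap, &rstart, &rend);

/* Column layout, likewise */
PetscLayoutCreate(PETSC_COMM_WORLD, &cmap);
PetscLayoutSetSize(cmap, N);
PetscLayoutSetUp(cmap);
PetscLayoutGetRange(cmap, &cstart, &cend);

/* Each rank asks for the contiguous block of rows/columns it should own */
ISCreateStride(PETSC_COMM_WORLD, rend - rstart, rstart, 1, &isrow);
ISCreateStride(PETSC_COMM_WORLD, cend - cstart, cstart, 1, &iscol);

/* Redistribute the rank-0-only A into the load-balanced B */
MatCreateSubMatrix(A, isrow, iscol, MAT_INITIAL_MATRIX, &B);

ISDestroy(&isrow);
ISDestroy(&iscol);
PetscLayoutDestroy(&rmap);
PetscLayoutDestroy(&cmap);

If you solve repeatedly with the same nonzero pattern, you can keep B around
and pass MAT_REUSE_MATRIX on later calls instead of MAT_INITIAL_MATRIX.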
My approach is kind of verbose. I'll let Jed and Matt comment on whether
there are better ones.
>
> On Tuesday, December 7, 2021, 10:33:24 PM EST, Junchao Zhang <
> junchao.zhang at gmail.com> wrote:
>
> On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
> > Thanks, I took a look at ex10.c in ksp/tutorials. It seems to do as you
> > wrote: "it efficiently gets the matrix from the file spread out over all
> > the ranks."
> >
> > However, in my application I only want rank 0 to read and assemble the
> > matrix. I do not want other ranks trying to get the matrix data. The
> > reason is that the matrix is already in memory when my application is
> > ready to call the PETSc solver.
> What is the data structure of your matrix in memory?
>
> >
> > So if I am running with multiple ranks, I don't want all ranks
> > assembling the matrix. This would require a total rewrite of my
> > application, which is not possible. I realize this may sound confusing.
> > If so, I'll see if I can create an example that shows the issue.
> >
> > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith <
> > bsmith at petsc.dev> wrote:
> >
> > If you use MatLoad() it never has the entire matrix on a single rank
> > at the same time; it efficiently gets the matrix from the file spread
> > out over all the ranks.
> >
> >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users <
> >> petsc-users at mcs.anl.gov> wrote:
> >>
> >> I am studying the examples but it seems all ranks read the full matrix.
> >> Is there an MPI example where only rank 0 reads the matrix?
> >>
> >> I don't want all ranks to read my input matrix and consume a lot of
> >> memory allocating data for the arrays.
> >>
> >> I have worked with Intel's cluster sparse solver and their
> >> documentation states:
> >>
> >> " Most of the input parameters must be set on the master MPI process
> >> only, and ignored on other processes. Other MPI processes get all
> >> required data from the master MPI process using the MPI communicator,
> >> comm. "