[petsc-users] Tips on integrating MPI ksp petsc into my application?
Barry Smith
bsmith at petsc.dev
Mon Dec 13 13:51:05 CST 2021
Sorry, I didn't notice these emails for a long time.
PETSc does provide a "simple" mechanism to redistribute your matrix that does not require you to explicitly do the redistribution.
You must create an MPIAIJ matrix over all the MPI ranks, but simply provide all the rows on the first rank and zero rows on the rest of the ranks (you can use MatCreateMPIAIJWithArrays <https://petsc.org/release/docs/manualpages/Mat/MatCreateMPIAIJWithArrays.html#MatCreateMPIAIJWithArrays>), then use -ksp_type preonly -pc_type redistribute. You control the parallel KSP and preconditioner with, for example, -redistribute_ksp_type gmres -redistribute_pc_type bjacobi.
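A minimal sketch of that setup (the function name SolveWithRedistribute and the arrays ia, ja, a, rhs are placeholders for whatever your application already has; it assumes a square matrix of global size N stored in 0-based CSR form):

#include <petscksp.h>

/* Sketch only: rank 0 passes its already-assembled CSR arrays, every other
   rank contributes zero rows; PCREDISTRIBUTE does the redistribution. */
PetscErrorCode SolveWithRedistribute(MPI_Comm comm, PetscInt N, const PetscInt *ia, const PetscInt *ja, const PetscScalar *a, const PetscScalar *rhs, Vec *x)
{
  Mat            A;
  Vec            b;
  KSP            ksp;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  if (rank == 0) { /* rank 0 provides every row of the global matrix */
    ierr = MatCreateMPIAIJWithArrays(comm, N, N, N, N, ia, ja, a, &A);CHKERRQ(ierr);
  } else {         /* the remaining ranks provide zero rows and columns */
    ierr = MatCreateMPIAIJWithArrays(comm, 0, 0, N, N, NULL, NULL, NULL, &A);CHKERRQ(ierr);
  }

  /* Vectors created from A inherit its lopsided layout: every entry lives on rank 0 */
  ierr = MatCreateVecs(A, x, &b);CHKERRQ(ierr);
  if (rank == 0) {
    PetscInt row;
    for (row = 0; row < N; row++) {ierr = VecSetValue(b, row, rhs[row], INSERT_VALUES);CHKERRQ(ierr);}
  }
  ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(b);CHKERRQ(ierr);

  ierr = KSPCreate(comm, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* honors -ksp_type preonly -pc_type redistribute ... */
  ierr = KSPSolve(ksp, b, *x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Run with, for example,

  mpiexec -n 8 ./app -ksp_type preonly -pc_type redistribute -redistribute_ksp_type gmres -redistribute_pc_type bjacobi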
The redistribute PC type manages distributing the matrix and vectors across all the ranks for you. As the PETSc documentation notes, this is not a recommended use of PETSc for large numbers of ranks, due to Amdahl's law; for truly good parallel performance you must build the matrix in parallel.

Barry
> On Dec 8, 2021, at 12:32 AM, Junchao Zhang <junchao.zhang at gmail.com> wrote:
>
>
>
> On Tue, Dec 7, 2021 at 10:04 PM Faraz Hussain <faraz_hussain at yahoo.com <mailto:faraz_hussain at yahoo.com>> wrote:
> The matrix in memory is in IJV (Spooles) or CSR3 (Pardiso) format. The application was written to use a variety of different direct solvers, but Spooles and Pardiso are the ones I am most familiar with.
> I assume the CSR3 format has the a, i, j arrays used in PETSc's MATAIJ.
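> For reference, a tiny made-up example of those arrays in the 0-based form MatCreateMPIAIJWithArrays expects (Pardiso's CSR3 arrays are typically 1-based, so the indices may need to be shifted by one):
>
>   /* 3x3 matrix
>        [ 1 0 2 ]
>        [ 0 3 0 ]
>        [ 4 0 5 ]
>      in 0-based CSR form: */
>   PetscInt    ia[] = {0, 2, 3, 5};              /* row pointers, length nrows+1 */
>   PetscInt    ja[] = {0, 2, 1, 0, 2};           /* column indices */
>   PetscScalar a[]  = {1.0, 2.0, 3.0, 4.0, 5.0}; /* nonzero values */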
> You can create an MPIAIJ matrix A with MatCreateMPIAIJWithArrays <https://petsc.org/release/docs/manualpages/Mat/MatCreateMPIAIJWithArrays.html#MatCreateMPIAIJWithArrays>, with only rank 0 providing data (i.e., the other ranks just pass m=n=0 and i=j=a=NULL).
> Then you call MatGetSubMatrix <https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatGetSubMatrix.html>(A,isrow,iscol,reuse,&B) (renamed MatCreateSubMatrix in PETSc 3.8 and later) to redistribute the imbalanced A into a load-balanced matrix B.
> You can use PetscLayoutCreate() and friends to create a row map and a column map (as if they were B's), use them to get the range of rows/columns each rank wants to own, and then build isrow and iscol with ISCreateStride(); see the sketch below.
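> A minimal sketch of those steps (variable names are placeholders; A is the rank-0-only matrix from the previous step and N its global size, assumed square):
>
>   PetscLayout    map;
>   IS             isrow, iscol;
>   PetscInt       rstart, rend;
>   Mat            B;
>   PetscErrorCode ierr;
>
>   /* Build a balanced layout for the target matrix B */
>   ierr = PetscLayoutCreate(PETSC_COMM_WORLD, &map);CHKERRQ(ierr);
>   ierr = PetscLayoutSetSize(map, N);CHKERRQ(ierr);               /* global size; local sizes left to PETSC_DECIDE */
>   ierr = PetscLayoutSetUp(map);CHKERRQ(ierr);
>   ierr = PetscLayoutGetRange(map, &rstart, &rend);CHKERRQ(ierr); /* rows this rank should own in B */
>
>   /* Each rank asks for its contiguous block of rows and (diagonal-block) columns */
>   ierr = ISCreateStride(PETSC_COMM_WORLD, rend - rstart, rstart, 1, &isrow);CHKERRQ(ierr);
>   ierr = ISCreateStride(PETSC_COMM_WORLD, rend - rstart, rstart, 1, &iscol);CHKERRQ(ierr);
>
>   /* Redistribute the imbalanced A into the balanced B */
>   ierr = MatCreateSubMatrix(A, isrow, iscol, MAT_INITIAL_MATRIX, &B);CHKERRQ(ierr);
>
>   ierr = ISDestroy(&isrow);CHKERRQ(ierr);
>   ierr = ISDestroy(&iscol);CHKERRQ(ierr);
>   ierr = PetscLayoutDestroy(&map);CHKERRQ(ierr);
>
> B then has a balanced row distribution and can be passed to KSPSetOperators() as usual.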
>
> My approach is kind of verbose. I would let Jed and Matt comment on whether there are better ones.
>
>
>
>
>
> On Tuesday, December 7, 2021, 10:33:24 PM EST, Junchao Zhang <junchao.zhang at gmail.com <mailto:junchao.zhang at gmail.com>> wrote:
>
>
>
>
>
>
>
> On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users <petsc-users at mcs.anl.gov <mailto:petsc-users at mcs.anl.gov>> wrote:
> > Thanks, I took a look at ex10.c in ksp/tutorials. It seems to do as you wrote: "it efficiently gets the matrix from the file spread out over all the ranks."
> >
> > However, in my application I only want rank 0 to read and assemble the matrix. I do not want other ranks trying to get the matrix data. The reason is the matrix is already in memory when my application is ready to call the PETSc solver.
> What is the data structure of your matrix in memory?
>
> >
> >
> > So if I am running with multiple ranks, I don't want all ranks assembling the matrix. This would require a total re-write of my application, which is not possible. I realize this may sound confusing. If so, I'll see if I can create an example that shows the issue.
> >
> >
> >
> >
> >
> > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith <bsmith at petsc.dev <mailto:bsmith at petsc.dev>> wrote:
> >
> >
> >
> >
> >
> >
> > If you use MatLoad() it never has the entire matrix on a single rank at the same time; it efficiently gets the matrix from the file spread out over all the ranks.
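> > A minimal sketch of that pattern, in the style of ksp/tutorials ex10.c (the file name is a placeholder):
> >
> >   Mat            A;
> >   PetscViewer    viewer;
> >   PetscErrorCode ierr;
> >
> >   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
> >   ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
> >   ierr = MatSetFromOptions(A);CHKERRQ(ierr);
> >   ierr = MatLoad(A, viewer);CHKERRQ(ierr);   /* the matrix ends up spread across the ranks; no rank holds all of it */
> >   ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);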
> >
> >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users <petsc-users at mcs.anl.gov <mailto:petsc-users at mcs.anl.gov>> wrote:
> >>
> >> I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix?
> >>
> >> I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays.
> >>
> >> I have worked with Intel's cluster sparse solver and their documentation states:
> >>
> >> " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. "
> >
> >
>