[petsc-users] Tips on integrating MPI ksp petsc into my application?

Matthew Knepley knepley at gmail.com
Tue Dec 7 21:42:01 CST 2021


On Tue, Dec 7, 2021 at 10:25 PM Faraz Hussain <faraz_hussain at yahoo.com>
wrote:

> Thanks, that makes sense. I guess I was hoping PETSc KSP is like Intel's
> cluster sparse solver, where it handles distributing the matrix to the
> other ranks for you.
>
> It sounds like that is not the case and I need to manually distribute the
> matrix to the ranks?
>

You can call

  https://petsc.org/main/docs/manualpages/Mat/MatCreateSubMatricesMPI.html

to distribute the matrix.
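
If it helps, below is a minimal sketch of the simplest way to go from a matrix
that exists only in rank 0's memory to a distributed PETSc Mat: every rank
creates the same parallel matrix, rank 0 alone inserts the entries
(MatSetValues() may set rows owned by other ranks), and MatAssemblyBegin/End()
ships each entry to the rank that owns it. The global size N, the commented-out
CSR arrays csr_ia/csr_ja/csr_vals, and the KSP solve are placeholders for
whatever your application already has; this sketch does not use
MatCreateSubMatricesMPI().

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    Vec            x, b;
    KSP            ksp;
    PetscMPIInt    rank;
    PetscInt       N = 100;          /* global size, known on every rank */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

    /* every rank creates the same parallel matrix; PETSc chooses the row split */
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);

    if (rank == 0) {
      /* hypothetical CSR arrays already sitting in the application's memory:
         for (PetscInt i = 0; i < N; i++) {
           PetscInt ncols = csr_ia[i+1] - csr_ia[i];
           ierr = MatSetValues(A, 1, &i, ncols, &csr_ja[csr_ia[i]],
                               &csr_vals[csr_ia[i]], INSERT_VALUES);CHKERRQ(ierr);
         }
      */
    }
    /* assembly ships each entry inserted on rank 0 to the rank that owns its row */
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
    /* rank 0 can fill b the same way with VecSetValues() + VecAssemblyBegin/End() */

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    return PetscFinalize();
  }

Funneling every entry through rank 0 like this is simple, but all of the data
passes through PETSc's off-process stash during assembly, so it will not scale
as well as assembling (or loading) the matrix in parallel.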

  Thanks,

    Matt


> On Tuesday, December 7, 2021, 10:18:04 PM EST, Matthew Knepley <
> knepley at gmail.com> wrote:
>
> On Tue, Dec 7, 2021 at 10:06 PM Faraz Hussain via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
> > Thanks, I took a look at ex10.c in ksp/tutorials. It seems to do as you
> > wrote: "it efficiently gets the matrix from the file spread out over all
> > the ranks."
> >
> > However, in my application I only want rank 0 to read and assemble the
> > matrix. I do not want the other ranks trying to get the matrix data. The
> > reason is that the matrix is already in memory when my application is
> > ready to call the PETSc solver.
> >
> > So if I am running with multiple ranks, I don't want all ranks
> > assembling the matrix. That would require a total rewrite of my
> > application, which is not possible. I realize this may sound confusing.
> > If so, I'll see if I can create an example that shows the issue.
>
> MPI is distributed-memory parallelism. If we want to use multiple ranks,
> then parts of the matrix must be in the different memories of the
> different processes. If you already assemble your matrix on process 0,
> then you need to communicate it to the other processes, perhaps using
> MatGetSubMatrix().
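>
> For instance, something along these lines (only a sketch: the even row
> split, the IS named rows, and the matrices A0/Adist are illustrative, and
> in current PETSc the routine is spelled MatCreateSubMatrix()). Rank 0
> initially owns every row of a parallel Mat, assembles it locally, and then
> each rank pulls out the block of rows it should own:
>
>   Mat         A0, Adist;
>   IS          rows;
>   PetscInt    N = 100, nlocal, rstart;
>   PetscMPIInt rank, size;
>
>   MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>   MPI_Comm_size(PETSC_COMM_WORLD, &size);
>
>   /* a parallel matrix whose rows all start out on rank 0 */
>   MatCreate(PETSC_COMM_WORLD, &A0);
>   MatSetSizes(A0, rank == 0 ? N : 0, rank == 0 ? N : 0, N, N);
>   MatSetFromOptions(A0);
>   MatSetUp(A0);
>   if (rank == 0) {
>     /* ... MatSetValues(A0, ...) from the data already in memory ... */
>   }
>   MatAssemblyBegin(A0, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(A0, MAT_FINAL_ASSEMBLY);
>
>   /* each rank asks for the contiguous block of rows it should own */
>   nlocal = N / size + (rank < N % size ? 1 : 0);
>   MPI_Scan(&nlocal, &rstart, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
>   rstart -= nlocal;
>   ISCreateStride(PETSC_COMM_WORLD, nlocal, rstart, 1, &rows);
>   MatCreateSubMatrix(A0, rows, rows, MAT_INITIAL_MATRIX, &Adist);
>   ISDestroy(&rows);
>   MatDestroy(&A0);   /* Adist now holds the evenly distributed copy */
>
> The row IS here just rebalances the rows evenly; any other target partition
> works the same way, and Adist is what you hand to KSPSetOperators().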
>
>   Thanks,
>
>     Matt
>
> >  On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith <
> bsmith at petsc.dev> wrote:
> >
> >   If you use MatLoad() it never has the entire matrix on a single rank
> > at the same time; it efficiently gets the matrix from the file spread
> > out over all the ranks.
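> >
> >   Roughly, it looks like this (just a sketch, assuming the matrix was
> > written with MatView() into a PETSc binary file named "matrix.dat"):
> >
> >     Mat         A;
> >     PetscViewer viewer;
> >
> >     PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);
> >     MatCreate(PETSC_COMM_WORLD, &A);
> >     MatSetFromOptions(A);
> >     MatLoad(A, viewer);            /* each rank gets only its own rows */
> >     PetscViewerDestroy(&viewer);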
> >
> >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
> >>
> >> I am studying the examples but it seems all ranks read the full matrix.
> Is there an MPI example where only rank 0 reads the matrix?
> >>
> >> I don't want all ranks to read my input matrix and consume a lot of
> memory allocating data for the arrays.
> >>
> >> I have worked with Intel's cluster sparse solver and their
> documentation states:
> >>
> >> " Most of the input parameters must be set on the master MPI process
> only, and ignored on other processes. Other MPI processes get all required
> data from the master MPI process using the MPI communicator, comm. "
> >
> >
> >
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/