[petsc-users] Communication in parallel MatMatMult
Matthew Knepley
knepley at gmail.com
Wed Dec 1 08:45:56 CST 2021
On Wed, Dec 1, 2021 at 9:32 AM Barry Smith <bsmith at petsc.dev> wrote:
>
> PETSc uses Elemental to perform such operations.
>
> PetscErrorCode MatMatMultNumeric_Elemental(Mat A,Mat B,Mat C)
> {
>   Mat_Elemental   *a = (Mat_Elemental*)A->data;
>   Mat_Elemental   *b = (Mat_Elemental*)B->data;
>   Mat_Elemental   *c = (Mat_Elemental*)C->data;
>   PetscElemScalar one = 1,zero = 0;
>
>   PetscFunctionBegin;
>   { /* Scoping so that constructor is called before pointer is returned */
>     El::Gemm(El::NORMAL,El::NORMAL,one,*a->emat,*b->emat,zero,*c->emat);
>   }
>   C->assembled = PETSC_TRUE;
>   PetscFunctionReturn(0);
> }
>
>
> You can consult Elemental's documentation and papers for how it manages
> the communication.
>
Elemental relies on collective communication operations throughout; this is a
fundamental aspect of its design.
Thanks,
Matt
> Barry
>
>
> > On Dec 1, 2021, at 8:33 AM, Hannes Brandt <Hannes_Brandt at gmx.de> wrote:
> >
> > Hello,
> >
> >
> > I am interested in the communication scheme PETSc uses for the
> > multiplication of dense, parallel distributed matrices in MatMatMult. Is it
> > based on collective communication or on single calls to MPI_Send/Recv, and
> > is it done in a blocking or a non-blocking way? How do you make sure that
> > the processes do not receive/buffer too much data at the same time?
> >
> >
> > Best Regards,
> >
> > Hannes Brandt
> >
> >
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/