[petsc-dev] MatComputeExplicitOperator

Stefano Zampini stefano.zampini at gmail.com
Sun Apr 21 07:22:24 CDT 2019


Here you go:

https://bitbucket.org/petsc/petsc/pull-requests/1570/allow-specifying-an-operator-type-when/diff
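
Roughly, the change lets the caller pick the type of the sampled
operator. A minimal sketch of how such an interface could be used; the
function name MatComputeOperator and its exact signature below are an
assumption based on the PR title, not taken from the diff:

    #include <petscmat.h>

    /* Shell matrix action y = 2*x, a stand-in for an expensive MatMult. */
    static PetscErrorCode MyMult(Mat A, Vec x, Vec y)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = VecCopy(x, y);CHKERRQ(ierr);
      ierr = VecScale(y, 2.0);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    int main(int argc, char **argv)
    {
      Mat            A, B;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 10, 10, NULL, &A);CHKERRQ(ierr);
      ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMult);CHKERRQ(ierr);

      /* Assumed interface: sample the shell operator into an explicit
         matrix of a caller-chosen type, e.g. dense for easy inspection. */
      ierr = MatComputeOperator(A, MATDENSE, &B);CHKERRQ(ierr);
      ierr = MatView(B, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

      ierr = MatDestroy(&B);CHKERRQ(ierr);
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }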

On Sat, Apr 20, 2019 at 22:29 Smith, Barry F. <bsmith at mcs.anl.gov>
wrote:

>
>
> > On Apr 20, 2019, at 1:30 PM, Stefano Zampini <stefano.zampini at gmail.com>
> wrote:
> >
> > Using a preallocator will require looping twice to compute the entries,
>
>    No, that is totally impractical. The old code that Jed removed just
> preallocated the AIJ as dense, and that was fine for this purpose.
>
> 1) MatConvert_Shell() should call all of the possible preallocation
> routines (treating the matrix as dense)
> 2) MatComputeExplicitOperator() could get a new second argument that is a
> matrix type (then we don't need to argue
>     if it should use DENSE or AIJ).
>
>
>
>    Barry
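
For concreteness, "preallocating the AIJ as dense" amounts to reserving
every column in every row: the diagonal block gets the local column
count and the off-diagonal block the remainder. A sketch of point 1
above, not the actual MatConvert_Shell() code:

    #include <petscmat.h>

    /* Sketch: preallocate an AIJ matrix B as if it were dense.  Safe to
       call for both SEQAIJ and MPIAIJ, since each XXXSetPreallocation()
       is a no-op for other matrix types. */
    static PetscErrorCode PreallocateAsDense(Mat B)
    {
      PetscErrorCode ierr;
      PetscInt       n, N;

      PetscFunctionBeginUser;
      ierr = MatGetLocalSize(B, NULL, &n);CHKERRQ(ierr); /* local columns = diagonal block width */
      ierr = MatGetSize(B, NULL, &N);CHKERRQ(ierr);      /* global columns */
      ierr = MatSeqAIJSetPreallocation(B, N, NULL);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(B, n, NULL, N - n, NULL);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }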
>
> > and this may be very expensive.
> > In my case, every MatMult requires one forward and one backward TS solve.
>
>
> >
> > We can use either the AIJ or the DENSE format, but I think it should be
> the same for sequential and parallel runs, unlike the current behavior.
> >
> > On Sat, Apr 20, 2019 at 14:20 Matthew Knepley <knepley at gmail.com>
> wrote:
> > On Fri, Apr 19, 2019 at 5:58 PM Smith, Barry F. via petsc-dev <
> petsc-dev at mcs.anl.gov> wrote:
> >
> >
> >    I think MPIAIJ was selected because it provided more parallel
> functionality than MPIDENSE, for which many operations were not written.
> >    This may no longer be relevant.
> >
> >     The code definitely needs to be fixed, starting with MatConvert_Shell():
> > it should just assume the matrices are dense and preallocate for them.
> >
> > We should restore preallocation. It should be easy to do with
> Preallocator.
> >
> >   Matt
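
For context, the MATPREALLOCATOR pattern Matt refers to works in two
passes, which is exactly the double loop Stefano objects to above when
every entry is expensive. A sketch; GetRow() is a hypothetical user
routine producing one row of the operator (in Stefano's case each call
would cost one forward and one backward TS solve):

    #include <petscmat.h>

    #define MAXCOLS 64 /* assumed bound on nonzeros per row, for the sketch */

    /* Hypothetical: compute the column indices and values of row i. */
    extern PetscErrorCode GetRow(PetscInt i, PetscInt *ncols, PetscInt cols[], PetscScalar vals[]);

    /* A must be a fresh, not-yet-preallocated AIJ matrix with its
       type and sizes already set. */
    static PetscErrorCode FillViaPreallocator(Mat A)
    {
      PetscErrorCode ierr;
      Mat            p;
      PetscInt       i, rstart, rend, m, n, M, N, ncols, cols[MAXCOLS];
      PetscScalar    vals[MAXCOLS];

      PetscFunctionBeginUser;
      ierr = MatCreate(PetscObjectComm((PetscObject)A), &p);CHKERRQ(ierr);
      ierr = MatSetType(p, MATPREALLOCATOR);CHKERRQ(ierr);
      ierr = MatGetLocalSize(A, &m, &n);CHKERRQ(ierr);
      ierr = MatGetSize(A, &M, &N);CHKERRQ(ierr);
      ierr = MatSetSizes(p, m, n, M, N);CHKERRQ(ierr);
      ierr = MatSetUp(p);CHKERRQ(ierr);

      /* Pass 1: record the nonzero pattern (values are ignored). */
      ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
      for (i = rstart; i < rend; i++) {
        ierr = GetRow(i, &ncols, cols, vals);CHKERRQ(ierr);
        ierr = MatSetValues(p, 1, &i, ncols, cols, vals, INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(p, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(p, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      /* Transfer the recorded pattern to A. */
      ierr = MatPreallocatorPreallocate(p, PETSC_TRUE, A);CHKERRQ(ierr);
      ierr = MatDestroy(&p);CHKERRQ(ierr);

      /* Pass 2: compute the SAME entries again and insert them for real. */
      for (i = rstart; i < rend; i++) {
        ierr = GetRow(i, &ncols, cols, vals);CHKERRQ(ierr);
        ierr = MatSetValues(A, 1, &i, ncols, cols, vals, INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }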
> >
> >    Barry
> >
> > > On Apr 19, 2019, at 3:07 PM, Stefano Zampini via petsc-dev <
> petsc-dev at mcs.anl.gov> wrote:
> > >
> > > What is the rationale behind MatComputeExplicitOperator returning
> SEQDENSE in sequential and MPIAIJ in parallel?
> > >
> > > Also, before commit
> https://bitbucket.org/petsc/petsc/commits/b3d09e869df0e6ebcb615ca876706bfed4fcf1cd
> the MPIAIJ matrix was fully preallocated. Now, if we have a very dense
> operator that we want to sample just for testing purposes (inspecting
> entries, etc.), we have to pay the price of reallocating over and over.
> > >
> > > What is the proper fix? 1) Use MPIDENSE? 2) Restore full preallocation
> for MPIAIJ? 3) Have MatComputeExplicitOperator accept more arguments?
> > >
> > > I'm in favor of 1
> > >
> > > --
> > > Stefano
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
> >
> >
> > --
> > Stefano
>
>

-- 
Stefano