<div dir="ltr"><div dir="ltr">here you have</div><div dir="ltr"> <a href="https://bitbucket.org/petsc/petsc/pull-requests/1570/allow-specifying-an-operator-type-when/diff">https://bitbucket.org/petsc/petsc/pull-requests/1570/allow-specifying-an-operator-type-when/diff</a><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno sab 20 apr 2019 alle ore 22:29 Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
> On Apr 20, 2019, at 1:30 PM, Stefano Zampini <<a href="mailto:stefano.zampini@gmail.com" target="_blank">stefano.zampini@gmail.com</a>> wrote:<br>
> <br>
> Using the preallocator will require looping twice to compute the entries,<br>
<br>
No, that is totally impractical. The old code that Jed removed just preallocated the AIJ matrix as if it were dense, and that was fine for this purpose.<br>
<br>
1) MatConvert_Shell() should call all the preallocation routines (treating the matrix as dense)<br>
2) MatComputeExplicitOperator() could get a new second argument that is a matrix type (then we don't need to argue about whether it should use DENSE or AIJ).<br>
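A minimal sketch of what (1) might look like, assuming the usual PETSc pattern of calling every type's preallocation routine (each call is a no-op unless the matrix actually has that type); the helper name is hypothetical:

```c
#include <petscmat.h>

/* Hypothetical helper: preallocate an AIJ matrix as if it were dense,
   as the old MatConvert_Shell() code did. */
static PetscErrorCode PreallocateAsDense(Mat A)
{
  PetscInt       n, N;
  PetscErrorCode ierr;

  ierr = MatGetLocalSize(A, NULL, &n);CHKERRQ(ierr);
  ierr = MatGetSize(A, NULL, &N);CHKERRQ(ierr);
  /* Every row may be full: N entries per row sequentially; in parallel,
     n in the diagonal block and N - n in the off-diagonal block. */
  ierr = MatSeqAIJSetPreallocation(A, N, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, n, NULL, N - n, NULL);CHKERRQ(ierr);
  return 0;
}
```

This trades memory (full dense storage in AIJ format) for never having to revisit the preallocation, which matches the old behavior the thread describes.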
<br>
<br>
<br>
Barry<br>
<br>
> and this may be very expensive.<br>
> In my case, every MatMult requires one forward and one backward TS solve.<br>
<br>
<br>
> <br>
> We can use either the AIJ or the DENSE format, but I think it should be the same for sequential and parallel runs, not as it is now.<br>
> <br>
> On Sat, Apr 20, 2019 at 2:20 PM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
> On Fri, Apr 19, 2019 at 5:58 PM Smith, Barry F. via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov" target="_blank">petsc-dev@mcs.anl.gov</a>> wrote:<br>
> <br>
> <br>
> I think MPIAIJ was selected because it provided more parallel functionality than MPIDENSE, for which many operations were not written.<br>
> This may not be relevant any more. <br>
> <br>
> The code definitely needs to be fixed: MatConvert_Shell() should just assume <br>
> the matrices are dense and preallocate for them.<br>
> <br>
> We should restore preallocation. It should be easy to do with Preallocator.<br>
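A sketch of the Preallocator approach, assuming the same insertion loop is run twice, once into the preallocator and once into the real matrix (function name hypothetical):

```c
#include <petscmat.h>

/* Sketch: use MATPREALLOCATOR to size A, then fill A.  The same
   insertion loop must run twice, once per matrix. */
PetscErrorCode FillWithPreallocator(Mat A)
{
  Mat            p;
  PetscInt       m, n, M, N;
  PetscErrorCode ierr;

  ierr = MatGetLocalSize(A, &m, &n);CHKERRQ(ierr);
  ierr = MatGetSize(A, &M, &N);CHKERRQ(ierr);
  ierr = MatCreate(PetscObjectComm((PetscObject)A), &p);CHKERRQ(ierr);
  ierr = MatSetSizes(p, m, n, M, N);CHKERRQ(ierr);
  ierr = MatSetType(p, MATPREALLOCATOR);CHKERRQ(ierr);
  ierr = MatSetUp(p);CHKERRQ(ierr);
  /* Pass 1: insert the nonzero pattern (values are ignored). */
  /* ... MatSetValues(p, ...) for every entry ... */
  ierr = MatAssemblyBegin(p, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(p, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatPreallocatorPreallocate(p, PETSC_TRUE, A);CHKERRQ(ierr);
  ierr = MatDestroy(&p);CHKERRQ(ierr);
  /* Pass 2: insert the actual values into A and assemble it. */
  /* ... MatSetValues(A, ...) again, then MatAssemblyBegin/End ... */
  return 0;
}
```

The two passes are what makes this unattractive when each entry is expensive to compute, e.g. when every MatMult involves a TS solve as described above.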
> <br>
> Matt<br>
> <br>
> Barry<br>
> <br>
> > On Apr 19, 2019, at 3:07 PM, Stefano Zampini via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov" target="_blank">petsc-dev@mcs.anl.gov</a>> wrote:<br>
> > <br>
> > What is the rationale behind MatComputeExplicitOperator returning SEQDENSE in sequential and MPIAIJ in parallel?<br>
> > <br>
> > Also, before commit <a href="https://bitbucket.org/petsc/petsc/commits/b3d09e869df0e6ebcb615ca876706bfed4fcf1cd" rel="noreferrer" target="_blank">https://bitbucket.org/petsc/petsc/commits/b3d09e869df0e6ebcb615ca876706bfed4fcf1cd</a>, the MPIAIJ matrix was fully preallocated. Now, if we have a very dense operator that we want to sample just for testing purposes (inspecting entries, etc.), we have to pay the price of reallocating over and over.<br>
> > <br>
> > What is the proper fix? 1) Use MPIDENSE? 2) Restore full preallocation for MPIAIJ? 3) Have MatComputeExplicitOperator accept more arguments?<br>
> > <br>
> > I'm in favor of 1.<br>
> > <br>
> > -- <br>
> > Stefano<br>
> <br>
> <br>
> <br>
> -- <br>
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> -- Norbert Wiener<br>
> <br>
> <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
> <br>
> <br>
> -- <br>
> Stefano<br>
<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature">Stefano</div>