[petsc-users] Why use MATMPIBAIJ?

Mark Adams mfadams at lbl.gov
Fri Jan 22 09:27:53 CST 2016


>
>
>
> I said the Hypre setup cost is not scalable,
>

I'd be a little careful here.  Scaling the matrix triple product (the
Galerkin coarse-grid construction) is hard, and hypre does put effort into
making it scale. I don't have any data on this, however.  Do you?


> but it can be amortized over the iterations. You can quantify this
> just by looking at the PCSetUp time as you increase the number of
> processes. I don't think they have a good
> model for the memory usage, and if they do, I do not know what it is.
> However, Hypre generally takes more
> memory than agglomeration MG methods like ML or GAMG.
>
>
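
For what it is worth, here is a minimal sketch of how one could isolate that
PCSetUp time in the -log_summary (or -log_view) output. The function name and
the assumption that A, b, x are already assembled are mine:

  #include <petscksp.h>

  /* Sketch: force PCSetUp outside KSPSolve and wrap it in its own log
     stage so its time (and how it grows with the process count) is easy
     to read off from the log output. */
  PetscErrorCode TimeSetup(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PetscLogStage  stage;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* e.g. -pc_type hypre or -pc_type gamg */

    ierr = PetscLogStageRegister("PC setup",&stage);CHKERRQ(ierr);
    ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
    ierr = KSPSetUp(ksp);CHKERRQ(ierr);           /* PCSetUp happens here, not inside the solve */
    ierr = PetscLogStagePop();CHKERRQ(ierr);

    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);       /* the iterations amortize the setup cost */
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    return 0;
  }

Run it at a few process counts and compare the "PC setup" stage (or just the
PCSetUp event) across runs and across -pc_type choices.
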
Agglomeration methods tend to have lower "grid complexity", that is,
smaller coarse grids, than classical AMG like hypre's. This is more of a
constant-factor issue than a scaling issue, though, and you can address it
with parameters to some extent. But for elasticity, you want to at least
try, if not start with, GAMG or ML.
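
If you do try GAMG on an elasticity problem, the main thing is to give it the
rigid-body modes as the near-nullspace. A rough sketch, with a function name
of my own and assuming a 3D problem where the coordinate Vec has block size 3
(the spatial dimension) and holds the interleaved nodal coordinates:

  #include <petscksp.h>

  /* Sketch: attach the rigid-body modes to the operator so GAMG's
     smoothed aggregation has the right near-nullspace for elasticity. */
  PetscErrorCode SetElasticityNearNullSpace(Mat A, Vec coords)
  {
    MatNullSpace   nearnull;
    PetscErrorCode ierr;

    /* coords: nodal coordinates, block size = spatial dimension */
    ierr = MatNullSpaceCreateRigidBody(coords,&nearnull);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(A,nearnull);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);
    return 0;
  }

Then pick the preconditioner at run time with -pc_type gamg (or -pc_type ml).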


>   Thanks,
>
>     Matt
>
>
>>
>> Giang
>>
>> On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <jed at jedbrown.org> wrote:
>>
>>> Hoang Giang Bui <hgbk2008 at gmail.com> writes:
>>>
>>> > Why P2/P2 is not for co-located discretization?
>>>
>>> Matt typed "P2/P2" when he meant "P2/P1".
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>

