[petsc-users] Why use MATMPIBAIJ?

Matthew Knepley knepley at gmail.com
Fri Jan 22 11:01:35 CST 2016


On Fri, Jan 22, 2016 at 10:52 AM, Hom Nath Gharti <hng.email at gmail.com>
wrote:

> Dear all,
>
> I take this opportunity to ask for your important suggestion.
>
> I am solving the elastic-acoustic-gravity equations for the whole planet. I
> have the displacement vector (ux,uy,uz) in the solid region, the displacement
> potential (\xi) and pressure (p) in the fluid region, and the gravitational
> potential (\phi) in all of space. All of these variables are coupled.
>
> Currently, I am using MATMPIAIJ and form a single global matrix. Does
> using MATMPIBAIJ or MATNEST improve the convergence/efficiency in
> this case? For your information, the total number of degrees of freedom
> is about a billion.
>

1) For any solver question, we need to see the output of -ksp_view, and we
would also like to see

  -ksp_monitor_true_residual -ksp_converged_reason
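
For example, something like this (the executable name and process count are
just placeholders):

  mpiexec -n 1024 ./your_solver -ksp_view -ksp_monitor_true_residual \
      -ksp_converged_reason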

2) MATNEST does not affect convergence, and MATMPIBAIJ affects it only
    through the block size, which you could set without that format anyway
    (e.g. with MatSetBlockSize() on an AIJ matrix)

3) However, since you have multiphysics here, you might see a benefit from
using something like PCFIELDSPLIT (see the sketch below)
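
A minimal options sketch, assuming the fields (solid displacement, fluid
potential/pressure, gravitational potential) have already been registered as
splits, e.g. through a DM with fields or PCFieldSplitSetIS(); the split
numbering and the GAMG choices are only illustrative:

  -pc_type fieldsplit
  -pc_fieldsplit_type multiplicative
  -fieldsplit_0_ksp_type preonly
  -fieldsplit_0_pc_type gamg
  -fieldsplit_1_ksp_type preonly
  -fieldsplit_1_pc_type gamg

With unnamed splits the option prefixes are -fieldsplit_0_, -fieldsplit_1_,
and so on; if the fields are registered with names, the names replace the
numbers in the prefixes.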

   Matt


> Any suggestion would be greatly appreciated.
>
> Thanks,
> Hom Nath
>
> On Fri, Jan 22, 2016 at 10:32 AM, Matthew Knepley <knepley at gmail.com>
> wrote:
> > On Fri, Jan 22, 2016 at 9:27 AM, Mark Adams <mfadams at lbl.gov> wrote:
> >>>
> >>>
> >>>
> >>> I said the Hypre setup cost is not scalable,
> >>
> >>
> >> I'd be a little careful here. Scaling of the matrix triple product is
> >> hard, and hypre does put effort into scaling it. I don't have any data,
> >> however. Do you?
> >
> >
> > I used it for PyLith and saw this. I did not think any AMG had scalable
> > setup time.
> >
> >    Matt
> >
> >>>
> >>> but it can be amortized over the iterations. You can quantify this
> >>> just by looking at the PCSetUp time as you increase the number of
> >>> processes. I don't think they have a good model for the memory usage,
> >>> and if they do, I do not know what it is. However, Hypre generally takes
> >>> more memory than agglomeration MG methods like ML or GAMG.
> >>>
> >>
> >> Agglomeration methods tend to have lower "grid complexity", that is,
> >> smaller coarse grids, than classical AMG like in hypre. This is more of a
> >> constant-factor difference than a scaling issue, though. You can address
> >> this with parameters to some extent. But for elasticity, you want to at
> >> least try, if not start with, GAMG or ML.
> >>
> >>>
> >>>   Thanks,
> >>>
> >>>     Matt
> >>>
> >>>>
> >>>>
> >>>> Giang
> >>>>
> >>>> On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <jed at jedbrown.org> wrote:
> >>>>>
> >>>>> Hoang Giang Bui <hgbk2008 at gmail.com> writes:
> >>>>>
> >>>>> > Why P2/P2 is not for co-located discretization?
> >>>>>
> >>>>> Matt typed "P2/P2" when he meant "P2/P1".
> >>>>
> >>>>
> >>>
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener