[petsc-dev] Lower memory DMDA

Matthew Knepley knepley at gmail.com
Wed May 8 07:34:54 CDT 2013


On Wed, May 8, 2013 at 7:30 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> Matthew Knepley <knepley at gmail.com> writes:
>
> > On Tue, May 7, 2013 at 10:53 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> >
> >> This thread is complaining about DMDA costing multiple vectors' worth
> >> of memory.
> >>
> >> https://groups.google.com/d/msg/claw-dev/JwIjL5e48No/NOizc6i88gkJ
> >>
> >> We currently set a lot of stuff up eagerly so that it can be accessed
> >> using non-collective accessors.  When a Krylov method is used or a
> >> matrix is assembled, that stuff is in the noise, but for explicit
> >> methods, it can be the limiting factor for problem size.  Should we
> >> do something about this?
> >>
> >
> > I would at least like to know what it is, and how the interface would
> > have to change. This can't be all scatter memory.
>
> At the very least, it's global-to-local scatter, local-to-local scatter,
> and local-to-global mapping (scalar and block).
>

Well, the scatter operations are collective, so making their creation
lazy seems fine. I also have no problem requiring an explicit setup call
for the local-to-global mapping, since I do not think many people use it
directly.
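
To make the lazy-creation idea concrete, here is a minimal sketch in C,
assuming 3.x-era DMDA internals; the field name dd->gtol and the helper
DMDASetUpGtoL_Private() are illustrative, not the actual source:

/* Sketch of lazy global-to-local scatter creation (not the actual
 * PETSc source; dd->gtol and DMDASetUpGtoL_Private() are hypothetical).
 * The key point: every caller of this scatter is already collective,
 * so deferring creation to first use is safe -- all ranks reach the
 * branch together. */
#include <petsc-private/dmdaimpl.h>   /* private header, 3.x-era name */

static PetscErrorCode DMDAGetScatterGtoL_Lazy(DM dm, VecScatter *gtol)
{
  DM_DA          *dd = (DM_DA*)dm->data;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  if (!dd->gtol) {
    /* Collective on dm: the first DMGlobalToLocalBegin() pays the
       setup cost; explicit methods that never scatter pay nothing. */
    ierr = DMDASetUpGtoL_Private(dm);CHKERRQ(ierr); /* hypothetical helper */
  }
  *gtol = dd->gtol;
  PetscFunctionReturn(0);
}

DMGlobalToLocalBegin() would then call the lazy getter instead of
assuming the scatter already exists; since that routine is collective
anyway, no new synchronization is introduced.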

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener