[petsc-dev] ASM request
Barry Smith
bsmith at mcs.anl.gov
Tue Mar 16 13:20:18 CDT 2010
On Mar 16, 2010, at 1:12 PM, Matthew Knepley wrote:
> On Tue, Mar 16, 2010 at 1:51 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
>
> On Mar 16, 2010, at 11:28 AM, Matthew Knepley wrote:
>
> There is a request to make a more memory-efficient version of ASM
> for running very large systems of equations. They are using LU on
> small diagonal blocks, and have asked that the blocks be factored,
> applied, and discarded, rather than being saved for all iterations.
> Does anyone think this is easy in the current ASM code? Have an
> alternate proposal?
>
> I need a better understanding. Is it that after the KSPSolve()
> they want more memory available for their own use, which they will
> allocate after the KSPSolve() and free before the next PCSetUp()? If
> this is what they want then I think it is trivial to implement, but
> we will need an API so that they can say when they are done with the
> solves and want us to free up the memory.
>
> Normally ASM is memory-intensive because
> 1) it makes copies of the matrix subblocks
> 2) it factors those subblocks into additional memory.
>
> If you use ILU(0) and the in-place option then it only needs one
> copy of the blocks instead of two.
>
> Yes, I told them that, so that takes half the memory off. I guess
> the easiest way to give them this might be to put a shell matrix in
> the KSP that sucks out the submatrix, factors it in place, applies
> it, and discards it.
For each application of PCApply() you reform the beasty and factor it?
I've thought a little bit about the case of tiny overlapping
blocks before. Then even all the IS's start to be large memory
consumers.
It may be that we want a COMPLETELY different implementation of
ASM for tiny blocks, in the same way that for Jacobi/SOR we have a
separate implementation and don't just use block Jacobi (PCBJACOBI)
with tons of tiny blocks. Also, with tiny-block ASM you probably
want a multiplicative version instead of an additive version, to be
SOR-like instead of Jacobi-like. I don't see using a shell matrix
for this, just a completely different PC.
Barry
>
> Here is the motivation:
>
> You have a preconditioner which is ASM, but with a LOT of small
> blocks. You do not save very much by discarding one big block, but
> if you only ever form one small block at a time, you get a lot of
> savings.
>
> Matt
>
>
> Barry
>
>
> Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener