[petsc-users] Which preconditioners are scalable?
Matthew Knepley
knepley at gmail.com
Fri Mar 11 10:56:12 CST 2011
On Fri, Mar 11, 2011 at 9:52 AM, Sebastian Steiger <steiger at purdue.edu> wrote:
> Hello PETSc developers
>
> I'm doing some scaling benchmarks and I found that the parallel ASM
> preconditioner, my favorite preconditioner, has a limit on the number of
> cores it can handle.
>
> I am doing a numerical experiment where I scale up the size of my matrix
> by roughly the same factor as the number of CPUs employed. When I look
> at how much memory each function used, via PETSc's routine
> PetscMallocDumpLog, I see the following:
>
> Function name                       N=300 cores    N=600 cores    increase
> ===========================================================================
> MatGetSubMatrices_MPIAIJ_Local       75'912'016    134'516'928      1.77
> MatIncreaseOverlap_MPIAIJ_Once      168'288'288    346'870'832      2.06
> MatIncreaseOverlap_MPIAIJ_Receive     2'918'960      5'658'160      1.94
>
> The matrix sizes are 6'899'904 and 14'224'896, respectively. Above
> N~5000 CPUs I am running out of memory.
>
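For reference, the per-function log above comes from PETSc's malloc tracing, used
roughly as in the minimal sketch below. This is assumed code, not the actual program:
it presumes the run was launched with the run-time option -malloc_log so that the
per-function log is collected (newer PETSc releases renamed the call PetscMallocView).

  #include <petscsys.h>

  static char help[] = "Dump the per-function malloc log.\n";

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, (char *)0, help); CHKERRQ(ierr);

    /* ... assemble the matrix, set up the solver, solve ... */

    /* Print the cumulative bytes allocated inside each function.
       Needs the run-time option -malloc_log to have been given. */
    ierr = PetscMallocDumpLog(stdout); CHKERRQ(ierr);

    ierr = PetscFinalize();
    return 0;
  }
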
We have run ASM on 224,000 processors of the XT5 at ORNL, so something else
is going on here. The best thing to do is send us the -log_summary output. For
attachments, we usually recommend petsc-maint at mcs.anl.gov.
Matt
> Here's my question now: Is the asm preconditioner limited by the
> algorithm, or by the implementation? I thought that 'only' the local
> matrices, plus some constant overlap with neighbors, are solved, so that
> memory consumption should stay constant when I scale up with a constant
> number of rows per process.
>
> Best
> Sebastian
>
>
>
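The overlapping subdomain matrices the question refers to are built during PCSetUp by
exactly the routines showing up in the log above (MatIncreaseOverlap_MPIAIJ_*,
MatGetSubMatrices_MPIAIJ_Local). For concreteness, a minimal sketch of the corresponding
solver setup (assumed code with PETSc 3.1-era signatures, not the actual application):

  #include <petscksp.h>

  /* Solve A x = b with additive Schwarz; A is an assembled parallel
     MPIAIJ matrix, b and x are compatible vectors. */
  PetscErrorCode solve_with_asm(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
    ierr = PCSetType(pc, PCASM); CHKERRQ(ierr);
    ierr = PCASMSetOverlap(pc, 1); CHKERRQ(ierr);  /* same as -pc_asm_overlap 1 */
    ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);  /* subdomain solver via -sub_pc_type etc. */
    ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);
    ierr = KSPDestroy(ksp); CHKERRQ(ierr);         /* KSPDestroy(&ksp) in later releases */
    return 0;
  }

The overlap set here controls how many layers of neighbor rows each process fetches and
stores for its local subdomain solve, so per-process memory for those submatrices is
expected to track the local problem size rather than the total core count.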
--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener