[petsc-users] speedup for TS solver using DMDA
Matthew Knepley
knepley at gmail.com
Mon Sep 15 13:13:21 CDT 2014
On Mon, Sep 15, 2014 at 12:45 PM, Katy Ghantous <katyghantous at gmail.com>
wrote:
> Hi,
> I am using DMDA to run TS in parallel to solve a set of N equations. I am
> using DMDAGetCorners in the RHSFunction, with the stencil width set to 2,
> to solve a set of coupled ODEs on 30 cores.
> The machine has 32 cores (2 physical CPUs with 2x8 cores each, at 3.4 GHz
> per core).
> However, running with mpiexec on more than one core shows no speedup.
> Also, at the configure/test stage for PETSc on that machine, there was no
> speedup, and it reported only one node.
> Is there something wrong with how I configured PETSc, or is the approach
> inappropriate for the machine?
> I am not sure what files (or sections of the code) you would need to be
> able to answer my question.
>
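[Editor's note: for context, a minimal sketch of the kind of setup described in the question might look like the code below. The grid size, time integrator, boundary treatment, and right-hand side are illustrative assumptions, not the poster's actual model.]

    /* A minimal sketch (not the poster's code) of a 1D DMDA with stencil width 2
     * driving TS, with the RHS evaluated over the local range from DMDAGetCorners().
     * Grid size, time step, and the coupling in the RHS are placeholders. */
    #include <petscts.h>
    #include <petscdmda.h>

    static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
    {
      DM                 da;
      Vec                Ulocal;
      const PetscScalar *u;
      PetscScalar       *f;
      PetscInt           i, xs, xm, Mx;
      PetscErrorCode     ierr;

      PetscFunctionBeginUser;
      ierr = TSGetDM(ts, &da);CHKERRQ(ierr);
      ierr = DMDAGetInfo(da, NULL, &Mx, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
      /* Update ghost values so the width-2 stencil can be applied locally */
      ierr = DMGetLocalVector(da, &Ulocal);CHKERRQ(ierr);
      ierr = DMGlobalToLocalBegin(da, U, INSERT_VALUES, Ulocal);CHKERRQ(ierr);
      ierr = DMGlobalToLocalEnd(da, U, INSERT_VALUES, Ulocal);CHKERRQ(ierr);

      ierr = DMDAVecGetArrayRead(da, Ulocal, &u);CHKERRQ(ierr);
      ierr = DMDAVecGetArray(da, F, &f);CHKERRQ(ierr);
      ierr = DMDAGetCorners(da, &xs, NULL, NULL, &xm, NULL, NULL);CHKERRQ(ierr);
      for (i = xs; i < xs + xm; i++) {
        PetscScalar left  = (i > 1)      ? u[i-2] : 0.0;   /* placeholder boundary treatment */
        PetscScalar right = (i < Mx - 2) ? u[i+2] : 0.0;
        f[i] = -u[i] + 0.5*(left + right);                 /* placeholder coupling */
      }
      ierr = DMDAVecRestoreArray(da, F, &f);CHKERRQ(ierr);
      ierr = DMDAVecRestoreArrayRead(da, Ulocal, &u);CHKERRQ(ierr);
      ierr = DMRestoreLocalVector(da, &Ulocal);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    int main(int argc, char **argv)
    {
      DM             da;
      TS             ts;
      Vec            U;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      /* 1D DA: 1000 points, 1 dof, stencil width 2 (sizes are illustrative) */
      ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 1000, 1, 2, NULL, &da);CHKERRQ(ierr);
      ierr = DMSetFromOptions(da);CHKERRQ(ierr);
      ierr = DMSetUp(da);CHKERRQ(ierr);   /* required on recent PETSc; harmless earlier */
      ierr = DMCreateGlobalVector(da, &U);CHKERRQ(ierr);
      ierr = VecSet(U, 1.0);CHKERRQ(ierr);

      ierr = TSCreate(PETSC_COMM_WORLD, &ts);CHKERRQ(ierr);
      ierr = TSSetDM(ts, da);CHKERRQ(ierr);
      ierr = TSSetRHSFunction(ts, NULL, RHSFunction, NULL);CHKERRQ(ierr);
      ierr = TSSetType(ts, TSRK);CHKERRQ(ierr);
      ierr = TSSetTimeStep(ts, 0.01);CHKERRQ(ierr);
      ierr = TSSetMaxTime(ts, 1.0);CHKERRQ(ierr);   /* TSSetDuration() on older releases */
      ierr = TSSetExactFinalTime(ts, TS_EXACTFINALTIME_STEPOVER);CHKERRQ(ierr);
      ierr = TSSetFromOptions(ts);CHKERRQ(ierr);
      ierr = TSSolve(ts, U);CHKERRQ(ierr);

      ierr = VecDestroy(&U);CHKERRQ(ierr);
      ierr = TSDestroy(&ts);CHKERRQ(ierr);
      ierr = DMDestroy(&da);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

Built against PETSc in the usual way, a program like this can be run with, e.g., mpiexec -n 1 and -n 16 plus -log_view (-log_summary on older releases) to see where the time goes.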
The kind of code you describe sounds memory-bandwidth limited. More
information is here:
http://www.mcs.anl.gov/petsc/documentation/faq.html#computers
The STREAMS benchmark should give you an idea of the bandwidth, and running
it on 2 processes vs. 1 should give you an idea of the speedup to expect, no
matter how many cores you use.
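[Editor's note: to make the bandwidth argument concrete, the sketch below is a minimal triad-style probe in the same spirit as STREAMS; it is not the STREAMS benchmark that ships with PETSc, and the array size is an arbitrary assumption. Run it with mpiexec -n 1, then -n 2, -n 4, ...; once the aggregate MB/s stops growing, a bandwidth-limited code will see little further speedup from extra cores.]

    /* Illustrative triad-style bandwidth probe (not PETSc's STREAMS benchmark).
     * Compile with an MPI compiler, e.g. mpicc -O2 triad.c -o triad. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 10000000   /* per-rank vector length, large enough to exceed the caches */

    int main(int argc, char **argv)
    {
      int     rank, size;
      double *a, *b, *c, t0, t1, mbps;
      long    i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      a = malloc(N * sizeof(double));
      b = malloc(N * sizeof(double));
      c = malloc(N * sizeof(double));
      for (i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < N; i++) a[i] = b[i] + 3.0 * c[i];   /* triad: three vectors streamed */
      MPI_Barrier(MPI_COMM_WORLD);
      t1 = MPI_Wtime();

      /* Aggregate rate across ranks: each rank moves 3*N doubles */
      mbps = size * 3.0 * N * sizeof(double) / (t1 - t0) / 1.0e6;
      if (!rank) printf("%d ranks: ~%.0f MB/s aggregate (a[0]=%g)\n", size, mbps, a[0]);

      free(a); free(b); free(c);
      MPI_Finalize();
      return 0;
    }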
Thanks,
Matt
> Thank you!
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener