[petsc-users] Slower performance in multi-node system
Matthew Knepley
knepley at gmail.com
Wed Feb 3 13:43:54 CST 2021
On Wed, Feb 3, 2021 at 2:42 PM Luciano Siqueira <luciano.siqueira at usp.br>
wrote:
> Hello,
>
> I'm evaluating the performance of an application in a distributed
> environment and I notice that it is much slower when running on many
> nodes/cores than on a single node with fewer cores.
>
> When running the application on 20 nodes, the Main Stage time reported
> in PETSc's log is up to 10 times longer than when running the same
> application on only 1 node, even with fewer cores per node.
>
> The application I'm running is an example code provided by libmesh:
>
> http://libmesh.github.io/examples/introduction_ex4.html
>
> The application runs inside a Singularity container, with Open MPI 4.0.3
> and PETSc 3.14.3. The distributed processes are managed by Slurm
> 17.02.11, and each node is equipped with two Intel Xeon E5-2695 v2 Ivy
> Bridge CPUs (12 cores @ 2.4 GHz) and 128 GB of RAM, with all
> communication going through InfiniBand.
>
> My questions are: Is the slowdown expected? Should the application be
> specially tailored to work well in distributed environments?
>
> Also, where (perhaps in the PETSc documentation or source code) can I find
> information on how PETSc handles MPI communication? Do the KSP solvers
> favor point-to-point communication over broadcast messages, or
> vice versa? I suspect inter-process communication is the cause of the
> poor performance on many nodes, but I would not expect the slowdown to be
> this severe.
>
> Thank you in advance!
>
We can't say anything about the performance without some data. Please send
us the output of -log_view for both cases.
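For example, a minimal launch line (the binary name ex4 and the mpirun
invocation are just placeholders; adapt them to your srun/Singularity setup):

    mpirun -np 24 ./ex4 -log_view

Run it once for the single-node case and once for the 20-node case so the
two logs can be compared stage by stage.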
Thanks,
Matt
> Luciano.
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/