[petsc-users] Good performance For small problem on GALILEO

Justin Chang jychang48 at gmail.com
Fri Oct 20 01:55:09 CDT 2017


600 unknowns is way too small to parallelize. You need at least 10,000
unknowns per MPI process:
https://www.mcs.anl.gov/petsc/documentation/faq.html#slowerparallel
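As a back-of-the-envelope illustration of that rule of thumb (not part of the original thread; the even row distribution is an assumption), here is how little local work each rank gets for a 600-unknown problem:

```python
# Rough illustration of the guideline above: with 600 unknowns, even a
# handful of MPI ranks leaves far too little local work per rank, so
# communication and latency dominate any parallel gain.

GUIDELINE = 10_000  # unknowns per MPI process (PETSc FAQ rule of thumb)

def unknowns_per_rank(n_unknowns: int, n_ranks: int) -> int:
    """Approximate local problem size, assuming an even row distribution."""
    return n_unknowns // n_ranks

for ranks in (1, 2, 4, 8):
    local = unknowns_per_rank(600, ranks)
    verdict = "ok" if local >= GUIDELINE else "too small"
    print(f"{ranks} ranks -> ~{local} unknowns/rank ({verdict})")
```

Even on a single rank, 600 unknowns is well under the guideline, which is why the single-core and multi-core timings below come out nearly identical.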

What problem are you solving? Sounds like you either compiled PETSc with
debugging mode on or you just have a really terrible solver. Show us the
output of -log_view.
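For reference, a typical way to collect that profile is to add the option on the command line (the executable name and rank count here are placeholders, not from the thread):

```shell
# Hypothetical invocation: replace ./my_solver with the actual executable.
# -log_view makes PETSc print a performance summary at the end of the run.
mpirun -n 4 ./my_solver -log_view
```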

On Fri, Oct 20, 2017 at 12:47 AM Luca Verzeroli <
l.verzeroli at studenti.unibg.it> wrote:

> Good morning,
> For my thesis I'm dealing with GALILEO, one of the clusters owned by
> Cineca. http://www.hpc.cineca.it/hardware/galileo
> The first question is: what is the best configuration to run PETSc on this
> kind of cluster? My code is a pure MPI program, and I would like to know
> whether it is better to use more nodes or more CPUs with mpirun.
> This question comes from the speedup of my code on that cluster. I have a
> small problem: the global matrices are 600x600. Are they too small to see
> a speedup with more MPI processes? I notice that a single-core simulation
> and a multi-core one take a similar time (the multi-core run is a second
> longer). The real problem comes when I have to run multiple simulations of
> the same code, changing some parameters, so I would like to speed up the
> single simulation.
> Any advice?
>
>
> Luca Verzeroli
>

