[petsc-users] Increasing parallel speed-up

Haren, S.W. van (Steven) vanharen at nrg.eu
Mon Jul 4 14:32:16 CDT 2011


Thank you for your reply, Jed.

I will take a look at the preconditioners to see if I can improve the scaling.

The CPU is an Intel i7 Q720, just a standard laptop CPU.

Regards,

Steven
---------------------------
Date: Mon, 4 Jul 2011 12:24:56 -0500
From: Jed Brown <jedbrown at mcs.anl.gov>
Subject: Re: [petsc-users] Increasing parallel speed-up
To: PETSc users list <petsc-users at mcs.anl.gov>
Message-ID:
        <CAM9tzSnBAmOdwxKEz-_BA9o+SvQHP69eEE50ozQ3LVFor0eSBQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Mon, Jul 4, 2011 at 12:09, Haren, S.W. van (Steven) <vanharen at nrg.eu> wrote:

> one of the ksp solvers (Conjugate Gradient method with ILU(0)
> preconditioning) gives poor parallel performance for the
>

We need to identify how much of the poor scaling is due to the preconditioner
changing in parallel (serial ILU(0) becomes block Jacobi with ILU(0) on each
process) so that more iterations are needed, versus how much is due to memory
bandwidth. Run with -ksp_monitor or -ksp_converged_reason to see the iteration
counts. You can try -pc_type asm (or algebraic multigrid using third-party
libraries) to improve the iteration count.

If you want help seeing what's going on, send -log_summary output for each
case.
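A minimal sketch of the experiment described above, assuming a PETSc
executable (called ./app here as a placeholder; the name is hypothetical).
The helper only prints the command lines so you can paste the ones you want:

```shell
# Hypothetical sketch: ./app stands in for your own PETSc executable.
# run_case prints a command line rather than executing it.
run_case () {
  np="$1"; shift
  echo "mpiexec -n $np ./app $*"
}

# Watch the iteration count as the process count grows (block Jacobi +
# ILU(0) is the default parallel analogue of serial ILU(0)):
run_case 1 -ksp_type cg -pc_type ilu -ksp_converged_reason
run_case 4 -ksp_type cg -pc_type bjacobi -sub_pc_type ilu -ksp_converged_reason
# Stronger preconditioner to try, as suggested above:
run_case 4 -ksp_type cg -pc_type asm -sub_pc_type ilu -ksp_converged_reason
# For a performance breakdown to post to the list:
run_case 4 -ksp_type cg -pc_type asm -sub_pc_type ilu -log_summary
```

If the iteration count is roughly flat from 1 to 4 processes but the runtime
barely improves, the bottleneck is the hardware rather than the algorithm.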


> following settings:
>
> - number of unknowns ~ 2 million
> - 1, 2 and 4 processors (quad core CPU)
>

What kind? In particular, what memory bus and how many channels? Sparse
matrix kernels are overwhelmingly limited by memory performance, so extra
cores do very little good unless the memory system is very good (or the
matrix fits in cache).
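One way to separate the memory-bandwidth question from the solver question is
to measure bandwidth directly. PETSc ships a STREAMS-style benchmark; the path
and make target below are from the petsc-3.x source tree and may differ in
your version, so treat this as a hedged sketch:

```shell
# Print the benchmark invocation (path/target assumed from petsc-3.x;
# adjust for your installation):
streams_cmd () {
  echo "cd \$PETSC_DIR/src/benchmarks/streams && make streams NPMAX=$1"
}
streams_cmd 4
# If the reported bandwidth barely grows from 1 to 4 processes, sparse
# matrix-vector products will not speed up much either, regardless of
# the preconditioner.
```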

