[petsc-users] Fwd: strange PETSc/KSP GMRES timings for MPI+OMP configuration on KNLs

Matthew Knepley knepley at gmail.com
Mon Jun 19 07:50:25 CDT 2017


On Mon, Jun 19, 2017 at 6:42 AM, Damian Kaliszan <damian at man.poznan.pl>
wrote:

> Hi,
>
> Regarding my previous post,
> I looked into both logs, 64 MPI/1 OMP vs. 64 MPI/2 OMP.
>
>
> What attracted my attention is the huge difference in MPI timings in the
> following places:
>
> Average time to get PetscTime(): 2.14577e-07
> Average time for MPI_Barrier(): 3.9196e-05
> Average time for zero size MPI_Send(): 5.45382e-06
>
> vs.
>
> Average time to get PetscTime(): 4.05312e-07
> Average time for MPI_Barrier(): 0.348399
> Average time for zero size MPI_Send(): 0.029937
>
> Isn't something wrong with the PETSc library itself?
>

 I don't think so. This is a bad interaction between MPI and your threading
mechanism. MPI_Barrier() and MPI_Send() are lower level than PETSc. What
threading mode did you choose for MPI? This can have a performance impact.
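
For example, something like the following (a minimal sketch, assuming mpi4py
is installed in the same environment as petsc4py) reports the thread support
level your MPI library actually granted; the constants are the standard MPI
ones:

from mpi4py import MPI

# Map the standard MPI thread-support levels to readable names.
levels = {
    MPI.THREAD_SINGLE:     "MPI_THREAD_SINGLE",
    MPI.THREAD_FUNNELED:   "MPI_THREAD_FUNNELED",
    MPI.THREAD_SERIALIZED: "MPI_THREAD_SERIALIZED",
    MPI.THREAD_MULTIPLE:   "MPI_THREAD_MULTIPLE",
}

if MPI.COMM_WORLD.rank == 0:
    # Query_thread() reports the level the MPI library granted, which may
    # differ from the level requested at initialization.
    print("Granted MPI thread support:", levels[MPI.Query_thread()])

MPI_THREAD_MULTIPLE in particular can add noticeable overhead with some MPI
builds, so it is worth knowing which level you are actually running with.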

Also, the justifications for threading in this context are weak (or
non-existent):
http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf
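
The "Average time for MPI_Barrier()" and "zero size MPI_Send()" lines come
from PETSc's -log_view performance summary, so you can time the same
primitives outside PETSc to confirm the slowdown is at the MPI level. A rough
sketch (again assuming mpi4py; the repetition count is arbitrary):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 100
buf = bytearray(0)                 # zero-size message payload

# Average MPI_Barrier() time over 'reps' repetitions.
comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    comm.Barrier()
barrier_avg = (MPI.Wtime() - t0) / reps

# Average zero-size send time, ping-ponged between ranks 0 and 1.
comm.Barrier()                     # all ranks synchronize before timing
send_avg = 0.0
if comm.Get_size() > 1 and rank in (0, 1):
    other = 1 - rank
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send([buf, MPI.BYTE], dest=other, tag=0)
            comm.Recv([buf, MPI.BYTE], source=other, tag=0)
        else:
            comm.Recv([buf, MPI.BYTE], source=other, tag=0)
            comm.Send([buf, MPI.BYTE], dest=other, tag=0)
    send_avg = (MPI.Wtime() - t0) / (2 * reps)

if rank == 0:
    print("avg MPI_Barrier: %.3g s, avg zero-size send: %.3g s"
          % (barrier_avg, send_avg))

If you run this with the same launcher settings as the PETSc job (64 ranks,
OMP_NUM_THREADS=1 vs. 2) and the barrier time jumps the same way, the problem
is in the MPI/threading/affinity setup rather than in PETSc.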

  Thanks,

    Matt


>
> Best,
> Damian
>
> Forwarded message
> From: Damian Kaliszan <damian at man.poznan.pl>
> To: PETSc users list <petsc-users at mcs.anl.gov>
> Date: 16 June 2017, 14:57:10
> Subject: [petsc-users] strange PETSc/KSP GMRES timings for MPI+OMP
> configuration on KNLs
> ===8<===============Original message content===============
> Hi,
>
> For several days I've been trying to figure out what is going wrong with
> the timings of my Python app, which solves Ax=b with the KSP (GMRES)
> solver, when running on Intel's KNL 7210/7230.
>
> I downsized the problem to a 1000x1000 A matrix and a single node and
> observed the following:
>
>
> I'm attaching two extreme timings whose configurations differ only by one
> OMP thread (64 MPI/1 OMP vs. 64 MPI/2 OMP),
> Slurm task IDs 23321 vs. 23325.
>
> Any help would be appreciated.
>
> Best,
> Damian
>
> ===8<===========End of original message content===========
>
>
>
> -------------------------------------------------------
> Damian Kaliszan
>
> Poznan Supercomputing and Networking Center
> HPC and Data Centres Technologies
> ul. Jana Pawła II 10
> 61-139 Poznan
> POLAND
>
> phone (+48 61) 858 5109
> e-mail damian at man.poznan.pl
> www - http://www.man.poznan.pl/
> -------------------------------------------------------
>
>
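
For reference, a toy petsc4py setup along the lines described above (not your
actual code; it just assembles a small stand-in tridiagonal A and a constant
b, solves with GMRES, and picks up -log_view and other options from the
command line):

import sys
import petsc4py
petsc4py.init(sys.argv)            # pass -log_view, -ksp_*, -pc_* through
from petsc4py import PETSc

n = 1000                           # problem size used in the report

# Toy tridiagonal stand-in for A; the real matrix is not shown in the thread.
A = PETSc.Mat().createAIJ([n, n])
A.setPreallocationNNZ(3)
A.setOption(PETSc.Mat.Option.NEW_NONZERO_ALLOCATION_ERR, False)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    A.setValue(i, i, 2.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.GMRES)
ksp.setFromOptions()
ksp.solve(b, x)

PETSc.Sys.Print("iterations: %d  residual norm: %g"
                % (ksp.getIterationNumber(), ksp.getResidualNorm()))

Running this with 64 ranks and OMP_NUM_THREADS=1 vs. 2, together with
-log_view, should produce the same kind of performance summary as the
attached logs and make the MPI timing lines easy to compare directly.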


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/