[petsc-users] strange PETSc/KSP GMRES timings for MPI+OMP configuration on KNLs
Damian Kaliszan
damian at man.poznan.pl
Mon Jun 19 13:31:03 CDT 2017
Yes, very strange. I tested it with Intel MPI and ParastationMPI, both available on the cluster.
The output log I sent may show something interesting (?)
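
For reference, below is a minimal petsc4py sketch of the kind of GMRES solve I am timing (not my actual app; the matrix entries, RHS and timing approach are just placeholders):

# A minimal sketch: assemble a small test matrix in parallel and time an
# Ax = b solve with KSP/GMRES via petsc4py. Matrix entries, RHS and the
# timing approach are placeholders, not the original application.
import time
from petsc4py import PETSc

n = 1000  # downsized problem size

A = PETSc.Mat().createAIJ([n, n], nnz=3, comm=PETSc.COMM_WORLD)
A.setUp()
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):          # simple diagonally dominant stand-in matrix
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
    A[i, i] = 4.0
A.assemble()

x, b = A.createVecs()
b.set(1.0)

ksp = PETSc.KSP().create(comm=PETSc.COMM_WORLD)
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.GMRES)
ksp.setFromOptions()                   # keep -ksp_view / -log_view etc. usable

PETSc.COMM_WORLD.barrier()             # rough wall-clock timing of the solve only
t0 = time.time()
ksp.solve(b, x)
PETSc.COMM_WORLD.barrier()
PETSc.Sys.Print('KSP (GMRES) solve time: %.2f s' % (time.time() - t0))

The idea is just to isolate the KSP solve time per MPI/OMP configuration; the real app and matrices differ.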
Best,
Damian
On 19 Jun 2017, at 19:53, Satish Balay <balay at mcs.anl.gov> wrote:
>MPI=16 OMP=1 time=45.62.
>
>This timing [without OpenMP] looks out of place. Perhaps something
>else [weird MPI behavior?] is going on here..
>
>Satish
>
>On Fri, 16 Jun 2017, Damian Kaliszan wrote:
>
>> Hi,
>>
>> For several days I've been trying to figure out what is going wrong
>> with the timings of my Python app, which solves Ax=b with the KSP (GMRES)
>> solver, when running on Intel's KNL 7210/7230.
>>
>> I downsized the problem to a 1000x1000 A matrix and a single node
>> and observed the following:
>>
>>
>> I'm attaching 2 extreme timings where the configurations differ only by
>> one OMP thread (64 MPI / 1 OMP vs 64 MPI / 2 OMP),
>> slurm task ids 23321 vs 23325.
>>
>> Any help will be appreciated....
>>
>> Best,
>> Damian
>>