[petsc-users] Scalability problem using PETSc with local installed OpenMPI
Barry Smith
bsmith at petsc.dev
Tue Oct 10 09:39:08 CDT 2023
Run STREAMS with
MPI_BINDING="-map-by socket --bind-to core --report-bindings" make mpistreams
and send the result.
Also run
lscpu
numactl -H
if they are available on your machine, and send the results.
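For a quick standalone sanity check that the binding flags are honored (a sketch, assuming OpenMPI's mpiexec; hostname is just a stand-in for any program):

mpiexec -n 4 --map-by socket --bind-to core --report-bindings hostname

Each rank should print the cores it is bound to before hostname runs.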
> On Oct 10, 2023, at 10:17 AM, Gong Yujie <yc17470 at connect.um.edu.mo> wrote:
>
> Dear Barry,
>
> I tried to use the binding suggested by PETSc:
> mpiexec -n 4 --map-by socket --bind-to socket --report-bindings
> but it does not seem to improve the performance. Here is the make stream log:
>
> Best Regards,
> Yujie
>
> mpicc -o MPIVersion.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -I/home/tt/petsc-3.16.0/include -I/home/tt/petsc-3.16.0/arch-linux-c-opt/include `pwd`/MPIVersion.c
> Running streams with 'mpiexec --oversubscribe ' using 'NPMAX=16'
> 1 26119.1937 Rate (MB/s)
> 2 29833.4281 Rate (MB/s) 1.1422
> 3 65338.5050 Rate (MB/s) 2.50155
> 4 59832.7482 Rate (MB/s) 2.29076
> 5 48629.8396 Rate (MB/s) 1.86184
> 6 58569.4289 Rate (MB/s) 2.24239
> 7 63827.1144 Rate (MB/s) 2.44369
> 8 57448.5349 Rate (MB/s) 2.19948
> 9 61405.3273 Rate (MB/s) 2.35097
> 10 68021.6111 Rate (MB/s) 2.60428
> 11 71289.0422 Rate (MB/s) 2.72937
> 12 76900.6386 Rate (MB/s) 2.94422
> 13 80198.6807 Rate (MB/s) 3.07049
> 14 64846.3685 Rate (MB/s) 2.48271
> 15 83072.8631 Rate (MB/s) 3.18053
> 16 70128.0166 Rate (MB/s) 2.68492
> ------------------------------------------------
> Traceback (most recent call last):
>   File "process.py", line 89, in <module>
>     process(sys.argv[1],len(sys.argv)-2)
>   File "process.py", line 33, in process
>     speedups[i] = triads[i]/triads[0]
> TypeError: 'dict_values' object does not support indexing
> make[2]: [makefile:47: mpistream] Error 1 (ignored)
> Traceback (most recent call last):
>   File "process.py", line 89, in <module>
>     process(sys.argv[1],len(sys.argv)-2)
>   File "process.py", line 33, in process
>     speedups[i] = triads[i]/triads[0]
> TypeError: 'dict_values' object does not support indexing
> make[2]: [makefile:79: mpistreams] Error 1 (ignored)
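> (The TypeError at the end appears to be a Python 3 incompatibility in process.py rather than a problem with the runs themselves: 'dict_values' does not support indexing in Python 3, so the values would need to be converted with list() before line 33 indexes them. The rates above are still printed, so the benchmark itself completed.)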
> From: Barry Smith <bsmith at petsc.dev>
> Sent: Tuesday, October 10, 2023 9:59 PM
> To: Gong Yujie <yc17470 at connect.um.edu.mo>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Scalability problem using PETSc with local installed OpenMPI
>
>
> Take a look at https://petsc.org/release/faq/#what-kind-of-parallel-computers-or-clusters-are-needed-to-use-petsc-or-why-do-i-get-little-speedup
>
> Check the binding that OpenMPI is using (as an aside, there are much more recent OpenMPI releases; I suggest upgrading to one of them). Run the STREAMS benchmark as indicated on that page.
>
> Barry
>
>
>> On Oct 10, 2023, at 9:27 AM, Gong Yujie <yc17470 at connect.um.edu.mo> wrote:
>>
>> Dear PETSc developers,
>>
>> I installed OpenMPI3 first and then installed PETSc with that MPI. Currently I am facing a scalability issue. In detail: I tested the addition of two distributed arrays using OpenMPI directly and got good scalability, but when I compute the addition of two vectors in PETSc I see no scalability at all. For the same problem size, PETSc takes much more time than plain OpenMPI.
>>
>> My PETSc version is 3.16.0 and the OpenMPI version is 3.1.4. I hope you can give me some suggestions.
>>
>> Best Regards,
>> Yujie
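For reference, a minimal sketch of the PETSc side of such a comparison (illustrative only: the vector size and the use of VecAXPY are assumptions, not the poster's actual test; error checking follows the CHKERRQ style of the 3.16 release):

/* Time a vector addition y = y + x on two distributed PETSc vectors,
   so the result can be compared against a hand-rolled MPI array addition
   of the same global size. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x, y;
  PetscInt       n = 10000000;   /* global size; adjust to match the MPI test */
  PetscLogDouble t0, t1;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = VecCreate(PETSC_COMM_WORLD, &x); CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, n); CHKERRQ(ierr);
  ierr = VecSetFromOptions(x); CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y); CHKERRQ(ierr);
  ierr = VecSet(x, 1.0); CHKERRQ(ierr);
  ierr = VecSet(y, 2.0); CHKERRQ(ierr);

  ierr = PetscTime(&t0); CHKERRQ(ierr);
  ierr = VecAXPY(y, 1.0, x); CHKERRQ(ierr);   /* y <- y + x; no communication */
  ierr = PetscTime(&t1); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "VecAXPY time: %g s\n", (double)(t1 - t0)); CHKERRQ(ierr);

  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = VecDestroy(&y); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Since VecAXPY performs no communication, its parallel speedup is limited almost entirely by memory bandwidth, which is what the STREAMS numbers above measure.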