[petsc-users] Not getting scalability.

Dave May dave.mayhem23 at gmail.com
Mon Mar 7 08:39:09 CST 2011


And depending on how you configured your solver, it might make a
difference what preconditioner you were using.

Were you
1) measuring the time taken to perform a fixed number of Krylov iterations?, or
2) measuring the time taken for the solve to converge?

If you did the latter and used the default PETSc preconditioner (block
Jacobi/ILU in parallel), the number of Krylov iterations required to
converge will increase as you strong scale, since the block size
decreases with increasing number of cpus, thereby making the
preconditioner weaker.
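For reference, the two measurement modes above can be selected from the
command line. A sketch using standard PETSc runtime options (`./myapp` is a
placeholder for your executable; exact option names may vary slightly
between PETSc versions -- circa 2011, `-log_summary` is the profiling flag,
renamed `-log_view` in later releases):

```shell
# (1) Fixed iteration count: skip the convergence test and run exactly
#     100 Krylov iterations, so timings compare pure compute/communication
#     cost independent of preconditioner quality.
mpiexec -n 32 ./myapp -ksp_type gmres -ksp_max_it 100 \
    -ksp_convergence_test skip -log_summary

# (2) Time to convergence with the default parallel preconditioner
#     (block Jacobi with ILU(0) on each subdomain): iteration counts
#     typically grow as the per-process block shrinks.
mpiexec -n 32 ./myapp -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu \
    -ksp_rtol 1e-8 -log_summary
```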

Cheers,
  Dave


On 7 March 2011 15:29, Stephen Wornom <stephen.wornom at inria.fr> wrote:
> Gaurish Telang wrote:
>>
>> Hi,
>>
>> I have been testing PETSc's scalability on clusters for matrices of sizes
>> 2000, 10,000, up to 60,000.
>>
>> All I did was try to solve Ax=b for these matrices. I found that the
>> solution time drops if I use up to 16 or 32 processors. However, for a
>> larger number of processors the solution time seems to go up rather than
>> down. Is there any way I can make my code strongly scalable?
>
> As the number of processors increases, the work/processor decreases.
> Therefore the communication time (mpi) will increase and the total time will
> increase slightly.
> Hope this helps,
> Stephen
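
Stephen's point can be illustrated with a toy strong-scaling model (the
constants and the cost formula below are made up purely for illustration,
not measured from PETSc): local work shrinks as n/p, while the
communication term, dominated by the log2(p)-depth reductions in Krylov
dot products, grows with p, so total time dips and then rises again.

```python
import math

def strong_scaling_time(n, p, t_flop=1e-9, t_msg=1e-5):
    """Toy model of one Krylov iteration on p processes.

    compute: local work, proportional to the n/p rows owned locally.
    comm:    cost of global reductions, roughly log2(p) message latencies.
    All constants are illustrative, not measured.
    """
    compute = (n / p) * t_flop
    comm = t_msg * math.log2(p) if p > 1 else 0.0
    return compute + comm

# For a 60,000-row problem the modelled time drops going from 1 to a
# handful of processes, then climbs once communication dominates the
# shrinking local work.
times = {p: strong_scaling_time(60000, p) for p in (1, 8, 64, 512)}
```

The exact crossover point depends on the machine's latency/flop-rate
ratio, but the qualitative dip-then-rise matches what Gaurish observed.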
>>
>> I am measuring the total time (sec) and KSP_SOLVE time in the -log_summary
>> output. Both times show the same behaviour described above.
>> Gaurish
>>
>
>
> --
> stephen.wornom at inria.fr
> 2004 route des lucioles - BP93
> Sophia Antipolis
> 06902 CEDEX
>
> Tel: 04 92 38 50 54
> Fax: 04 97 15 53 51
>
>

