[petsc-users] Investigate parallel code to improve parallelism

Matthew Knepley knepley at gmail.com
Fri Feb 26 10:41:31 CST 2016


On Fri, Feb 26, 2016 at 10:27 AM, TAY wee-beng <zonexo at gmail.com> wrote:

>
> On 26/2/2016 11:32 PM, Barry Smith wrote:
>
>> On Feb 26, 2016, at 9:28 AM, TAY wee-beng <zonexo at gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I have a 3D code. When I ran it with 48 procs and 11 million cells, it
>>> ran for 83 min. When I ran it with 96 procs and 22 million cells, it ran for
>>> 99 min.
>>>
>>     This is actually pretty good!
>>
> But if I'm not wrong, as I increase the number of cells, the parallel
> efficiency will keep decreasing. I hope it scales up to maybe 300 - 400 procs.
>

100% efficiency does not happen, and here you get 83%. You could probably
get that into the 90s depending on your algorithm, but this
is pretty good.

   Matt
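
(For reference, this is essentially a weak-scaling comparison: both runs have
roughly the same number of cells per process, about 230,000, so ideal scaling
would keep the runtime at 83 min. The measured efficiency is therefore
83 min / 99 min, i.e. roughly 84%.)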


>>> So it's not that parallel. I want to find out which parts of the code I
>>> need to improve, and also whether PETSc and hypre are working well in
>>> parallel. What's the best way to do that?
>>>
>>    Run both with -log_summary and send the output for each case. This
>> will show where the time is being spent and which parts are scaling less
>> well.
>>
>>     Barry
>>
> That's only for the PETSc part, right? So for other parts of the code,
> including the hypre part, I won't be able to find out where the time goes.
> If so, what can I use to check these parts?
>
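
As a rough sketch of one way to cover the non-PETSc parts as well: PETSc's
logging can be extended with user-defined stages, so that time spent in your
own routines appears as separate sections of the -log_summary output. If hypre
is used through PETSc's preconditioner interface (-pc_type hypre), its time
should already be counted under the PCSetUp and PCApply events; if it is
called directly, wrapping those calls in a user stage as below captures it.
The routine names assemble_rhs and solve_pressure are placeholders, not taken
from the actual code:

    #include <petscsys.h>

    /* Placeholder user routines; the names are illustrative only. */
    extern void assemble_rhs(void);
    extern void solve_pressure(void);

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;
      PetscLogStage  stage_assembly, stage_solve;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

      /* Register named stages; each gets its own section in -log_summary. */
      ierr = PetscLogStageRegister("Assembly", &stage_assembly);CHKERRQ(ierr);
      ierr = PetscLogStageRegister("PressureSolve", &stage_solve);CHKERRQ(ierr);

      ierr = PetscLogStagePush(stage_assembly);CHKERRQ(ierr);
      assemble_rhs();      /* non-PETSc user code, timed under "Assembly" */
      ierr = PetscLogStagePop();CHKERRQ(ierr);

      ierr = PetscLogStagePush(stage_solve);CHKERRQ(ierr);
      solve_pressure();    /* KSP/PC solve (hypre included) timed under "PressureSolve" */
      ierr = PetscLogStagePop();CHKERRQ(ierr);

      ierr = PetscFinalize();
      return ierr;
    }

The same calls are also available from Fortran, and finer-grained timing is
possible with PetscLogEventRegister/PetscLogEventBegin/PetscLogEventEnd inside
a stage. Running each case with -log_summary and keeping one output file per
run then makes it straightforward to compare which sections stop scaling first.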
>>> I thought of doing profiling, but I wonder whether profiling still works
>>> well if the code is compiled with optimization.
>>>
>>> --
>>> Thank you.
>>>
>>> Yours sincerely,
>>>
>>> TAY wee-beng
>>>
>>>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener