Slow speed after changing from serial to parallel

Ben Tay zonexo at gmail.com
Mon Apr 14 05:49:34 CDT 2008


Thank you, Matthew. Sorry to trouble you again.

I tried to run it with -log_summary and found that there are some 
errors in the execution. I was busy with other things and have only 
just come back to this problem, and some of my files on the server 
have since been deleted. It has been a while, but I remember that it 
worked before, only much slower.
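
For reference, I launched the run along these lines (the executable 
name here is only a placeholder):

    mpiexec -np 4 ./mycfd -log_summary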

Anyway, most of the serial code has been updated since then, so it may 
be easier to convert the new serial code than to debug the old 
parallel code. I believe I can still reuse parts of the old parallel 
code, but I hope to approach it better this time.

So suppose I start converting my new serial code to parallel. There 
are two equations to be solved using PETSc, the momentum and the 
Poisson equations. I also need to parallelize other parts of my code. 
I wonder which route is best:

1. Don't change the PETSc part, i.e. continue using PETSC_COMM_SELF, 
and modify the other parts of my code to be parallel (looping, 
updating of values, etc.). Once the execution is correct and the 
speedup is reasonable, modify the PETSc part: the Poisson equation 
first, followed by the momentum equation.

2. Reverse the above order, i.e. modify the PETSc part first (the 
Poisson equation, then the momentum equation), and then do the other 
parts of my code.

I'm not sure whether the above two methods can work or whether there 
will be conflicts. Of course, an alternative would be:

3. Do the Poisson equation, the momentum equation, and the other parts 
of the code separately. That is, code a standalone parallel Poisson 
solver and test it with sample values (along the lines of the sketch 
below), and do the same for the momentum equation and the other parts 
of the code. When each of them works, combine them into the full 
parallel code. However, this will be much more troublesome.
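
For concreteness, here is a rough sketch of the kind of standalone 
test program I have in mind for option 3. It is not my actual code: 
the 1-D Laplacian is only a stand-in for the real 2-D Poisson matrix, 
error checking is omitted, the names are made up, and it is written 
against a recent PETSc API, so some calls may differ between versions. 
The point is simply that creating the matrix and KSP on 
PETSC_COMM_WORLD rather than PETSC_COMM_SELF is what makes the solve 
parallel:

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat      A;
      Vec      x, b;
      KSP      ksp;
      PetscInt i, Istart, Iend, n = 1000;  /* made-up test size */

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* PETSC_COMM_WORLD distributes the rows across processes;
         PETSC_COMM_SELF would give every process its own full copy. */
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
      MatSetFromOptions(A);
      MatSetUp(A);

      /* Each process fills only the rows it owns; a 1-D Laplacian
         stencil stands in for the real Poisson discretization. */
      MatGetOwnershipRange(A, &Istart, &Iend);
      for (i = Istart; i < Iend; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      MatCreateVecs(A, &x, &b);
      VecSet(b, 1.0);                      /* sample right-hand side */

      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetFromOptions(ksp);   /* picks up -ksp_type, -pc_type, ... */
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      VecDestroy(&x);
      VecDestroy(&b);
      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }

Running it on a few processes with -log_summary should then show 
whether KSPSolve itself scales before I touch the rest of the code.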

I hope someone can give me some recommendations.

Thank you once again.

Matthew Knepley wrote:
> 1) There is no way to have any idea what is going on in your code
>     without -log_summary output
>
> 2) Looking at that output, look at the percentage taken by the solver
>     KSPSolve event. I suspect it is not the biggest component, because
>    it is very scalable.
>
>    Matt
>
> On Sun, Apr 13, 2008 at 4:12 AM, Ben Tay <zonexo at gmail.com> wrote:
>   
>> Hi,
>>
>> I have a serial 2D CFD code. As my grid size requirement increases, the
>> simulation takes longer and memory becomes a problem. The grid size has
>> reached 1200x1200, and going higher is not possible due to memory
>> limits.
>>
>> I tried to convert my code to a parallel one, following the examples
>> given. I also needed to restructure parts of my code to enable parallel
>> looping. I first changed the PETSc solver to be parallel and then
>> restructured parts of my code, proceeding as long as the answer for a
>> simple test case was correct. I thought it was not really possible to do
>> any speed testing since the code was not fully parallelized yet. When I
>> had finished most of the conversion, I found that the actual run was
>> much slower, although the answer was correct.
>>
>> So what is the remedy now? I wonder what I should do to check what's
>> wrong. Must I restart everything again? By the way, my grid size is
>> 1200x1200; I believe it should be suitable for a parallel run on 4
>> processors. Is that so?
>>
>> Thank you.
>>     



