Always send output with -log_summary for each run that you do.

On Thu, Feb 23, 2012 at 14:16, Francis Poulin <fpoulin@uwaterloo.ca> wrote:
> Hello,
>
> I am learning to use PETSc, but I am just a novice. I have a rather basic question to ask and could not find an answer in the archives.
>
> I want to test the scalability of a multigrid solver for the 3D Poisson equation. I found ksp/ex22.c, which seems to solve the problem I am interested in. I ran it on a large server using different numbers of processors.
>
> The syntax I used to run it under MPI was
>
> ./ex22 -da_grid_x 64 -da_grid_y 64 -da_grid_z 32

Which version of PETSc?
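For the scaling comparison, launch explicitly through MPI and put the logging option on every run. A sketch of what I mean, assuming an mpiexec launcher and 4 processes (adjust the launcher and process count for your system):

mpiexec -n 4 ./ex22 -da_grid_x 64 -da_grid_y 64 -da_grid_z 32 -log_summary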
> I tested it using 2, 4, 8, and 16 CPUs and found that the time increases. See below. Clearly there is something I don't understand, since the time should decrease.
>
> n    wtime
> ---------------------
> 2    3m58s
> 4    3m54s
> 8    5m51s
> 16   7m23s
>
> Any advice would be greatly appreciated.
>
> Best regards,
> Francis