[petsc-users] on the performance of MPI PETSc
Barry Smith
bsmith at mcs.anl.gov
Thu Jun 4 13:56:36 CDT 2015
> On Jun 4, 2015, at 1:24 PM, Sun, Hui <hus003 at ucsd.edu> wrote:
>
> Hello,
>
> I'm testing ex34.c under the examples of KSP. It's a multigrid 3D poisson solver.
>
> For a 64^3 mesh, the time cost is 1s on 1 node with 12 cores; for a 128^3 mesh, the time cost is 13s on 1 node with 12 cores, and the same on 2 nodes with 6 cores each. For a 256^3 mesh, I use 2 nodes with 12 cores, and the time cost goes up to 726s. This doesn't seem right, since I'm expecting O(N log(N)) scaling. I think it could be that the memory bandwidth is insufficient, and I need to do the bind-to-socket stuff.
>
> But I'm wondering what is the typical time cost for a 256^3 mesh, and then a 512^3 mesh? Please give me a rough idea. Thank you.
There is no way we can answer that. What we can say is that, given the numbers you have for the 64 and 128 meshes, on an appropriate machine you should get much better numbers than 726 seconds for a 256 mesh. First run streams on your machine (http://www.mcs.anl.gov/petsc/documentation/faq.html#computers), check up on the binding business, and if you still get poor results, prepare a report on what you did and send it to us.
Barry
>
> Best,
> Hui