<div dir="ltr">On Sat, Aug 24, 2013 at 12:07 AM, Zhang <span dir="ltr"><<a href="mailto:zyzhang@nuaa.edu.cn" target="_blank">zyzhang@nuaa.edu.cn</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br><br>Recently I wrote a code of projection method for solving the incompressile flow. It includes a poisson solution of pressure.<br>
I transfered the code to be based on Petsc . <br></blockquote><div><br></div><div>Use -pc_type gamg</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
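As a minimal sketch of that suggestion (not a tuned setup), reusing the executable, input file, process count, and tolerance from the runs quoted below; -ksp_monitor and -ksp_converged_reason are standard PETSc options added here only to watch the convergence:

  mpirun -np 8 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type gamg -ksp_rtol 1.e-2 -ksp_monitor -ksp_converged_reason
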
> However, in the 3D lid-driven flow case, the speed of the PETSc version is not yet advantageous.
>
> I tried different combinations of preconditioners with the GMRES solver. Among them, GMRES+PCILU and GMRES+SOR were the fastest.
> For an 80x80x80 grid, the serial GMRES+SOR version took 185.816307 secs. However, for the 120x120x120 case it diverged, and so did GMRES+PCILU.
>
> Then I tried a parallel comparison, as follows:
>
> #############################################################################################
> # with PETSc-3.4.2, time comparison (sec)
> # size (80,80,80), 200 steps, dt=0.002
> # debug version 177.695939 sec
> # opt version   106.694733 sec
> # mpirun -np 8  ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
> # debug version 514.718544 sec
> # opt version   331.114555 sec
> # mpirun -np 12 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
> # debug version 796.765428 sec
> # opt version   686.151788 sec
> # mpirun -np 16 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
>
> I do know that sometimes the problem with speed is not due to PETSc but to my own code, so any suggestion about which combination is best for such a computation is welcome. I know that solving Poisson and Helmholtz problems is very common in numerical work.
> Thank you in advance.
>
> BTW, I also tried superLU_dist as the PC:
> # mpirun -np 16 ./nsproj ../input/sample_20.zzy.dat -ksp_type gmres -pc_type lu -pc_factor_mat_solver_package superlu_dist -ksp_rtol 1.e-2
>
> But with 16 nodes, all grids larger than 20x20x20 run extremely slowly. Since I have never used a direct PC before, is it true that using direct LU as a preconditioner well requires the number of processes to be much larger, so that the direct-solver work on each node is small enough for it to pay off?
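> (As a rough sketch of how one might check where the time goes, the superlu_dist run above could be repeated with PETSc's standard inspection options appended, e.g.
>
> # mpirun -np 16 ./nsproj ../input/sample_20.zzy.dat -ksp_type gmres -pc_type lu -pc_factor_mat_solver_package superlu_dist -ksp_rtol 1.e-2 -log_summary -ksp_view
>
> which prints the solver configuration and a breakdown of where the factorization and solve time is spent.)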
>
> Cheers,
>
> Zhenyu

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener