[petsc-dev] help for use PETSc

Barry Smith bsmith at mcs.anl.gov
Tue Jun 28 21:48:12 CDT 2011


On Jun 28, 2011, at 5:05 PM, Matthew Knepley wrote:

> Jed has given excellent advice. However, your problems sound small. You should try using a direct solver
> like MUMPS (with --download-mumps during configure, and -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps
> during the solve).
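> 
> As a rough sketch, the programmatic equivalent of those runtime options (assuming a KSP named ksp whose operators are already set with KSPSetOperators(); setting the options on the command line as above achieves the same thing without recompiling) would look something like:
> 
> #include <petscksp.h>
> 
> /* Configure ksp to do a single direct solve through MUMPS.
>    Assumes PETSc was configured with --download-mumps. */
> PetscErrorCode UseMumpsDirect(KSP ksp)
> {
>   PC             pc;
>   PetscErrorCode ierr;
> 
>   ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);   /* no Krylov iterations, just apply the preconditioner */
>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>   ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);            /* full LU factorization */
>   ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr); /* let MUMPS do the factorization */
>   return 0;
> }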

   You can also try -pc_type eisenstat -ksp_type cg (for testing purposes you can run with -ksp_monitor_true_residual, but that will slow it down, so only use it to make sure it is converging). Note that this is a simple and not particularly robust iterative method, but it should be a good amount faster than jacobi. As Matt says, LU will be the most robust solver, and as Jed notes, the fieldsplit preconditioner is likely the most efficient, but it requires more work.
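
   For reference, a minimal sketch of selecting that same combination in code (assuming a KSP named ksp; setting it from the command line as above is usually more convenient):

#include <petscksp.h>

/* CG with Eisenstat (SSOR using the Eisenstat trick) preconditioning:
   cheap and simple, but not robust for difficult problems. */
PetscErrorCode UseEisenstatCG(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCEISENSTAT);CHKERRQ(ierr);
  return 0;
}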

   How many processes you can use effectively depends a lot on your parallel machine and on how large your problem is. As a rule of thumb, you need at least 10,000 degrees of freedom per process, and you need a fast network like Myrinet or an IBM BlueGene system; you will never get good parallel performance with Ethernet.

   Barry

> 
>    Matt
> 
> On Tue, Jun 28, 2011 at 2:00 PM, Jed Brown <jed at 59a2.org> wrote:
> On Tue, Jun 28, 2011 at 11:55, tuane <tuane at lncc.br> wrote:
> CG + redundant PC gives us the best result, similar to our original direct solver. We are not sure what “redundant” is.
> 
> Redundant means that the whole problem is solved redundantly (using a direct solver by default) on every process. It only makes sense as a coarse level solver.
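> 
> For example, as the coarse-level solver inside a multigrid preconditioner it would typically be selected with options along these lines (a sketch, assuming -pc_type mg and the standard PCMG/PCREDUNDANT option prefixes):
> 
> -pc_type mg -mg_coarse_pc_type redundant -mg_coarse_redundant_pc_type lu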
>   
> Our results are too dependent on these parameters:
> 
>   1. iterative solver (CG, GMRES, BCGS)
>   2. preconditioners (jacobi, redundant, ILU)
>   3. tolerance
>   4. number of processors
> 
> At this point, you should always run with -ksp_monitor_true_residual to make sure that it is really converging.
> 
> We also tried to use algebraic multigrid, but without success; an error occurs during program execution. We used the routine:
> call PCSetType (pc, PCMG, ierr)
> 
> This is not algebraic multigrid, it is geometric multigrid. If you use DMDA to manage the grid, then you could use geometric multigrid here.
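> 
> To illustrate what DMDA-driven geometric multigrid looks like, here is a self-contained sketch for a plain 2D Poisson problem (this only shows the KSPSetDM/PCMG wiring, not your mixed Raviart-Thomas discretization; the grid size, boundary handling, and constant right-hand side are made up for illustration):
> 
> #include <petscksp.h>
> #include <petscdmda.h>
> 
> static PetscErrorCode ComputeRHS(KSP ksp, Vec b, void *ctx)
> {
>   PetscErrorCode ierr;
>   ierr = VecSet(b, 1.0);CHKERRQ(ierr);                /* constant forcing, illustration only */
>   return 0;
> }
> 
> static PetscErrorCode ComputeMatrix(KSP ksp, Mat A, Mat B, void *ctx)
> {
>   DM             da;
>   PetscInt       i, j, M, N, xs, ys, xm, ym;
>   MatStencil     row, col[5];
>   PetscScalar    v[5];
>   PetscErrorCode ierr;
> 
>   ierr = KSPGetDM(ksp, &da);CHKERRQ(ierr);            /* the DM for this multigrid level */
>   ierr = DMDAGetInfo(da, NULL, &M, &N, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
>   ierr = DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);CHKERRQ(ierr);
>   for (j = ys; j < ys+ym; j++) {
>     for (i = xs; i < xs+xm; i++) {
>       PetscInt n = 0;
>       row.i = i; row.j = j;
>       if (i == 0 || j == 0 || i == M-1 || j == N-1) {  /* Dirichlet rows kept as identity */
>         col[n] = row; v[n++] = 1.0;
>       } else {                                         /* interior 5-point Laplacian */
>         col[n].i = i;   col[n].j = j-1; v[n++] = -1.0;
>         col[n].i = i-1; col[n].j = j;   v[n++] = -1.0;
>         col[n].i = i;   col[n].j = j;   v[n++] =  4.0;
>         col[n].i = i+1; col[n].j = j;   v[n++] = -1.0;
>         col[n].i = i;   col[n].j = j+1; v[n++] = -1.0;
>       }
>       ierr = MatSetValuesStencil(B, 1, &row, n, col, v, INSERT_VALUES);CHKERRQ(ierr);
>     }
>   }
>   ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>   ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>   return 0;
> }
> 
> int main(int argc, char **argv)
> {
>   KSP            ksp;
>   DM             da;
>   PetscErrorCode ierr;
> 
>   ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
>   ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_STAR,
>                       17, 17, PETSC_DECIDE, PETSC_DECIDE, 1, 1, NULL, NULL, &da);CHKERRQ(ierr);
>   ierr = DMSetFromOptions(da);CHKERRQ(ierr);          /* honors -da_refine to build a finer grid */
>   ierr = DMSetUp(da);CHKERRQ(ierr);
>   ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
>   ierr = KSPSetDM(ksp, da);CHKERRQ(ierr);             /* PCMG takes the hierarchy and interpolation from the DM */
>   ierr = KSPSetComputeRHS(ksp, ComputeRHS, NULL);CHKERRQ(ierr);
>   ierr = KSPSetComputeOperators(ksp, ComputeMatrix, NULL);CHKERRQ(ierr);
>   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);        /* e.g. -pc_type mg -pc_mg_levels 3 -da_refine 2 */
>   ierr = KSPSolve(ksp, NULL, NULL);CHKERRQ(ierr);
>   ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
>   ierr = DMDestroy(&da);CHKERRQ(ierr);
>   ierr = PetscFinalize();
>   return 0;
> }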
> 
> But I think you are using a Raviart-Thomas mixed space, in which case the default interpolants from DMDA are not going to work for you.
> 
> The simplest thing you can do is to use PCFieldSplit to eliminate the fluxes such that the preconditioner can work with the non-mixed (H^1-conforming) operator defined in the potential/pressure space.
> 
> 
> The following won't work right now, but it should work soon. I'm describing it here for the others on petsc-dev. If you call
> 
> PCFieldSplitSetIS(pc,"u",is_fluxes);
> PCFieldSplitSetIS(pc,"p",is_potential);
> 
> and
> 
> -pc_type fieldsplit -pc_fieldsplit_type schur
> -fieldsplit_u_pc_type jacobi # The (u,u) block is diagonal for lowest order RT spaces
> -fieldsplit_p_pc_type ml # or other multigrid, uses SIMPLE approximation of the Schur complement which happens to be exact because the (u,u) block is diagonal.
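> 
> A sketch of how the two index sets might be built and attached (assuming, purely for illustration, that the flux unknowns come first in the global ordering; nu_local/np_local and rstart_u/rstart_p are made-up names for the local sizes and first global indices of each field on this process):
> 
> #include <petscksp.h>
> 
> PetscErrorCode SetupFieldSplit(PC pc, PetscInt nu_local, PetscInt rstart_u,
>                                PetscInt np_local, PetscInt rstart_p)
> {
>   IS             is_fluxes, is_potential;
>   PetscErrorCode ierr;
> 
>   /* Contiguous index sets describing which global rows belong to each field */
>   ierr = ISCreateStride(PETSC_COMM_WORLD, nu_local, rstart_u, 1, &is_fluxes);CHKERRQ(ierr);
>   ierr = ISCreateStride(PETSC_COMM_WORLD, np_local, rstart_p, 1, &is_potential);CHKERRQ(ierr);
> 
>   ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
>   ierr = PCFieldSplitSetIS(pc, "u", is_fluxes);CHKERRQ(ierr);
>   ierr = PCFieldSplitSetIS(pc, "p", is_potential);CHKERRQ(ierr);
>   ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);  /* same as -pc_fieldsplit_type schur */
> 
>   ierr = ISDestroy(&is_fluxes);CHKERRQ(ierr);
>   ierr = ISDestroy(&is_potential);CHKERRQ(ierr);
>   return 0;
> }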
> 
> 
> This won't work right now because PCFieldSplit does not actually call MatGetSchurComplement(). It would simplify fieldsplit.c to use MatGetSchurComplement(), but then MatGetSubMatrix() would be called twice for certain blocks in the matrix, once inside the Schur complement and once directly from fieldsplit.c. This is why I really want to add a mode in which the parent retains ownership and the caller gets an (intended to be) read-only reference when MatGetSubMatrix() and MatGetSchurComplement() are called.
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener



