Jed has given excellent advice. However, your problems sound small. You should try using a direct solver like MUMPS (with --download-mumps during configure, and -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps during the solve).
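As a minimal sketch (the executable name, process count, and extra --download flags below are placeholders, not taken from your setup), the configure and run could look like:

    ./configure --download-mumps --download-scalapack
    mpiexec -n 4 ./mysolver -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package mumps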
   Matt

On Tue, Jun 28, 2011 at 2:00 PM, Jed Brown <jed@59a2.org> wrote:
<div class="gmail_quote"><div class="im">On Tue, Jun 28, 2011 at 11:55, tuane <span dir="ltr"><<a href="mailto:tuane@lncc.br" target="_blank">tuane@lncc.br</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> CG + a redundant PC gives us the best result, similar to our original direct solver. We are not sure what "redundant" is.
>
> Redundant means that the whole problem is solved redundantly (using a direct solver by default) on every process. It only makes sense as a coarse-level solver.
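(For concreteness, a hedged sketch of where redundant usually appears, assuming a PCMG hierarchy and the usual coarse-level option prefix; the level count is arbitrary:

    -pc_type mg -pc_mg_levels 3 \
    -mg_coarse_pc_type redundant -mg_coarse_redundant_pc_type lu

i.e. the coarse problem is gathered and factored on every process while the fine levels stay distributed.)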
<div class="im">
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
>> Our results are too dependent on these parameters:
>>
>> 1. iterative solver (CG, GMRES, BCGS)
>> 2. preconditioner (Jacobi, redundant, ILU)
>> 3. tolerance
>> 4. number of processors
>
> At this point, you should always run with -ksp_monitor_true_residual to make sure that it is really converging.
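(For example, with placeholder solver choices, tolerance, and executable name, each candidate configuration could be run as

    mpiexec -n 4 ./mysolver -ksp_type cg -pc_type jacobi -ksp_rtol 1e-8 \
        -ksp_monitor_true_residual -ksp_converged_reason

and the true residual histories compared across configurations.)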
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
We also tried to use the Algebraic Multigrid, but without success. An error occurs in the program execution. We used the routines:<br>
call PCSetType (pc, PCMG, ierr)<br></div></blockquote><div><br></div></div><div>This is not algebraic multigrid, it is geometric multigrid. If you use DMDA to manage the grid, then you could use geometric multigrid here.</div>
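(If algebraic multigrid is what was intended, a hedged sketch, assuming PETSc was configured with --download-ml or --download-hypre, would be to select one of the external AMG packages instead of PCMG, e.g.

    -pc_type ml
    -pc_type hypre -pc_hypre_type boomeramg

or PCSetType(pc, PCML) / PCSetType(pc, PCHYPRE) from code.)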
> But I think you are using a Raviart-Thomas mixed space, in which case the default interpolants from DMDA are not going to work for you.
>
> The simplest thing you can do is to use PCFieldSplit to eliminate the fluxes so that the preconditioner can work with the non-mixed (H^1-conforming) operator defined in the potential/pressure space.
> The following won't work right now, but it should work soon. I'm describing it here for the others on petsc-dev. If you call
>
>     PCFieldSplitSetIS(pc,"u",is_fluxes);
>     PCFieldSplitSetIS(pc,"p",is_potential);
>
> and run with
>
>     -pc_type fieldsplit -pc_fieldsplit_type schur
>     -fieldsplit_u_pc_type jacobi  # the (u,u) block is diagonal for lowest-order RT spaces
>     -fieldsplit_p_pc_type ml      # or another multigrid; uses a SIMPLE approximation of the Schur complement, which happens to be exact because the (u,u) block is diagonal
>
> This won't work right now because PCFieldSplit does not actually call MatGetSchurComplement() as designed. It would simplify fieldsplit.c to use MatGetSchurComplement(), but then MatGetSubMatrix() would be called twice for certain blocks in the matrix: once inside the Schur complement and once directly from fieldsplit.c. This is why I want so much to make a mode in which the parent retains ownership and the caller gets an (intended to be) read-only reference when MatGetSubMatrix() and MatGetSchurComplement() are called.
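(As a hedged illustration of how the two index sets above might be constructed: the local sizes n_flux and n_pot, and the assumption that each process numbers its flux unknowns before its potential unknowns, are hypothetical, not taken from your code.

    /* A is the assembled mixed operator; n_flux/n_pot are local counts (assumed). */
    IS       is_fluxes, is_potential;
    PetscInt rstart, rend;
    MatGetOwnershipRange(A, &rstart, &rend);
    ISCreateStride(PETSC_COMM_WORLD, n_flux, rstart, 1, &is_fluxes);
    ISCreateStride(PETSC_COMM_WORLD, n_pot, rstart + n_flux, 1, &is_potential);
    PCFieldSplitSetIS(pc, "u", is_fluxes);
    PCFieldSplitSetIS(pc, "p", is_potential);

Any numbering works as long as the two index sets together cover the locally owned rows.)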
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener