[petsc-users] Difference between Block Jacobi and ILU?
    Barry Smith
    bsmith at mcs.anl.gov
    Tue Feb 23 14:13:05 CST 2016

> On Feb 23, 2016, at 2:03 PM, Justin Chang <jychang48 at gmail.com> wrote:
> 
> Two more questions, somewhat related maybe.
> 
> Is there a practical case where one would use plain Jacobi preconditioning over ILU?
  For well-conditioned problems an iteration of Jacobi is cheaper than an iteration of ILU (about half the work), so Jacobi can beat ILU.
  Also for problems where ILU encounters zero (or tiny) pivots and thus yields a bad preconditioner.
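  For example (a sketch; ./app below is just a placeholder for your executable), run both and compare the monitored iteration counts and the timings in the -log_summary output:

      ./app -pc_type jacobi -ksp_monitor -log_summary
      ./app -pc_type ilu -ksp_monitor -log_summary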
> 
> Also, what exactly is happening when one uses -pc_bjacobi_blocks 2 ?
  By default PETSc uses one block per MPI process; -pc_bjacobi_blocks 2 requests exactly 2 blocks in total, regardless of the number of processes. See PCBJacobiSetTotalBlocks() and PCBJacobiSetLocalBlocks().
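  If you want to set this from code rather than the options database, here is a minimal standalone sketch (the toy tridiagonal system is only there so the example runs; it is not from this thread):

      #include <petscksp.h>

      int main(int argc, char **argv)
      {
        Mat            A;
        Vec            x, b;
        KSP            ksp;
        PC             pc;
        PetscInt       i, n = 100, col[3];
        PetscScalar    v[3] = {-1.0, 2.0, -1.0};
        PetscErrorCode ierr;

        ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

        /* assemble a toy 1-D Laplacian so there is something to solve */
        ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
        ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
        ierr = MatSetFromOptions(A);CHKERRQ(ierr);
        ierr = MatSetUp(A);CHKERRQ(ierr);
        for (i = 1; i < n - 1; i++) {
          col[0] = i - 1; col[1] = i; col[2] = i + 1;
          ierr = MatSetValues(A, 1, &i, 3, col, v, INSERT_VALUES);CHKERRQ(ierr);
        }
        ierr = MatSetValue(A, 0, 0, 2.0, INSERT_VALUES);CHKERRQ(ierr);
        ierr = MatSetValue(A, 0, 1, -1.0, INSERT_VALUES);CHKERRQ(ierr);
        ierr = MatSetValue(A, n-1, n-1, 2.0, INSERT_VALUES);CHKERRQ(ierr);
        ierr = MatSetValue(A, n-1, n-2, -1.0, INSERT_VALUES);CHKERRQ(ierr);
        ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
        ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
        ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
        ierr = VecSet(b, 1.0);CHKERRQ(ierr);

        ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
        ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
        ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
        ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
        /* exactly 2 blocks over the whole matrix; NULL lets PETSc size them */
        ierr = PCBJacobiSetTotalBlocks(pc, 2, NULL);CHKERRQ(ierr);
        /* called last so command line options (e.g. -sub_pc_type) can still override */
        ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
        ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

        ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
        ierr = MatDestroy(&A);CHKERRQ(ierr);
        ierr = VecDestroy(&x);CHKERRQ(ierr);
        ierr = VecDestroy(&b);CHKERRQ(ierr);
        ierr = PetscFinalize();
        return ierr;
      }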
> 
> Thanks,
> Justin
> 
> On Wed, Jan 13, 2016 at 9:37 PM, Justin Chang <jychang48 at gmail.com> wrote:
> Thanks Satish,
> 
> And yes I meant sequentially.
> 
> On Wed, Jan 13, 2016 at 8:26 PM, Satish Balay <balay at mcs.anl.gov> wrote:
> On Wed, 13 Jan 2016, Justin Chang wrote:
> 
> > Hi all,
> >
> > What exactly is the difference between these two preconditioners? When I
> > use them to solve a Galerkin finite element poisson problem, I get the
> > exact same performance (iterations, wall-clock time, etc).
> 
> you mean - when you run sequentially?
> 
> With block Jacobi you decide the number of blocks. The default is 1 block/proc,
> i.e. for a sequential run you have only 1 block, i.e. the whole matrix.
> 
> So the following are essentially the same:
> -pc_type bjacobi -pc_bjacobi_blocks 1 [default] -sub_pc_type ilu [default]
> -pc_type ilu
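> 
> For example (a quick check; ./app is just a placeholder for your
> executable), on one process
> 
> ./app -pc_type bjacobi -sub_pc_type ilu -ksp_view
> ./app -pc_type ilu -ksp_view
> 
> should both show an ILU(0) factorization of the entire matrix in the
> -ksp_view output.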
> 
> Satish
> 
> > The only thing is I can't seem to use ILU in parallel, though.
> 
> 
> 
    
    