[petsc-dev] synchronized printing, singletons, and jumbled GAMG/BJacobi output

Barry Smith bsmith at mcs.anl.gov
Sat May 25 21:04:02 CDT 2013


On May 25, 2013, at 8:58 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> Barry Smith <bsmith at mcs.anl.gov> writes:
> 
>>  Why is gamg using bjacobi as the coarse solver? 
> 
> I swear we've talked about this several times, but I can't find a
> definitive email thread.  Redundant is slower except possibly when
> coarsening entirely in-place, and even then, usually not.  GAMG
> restricts the active set incrementally, so that once it reaches the
> coarse grid, all dofs are on one process.  At that point, it would be
> crazy to broadcast all that data and do a redundant solve.

  True

>  Instead, it
> uses block Jacobi in which all blocks but one are empty.

   This seems a rather ad hoc way of handling it. Why not have a proper mechanism for telescoping solvers rather than a block Jacobi with zero-sized blocks on most processes?
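
   For concreteness, the two coarse-solver configurations under
discussion can be written at the options level roughly as follows (the
option prefixes are illustrative and depend on how the MG hierarchy is
set up):

    # What GAMG does now: block Jacobi on the coarse level, where every
    # block but one is empty and the nonempty block is factored with LU
    -mg_coarse_pc_type bjacobi -mg_coarse_sub_pc_type lu

    # The redundant alternative: broadcast the coarse problem and solve
    # it on every process
    -mg_coarse_pc_type redundant -mg_coarse_redundant_pc_type lu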
> 
>>  And why does it have different solvers on each process? 
> 
> It doesn't, but it calls PCBJacobiGetSubKSP() to configure the inner
> PC to be PCLU, and PCBJacobiGetSubKSP conservatively sets this flag:
> 
>  jac->same_local_solves = PETSC_FALSE;        /* Assume that local solves are now different;
>                                                  not necessarily true though!  This flag is
>                                                  used only for PCView_BJacobi() */

   Why not just fix this instead, so that it doesn't set the flag in this situation?
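
   For reference, the call sequence in question is roughly the
following (a sketch; error checking is omitted and variable names are
illustrative):

    KSP      *subksp;
    PC        subpc;
    PetscInt  nlocal, first;

    /* Must be called after PCSetUp(); returns the local sub-KSPs and,
       per the comment above, conservatively clears same_local_solves. */
    PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);
    /* Configure the local direct solver.  In the GAMG case the local
       block is empty on all but one process. */
    KSPGetPC(subksp[0], &subpc);
    PCSetType(subpc, PCLU);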

> 
>>  Seems more a problem with GAMG than with viewing.
> 
> If the user applies a custom configuration on each process, such as a
> different Chebyshev estimate, I still don't think it's acceptable for
> -ksp_view to create O(P) output.

   I am not arguing about that.

>  We have to be careful to summarize in
> a scalable way.
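
   One scalable way to summarize would be to decide at view time, with
a single small reduction, whether every rank configured the same local
solver, rather than trusting a conservatively cleared flag or printing
per-rank output.  A sketch (subpc and comm are assumed to be in scope;
this is not PETSc's actual implementation):

    PCType      type;
    char        local[64], root[64];
    PetscBool   eq;
    PetscMPIInt same, allsame;

    /* Canonical description of this rank's sub-solver, e.g. its type */
    PCGetType(subpc, &type);
    PetscStrncpy(local, type, sizeof(local));
    PetscStrncpy(root, local, sizeof(root));
    /* Broadcast rank 0's description and compare with the local one */
    MPI_Bcast(root, sizeof(root), MPI_CHAR, 0, comm);
    PetscStrcmp(local, root, &eq);
    same = (PetscMPIInt)eq;
    /* Logical AND across all ranks: one scalar result, O(log P) cost */
    MPI_Allreduce(&same, &allsame, 1, MPI_INT, MPI_LAND, comm);
    if (allsame) {
      /* print one summary of the common sub-solver */
    } else {
      /* print a compressed description, never O(P) lines */
    }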



