[petsc-dev] [issue1595] Issues of limited number of MPI communicators when having many instances of hypre boomerAMG with Moose
Rob Falgout hypre Tracker
hypre-support at llnl.gov
Wed Apr 4 16:14:02 CDT 2018
Rob Falgout <rfalgout at llnl.gov> added the comment:
1. Yes. AMG coarsens recursively based on matrix entries, so you can't know a priori which ranks will be active by the time you get to the coarsest grid (see the first sketch below for why this matters for the communicator count).
2. If you use the other direct solver option, it will be a little less performant; Ulrike can probably comment more quantitatively. The performance degradation will depend on the total number of tasks in the fine-grid communicator. Another option that might well work is the iterative method Ulrike mentioned. If it works for the systems in Moose, it should be the best option to use (see the second sketch below).
4. I agree that this would be a mess to debug and to communicate clearly in a user manual. I also think it is not a common use case. Implementing it, and keeping it correct in the future, especially in the face of new code written by hypre developers, sounds like a fair amount of work. If it really isn't a common use case, that effort has to be weighed against the hundreds of other things we would like to do in hypre. Maybe this is easier than I am thinking and there is more demand than I was aware of? I'm not trying to simply write this off.
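On item 1, for anyone following why the communicator limit comes into play: a solver that restricts coarse-level work to the ranks that still own rows typically splits the fine-grid communicator, so each AMG hierarchy can end up holding extra communicators. A minimal illustrative sketch of that pattern (not necessarily hypre's actual internals):

#include <mpi.h>

/* Illustrative only: build a sub-communicator containing just the ranks
 * that still own rows on the coarsest level.  Every hierarchy that does
 * this holds one more communicator from MPI's finite pool, which is the
 * resource being exhausted in this issue. */
MPI_Comm active_coarse_comm(MPI_Comm fine_comm, int num_local_coarse_rows)
{
    int rank;
    MPI_Comm coarse_comm;
    MPI_Comm_rank(fine_comm, &rank);

    /* Ranks with no coarse rows pass MPI_UNDEFINED and receive MPI_COMM_NULL. */
    int color = (num_local_coarse_rows > 0) ? 0 : MPI_UNDEFINED;
    MPI_Comm_split(fine_comm, color, rank, &coarse_comm);
    return coarse_comm;   /* caller is responsible for MPI_Comm_free() */
}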
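On item 2, a hedged sketch of how a coarsest-level solve is typically selected through the BoomerAMG interface. The exact relax-type values behind the options Ulrike mentioned aren't spelled out in this message, so the values below are illustrative assumptions only:

#include "HYPRE_parcsr_ls.h"

/* Sketch: pick the coarsest-level (k = 3) relaxation.  Relax type 9
 * requests a Gaussian-elimination direct solve on the coarsest grid;
 * an iterative type such as l1-Jacobi (18) with a few sweeps avoids the
 * direct solve at some cost.  Whether these settings suit the systems
 * in Moose is an assumption, not a recommendation from this thread. */
void set_coarse_solve(HYPRE_Solver amg, int use_direct)
{
    if (use_direct) {
        HYPRE_BoomerAMGSetCycleRelaxType(amg, 9, 3);   /* direct solve   */
    } else {
        HYPRE_BoomerAMGSetCycleRelaxType(amg, 18, 3);  /* l1-Jacobi      */
        HYPRE_BoomerAMGSetCycleNumSweeps(amg, 5, 3);   /* several sweeps */
    }
}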
-Rob
____________________________________________
hypre Issue Tracker <hypre-support at llnl.gov>
<http://cascb1.llnl.gov/hypre/issue1595>
____________________________________________