[petsc-dev] Fwd: Request for new PETSc capability

Barry Smith bsmith at mcs.anl.gov
Thu Jan 27 20:40:18 CST 2011


  What's up with this?

Begin forwarded message:

> From: "Stephen C. Jardin" <sjardin at pppl.gov>
> Date: January 27, 2011 6:49:27 PM CST
> To: <bsmith at mcs.anl.gov>
> Cc: "Lois Curfman McInnes" <curfman at mcs.anl.gov>, <egng at lbl.gov>, <kd2112 at columbia.edu>
> Subject: Request for new PETSc capability
> 
> Dear Barry,
> 
> The M3D-C1 project is one of the major code projects in CEMM.   It is a fully
> implicit formulation of the 3D MHD equations using high-order 3D finite
> elements with continuous derivatives in all directions.  In a typical
> problem, the 3D domain consists of approximately 100 2D planes, spread out
> equally around a torus.  The grid we use is unstructured within each 2D plane
> (where the coupling of elements is very strong), but is structured and
> regular across the planes (where the coupling is much weaker and is confined
> to nearest neighbors).
> 
> Our plan has always been to solve the large sparse matrix equation we get
> using GMRES with a block Jacobi preconditioner obtained by using SuperLU_dist
> within each 2D plane.   We have implemented this using PETSc and find that it
> leads to a very efficient iterative solve that converges in just a few
> iterations for the time step and other parameters that we normally use.
> However, the present version of PETSc (3.1) allows only a single processor
> per plane (block) when using the block Jacobi preconditioner.  This severely
> limits the maximum problem size that we can run, as we can use only 100
> processors for a problem with 100 2D planes.
> 
> Several years ago, when we were planning this project, we spoke with Hong
> Zhang about this solver strategy and she told us that if there was a demand
> for it, the present limitation restricting the block Jacobi preconditioner to
> a single processor could be lifted.  We are now at a point in our project
> where we need to request this.  We have demonstrated good convergence of the
> iterative solver, but need to be able to run with 10-100 processors per plane
> (block) in order to use 1000-10000 processors in total to obtain the kind of
> resolution we need for our applications.
> 
> Would it be possible for your group to generalize the block Jacobi
> preconditioner option so that the blocks could be distributed over multiple
> processors?  If so, could you give us a timeline for this to appear in a
> PETSc release?
> 
> Thank you, and Best Regards,
> 
> Steve Jardin (for the CEMM team)
> 
> 
> Cc Lois McInnes
>   Esmond Ng
>   David Keyes
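
For anyone following along, the configuration described above corresponds
roughly to the sketch below.  This is illustrative only, not code from
M3D-C1: it assumes a parallel Mat and Vecs are already assembled, uses a
placeholder block count nplanes, and is written against a recent PETSc
(PetscCall-style error handling; the 3.1-era macros and some call
signatures differ).

/* Minimal sketch (not M3D-C1 code): GMRES with a block Jacobi
 * preconditioner, one block per 2D plane, each block solved by a
 * direct factorization.  The Mat/Vec arguments and nplanes are
 * placeholders supplied by the caller. */
#include <petscksp.h>

PetscErrorCode SolveWithPlaneBlocks(Mat A, Vec b, Vec x, PetscInt nplanes)
{
  KSP ksp;
  PC  pc;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPGMRES));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCBJACOBI));
  /* One preconditioner block per 2D plane. */
  PetscCall(PCBJacobiSetTotalBlocks(pc, nplanes, NULL));
  /* The per-block solver is picked up from the options database, e.g.
   *   -sub_ksp_type preonly -sub_pc_type lu
   *   -sub_pc_factor_mat_solver_type superlu_dist
   * (the solver-selection option is spelled
   *  -sub_pc_factor_mat_solver_package in the 3.1 series). */
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}

The generalization requested in the message is to let each of those blocks
span several MPI processes.  Later PETSc releases do support running with
fewer blocks than ranks (e.g. -pc_bjacobi_blocks, or PCBJacobiSetTotalBlocks
with a block count smaller than the communicator size), with a parallel
factorization such as SuperLU_DIST handling each block on its
subcommunicator.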



