[petsc-users] GPU local direct solve of penta-diagonal

Ed D'Azevedo dazevedoef at ornl.gov
Thu Dec 12 15:38:35 CST 2013


Hi Karli,

Yes, each MPI process is responsible for solving a system of nonlinear 
equations on a number of grid cells.
The nonlinear equations are solved by Picard iteration, and the
time-consuming part is forming and solving the nonsymmetric sparse
linear system that arises from a rectangular grid with a regular
finite-difference stencil.  All the linear systems have the same
sparsity pattern but may have different numerical values.

Since there are 16 cores on each node of Titan, up to 16 separate,
independent linear systems can be solved concurrently on each node.
One may not want to batch or synchronize the solvers, since different
grid cells may require different numbers of Picard iterations.
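Continuing the sketch above, each rank could simply loop over the cells
assigned to it using purely sequential (PETSC_COMM_SELF) objects, so no
collective calls force the ranks, or the cells, into lockstep.  The cell
count, system size, and assembly callback below are made up for
illustration; picard_solve() is the sketch shown earlier.

#include <petscksp.h>

/* Per-rank driver: every rank works through its own cells independently,
 * and each cell may take however many Picard iterations it needs. */
PetscErrorCode solve_local_cells(PetscInt ncells_local, PetscInt n,
                                 PetscErrorCode (*assemble_cell)(PetscInt, Mat, Vec),
                                 PetscErrorCode (*update)(Mat, Vec, Vec))
{
  for (PetscInt c = 0; c < ncells_local; ++c) {
    Mat A;
    Vec b, x;

    /* about 5 nonzeros per row for a 5-point stencil */
    PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 5, NULL, &A));
    PetscCall(VecCreateSeq(PETSC_COMM_SELF, n, &b));
    PetscCall(VecDuplicate(b, &x));

    PetscCall((*assemble_cell)(c, A, b));         /* initial values for this cell */
    PetscCall(picard_solve(A, b, x, 50, update)); /* this cell's own Picard loop */

    PetscCall(MatDestroy(&A));
    PetscCall(VecDestroy(&b));
    PetscCall(VecDestroy(&x));
  }
  return PETSC_SUCCESS;
}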

Ed


On 12/12/2013 04:15 PM, Karl Rupp wrote:
> Hi Mark,
>
>> We have a lot of 5-point stencil operators on ~50x100 grids to solve.
>> These are not symmetric and we have been using LU.  We want to move
>> this onto GPUs (Titan).  What resources are there to do this?
> do you have lots of problems to solve simultaneously? Or any other
> feature that makes this problem expensive? 50x100 would mean a system
> size of about 5000 dofs, which is too small to really benefit from GPUs.
>
> Best regards,
> Karli
>
