[petsc-users] Block Tridiagonal Solver

Smith, Barry F. bsmith at mcs.anl.gov
Mon Sep 9 09:00:37 CDT 2019


  John,

     The individual block tridiagonal systems are pretty small for solving in parallel with MPI; you are unlikely to get much improvement from focusing on them.

     Some general comments/suggestions:

        1) ADI methods are generally difficult to parallelize with MPI.
        2) There are more modern methods that may converge much faster than ADI.

        For your problem I would consider using (what in PETSc we call) a field split preconditioner, with an appropriate preconditioner for each field: usually some sort of multigrid for the elliptic terms and a simpler iterative method for the other terms; for the reactions you can use a block preconditioner that solves all the reactions on each cell together. The exact details depend on your discretization and equations. From the discussion so far I am speculating that you are using a single structured grid? Are you using a staggered grid (for example, cell-centered pressure with edge- or vertex-centered values for the other variables), or are all variables collocated? Is the discretization more or less traditional finite differences, or is it more finite-volume based?
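
       To make the field split idea concrete, here is an untested sketch. The block size of 7 (5 flow variables plus 2 species), the split names, and the function name SetupFieldSplit are purely illustrative and assume the unknowns are stored interlaced:

#include <petscksp.h>

/* Untested sketch: assumes 7 interlaced unknowns per grid point
   (5 flow variables plus 2 species); indices and names are illustrative */
PetscErrorCode SetupFieldSplit(KSP ksp)
{
  PC             pc;
  PetscInt       flow[]    = {0,1,2,3,4};   /* density, momentum, energy */
  PetscInt       species[] = {5,6};         /* reacting species          */
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetBlockSize(pc,7);CHKERRQ(ierr);   /* interlaced storage */
  ierr = PCFieldSplitSetFields(pc,"flow",5,flow,flow);CHKERRQ(ierr);
  ierr = PCFieldSplitSetFields(pc,"species",2,species,species);CHKERRQ(ierr);
  /* the solver for each split is then chosen at run time, for example
     -fieldsplit_flow_pc_type gamg  -fieldsplit_species_pc_type bjacobi */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}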

       This is likely more than what you are asking for, but if you have collocated variables you might consider using PETSc's DMDACreate3d() construct to manage the parallel decomposition of your problem. This also makes it straightforward to use the PCFIELDSPLIT preconditioner.
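
       Something like the following; the 100^3 grid, the 7 degrees of freedom per node, and the function name CreateCollocatedSolver are placeholders for whatever your problem actually has. Naming the fields lets PCFIELDSPLIT find them from the DM instead of by index:

#include <petscdmda.h>
#include <petscksp.h>

/* Sketch only: 100^3 grid, 7 collocated unknowns per node, stencil width 1;
   all of these numbers are placeholders */
PetscErrorCode CreateCollocatedSolver(MPI_Comm comm,KSP *ksp)
{
  DM             da;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,100,100,100,PETSC_DECIDE,PETSC_DECIDE,
                      PETSC_DECIDE,7,1,NULL,NULL,NULL,&da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMDASetFieldName(da,0,"density");CHKERRQ(ierr);
  /* ... name the remaining fields the same way ... */
  ierr = KSPCreate(comm,ksp);CHKERRQ(ierr);
  ierr = KSPSetDM(*ksp,da);CHKERRQ(ierr);   /* you still supply the operator,
                                               e.g. with KSPSetComputeOperators() */
  ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);      /* the KSP keeps its own reference */
  PetscFunctionReturn(0);
}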

       If you have a staggered grid you could use DMStagCreate3d() to similarly manage the parallel decomposition. In both cases the hope is that you can reuse much of your current code that applies the operators and computes the (approximate) Jacobian.
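
       For the staggered case the analogous sketch would look something like this; here I am guessing at one pressure per element and one velocity component per face (the grid size, the dof counts, and the name CreateStaggeredDM are again placeholders), so adjust the per-location counts to match your layout:

#include <petscdmstag.h>

/* Sketch only: 100^3 cells, 0 dof on vertices and edges, 1 dof per face
   (a velocity component), 1 dof per element (pressure) */
PetscErrorCode CreateStaggeredDM(MPI_Comm comm,DM *dm)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMStagCreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                        100,100,100,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                        0,0,1,1,DMSTAG_STENCIL_BOX,1,NULL,NULL,NULL,dm);CHKERRQ(ierr);
  ierr = DMSetFromOptions(*dm);CHKERRQ(ierr);
  ierr = DMSetUp(*dm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}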

       I realize this may involve far more refactoring of your code than you are able or willing to do, so I just float these ideas in case you are looking for possible major parallel speedups for your simulations.


   Barry


> On Sep 9, 2019, at 8:22 AM, John L. Papp via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> Thanks for the help.
> 
> I didn't want to get too far into the weeds about the numerical method, just that I have a block tridiagonal system that needs to be solved.  If it helps any, the system comes from an ADI scheme on the Navier-Stokes equations.  The [A], [B], and [C] block matrices correspond to the [Q]_i-1, [Q]_i, and [Q]_i+1 vector unknowns (density, momentum, energy, species, etc.) for each I, J, K sweep through the solution grid.  So, technically, I do need to solve batches of tridiagonal problems.  I'll take a look at your solvers as they seem to be less heavyweight than PETSc.
> 
> Thanks,
> 
> John
> 
> On 9/6/2019 5:32 PM, Jed Brown wrote:
>> Where do your tridiagonal systems come from?  Do you need to solve one
>> at a time, or batches of tridiagonal problems?
>> 
>> Although it is not in PETSc, we have some work on solving the sort of
>> tridiagonal systems that arise in compact discretizations, which it
>> turns out can be solved much faster than generic tridiagonal problems.
>> 
>>   https://tridiaglu.github.io/index.html
>> 
>> "John L. Papp via petsc-users" <petsc-users at mcs.anl.gov> writes:
>> 
>>> Hello,
>>> 
>>> I need a parallel block tridiagonal solver and thought PETSc would be
>>> perfect.  However, there seems to be no specific example showing exactly
>>> which VecCreate and MatCreate functions to use.  I searched the archive
>>> and the web and there are no explicit block tridiagonal examples
>>> (although the ex23.c example solves a tridiagonal matrix) and the manual is
>>> vague on the subject.  So a couple of questions:
>>> 
>>>  1. Is it better to create a monolithic matrix (MatCreateAIJ) and vector
>>>     (VecCreate)?
>>>  2. Is it better to create a block matrix (MatCreateBAIJ) and vector
>>>     (VecCreate and then VecSetBlockSize or is there an equivalent block
>>>     vector create)?
>>>  3. Which parallel solver(s) are best for solving Dx=b when D is a
>>>     block tridiagonal matrix?
>>> 
>>> If this helps, each row will be owned by the same process.  In other
>>> words, the data used to fill the [A] [B] [C] block matrices in a row of
>>> the D block tridiagonal matrix will reside on the same process.  Hence,
>>> I don't need to store the individual [A], [B], and [C] block matrices in
>>> parallel, just the overall block tridiagonal matrix on a row-by-row basis.
>>> 
>>> Thanks in advance,
>>> 
>>> John
>>> 
> 
> -- 
> **************************************************************
> Dr. John Papp
> Senior Research Scientist
> CRAFT Tech.
> 6210 Kellers Church Road
> Pipersville, PA 18947
> 
> Email:  jpapp at craft-tech.com
> Phone:  (215) 766-1520
> Fax  :  (215) 766-1524
> Web  :  http://www.craft-tech.com
> 
> **************************************************************
> 


