[petsc-users] Migrating to parallel PETSc

Matthew Knepley knepley at gmail.com
Tue Feb 15 15:43:55 CST 2022


On Tue, Feb 15, 2022 at 4:13 PM Bojan Niceno <bojan.niceno.scientist at gmail.com> wrote:

> Dear PETSc users,
>
>
> I have an in-house computational fluid dynamics (CFD) solver, written in
> Fortran 2008 and parallelized with MPI, with its own home-grown suite of
> linear solvers.  The code is unstructured, performs domain decomposition
> with METIS, and all communication buffers, that is, the connectivity
> between processors, have been properly worked out.
>
> A couple of weeks back, I decided to try out the PETSc suite of solvers.
> After some initial setbacks, I managed to compile my code with PETSc and
> have the sequential version working fine :-)
>
> I am essentially using the following PETSc routines to get the code
> solving linear systems with PETSc:
>
>  I set up the working space as follows:
>
>   call PetscInitialize(PETSC_NULL_CHARACTER, Pet % petsc_err)
>   call MatCreateSeqAIJ(PETSC_COMM_SELF, ...  ! Create matrix A
>   call MatSeqAIJSetColumnIndices(....        ! Set its sparsity pattern
>   call VecCreateSeq(PETSC_COMM_SELF,  ...  ! Create Vector x
>   call VecCreateSeq(PETSC_COMM_SELF,  ...  ! Create Vector b
>   call KSPCreate(PETSC_COMM_SELF,  ...       ! Create solver context
>
> Then in order to solve a system, I do:
>
>   call MatSetValue(Pet % petsc_A,   ! Inside a loop through matrix entries
>                    i-PETSC_ONE,
>                    k-PETSC_ONE, ...
>   call MatAssemblyBegin(Pet % petsc_A, MAT_FINAL_ASSEMBLY, Pet % petsc_err)
>   call MatAssemblyEnd  (Pet % petsc_A, MAT_FINAL_ASSEMBLY, Pet % petsc_err)
>
>   call VecSetValue(Pet % petsc_x, ... ! Fill up x
>   call VecSetValue(Pet % petsc_b, ... ! Fill up b
>
>   call KSPSetType(Pet % petsc_ksp ... ! Set solver
>   call KSPGetPC(Pet % petsc_ksp, ...  ! get preconditioner context
>   call PCSetType(Pet % petsc_pc, ...  ! Set preconditioner
>
>   call KSPSetFromOptions(Pet % petsc_ksp, Pet % petsc_err)
>
>   call KSPSetUp         (Pet % petsc_ksp, Pet % petsc_err)
>
>
>   ! Finally solve
>   call KSPSolve(Pet % petsc_ksp, ...
>
> Once this was up and running, I thought that in order to have the parallel
> version I would merely have to replace the "Seq" versions of the above
> functions with their parallel counterparts.  I was expecting to find a
> parallel version of MatSeqAIJSetColumnIndices, but it doesn't seem to
> exist.  I have found parallel counterparts of some other functions
> (MatCreateAIJ for MatCreateSeqAIJ, VecCreateMPI for VecCreateSeq), but not
> something like MatAIJSetColumnIndices, which surprised me a bit, because I
> have this information in my code.
>
> Is there a parallel counterpart of this function, and if there is none,
> what should it be replaced with?  I understand that I will have to provide
> the numbers of nonzeros in the off-process part (o_nnz), which is not a
> big issue, but how to provide the column information in the parallel
> version is not clear to me.  In a nutshell, I would need a hint on which
> of the above functions can remain the same in parallel, and which should
> be replaced, and with what?
>

The reason that we do not have the same function is that the parallel
storage format stores the diagonal and off-diagonal blocks separately, so
you cannot just copy in the indices. I think if you really need this, we
could add something that would take the data you have in parallel and
produce a nonzero structure, but it could not share your array.
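
For concreteness, here is a minimal sketch of what the parallel creation
calls could look like, keeping your Pet % naming; n_loc (the number of
locally owned rows) and the per-row counts d_nnz(:) and o_nnz(:) for the
diagonal and off-diagonal blocks are placeholder names for data your code
already has:

  ! Create a parallel AIJ matrix, preallocating the diagonal and
  ! off-diagonal blocks separately
  call MatCreateAIJ(PETSC_COMM_WORLD,                 &
                    n_loc, n_loc,                     & ! local rows/cols
                    PETSC_DETERMINE, PETSC_DETERMINE, & ! global sizes
                    PETSC_DEFAULT_INTEGER, d_nnz,     & ! diagonal block
                    PETSC_DEFAULT_INTEGER, o_nnz,     & ! off-diagonal block
                    Pet % petsc_A, Pet % petsc_err)

  ! The Seq vectors become MPI vectors with the same local size
  call VecCreateMPI(PETSC_COMM_WORLD, n_loc, PETSC_DETERMINE, &
                    Pet % petsc_x, Pet % petsc_err)

The rest of your sequence (MatSetValue, the assembly calls, KSPCreate, and
the solve) stays the same, except that PETSC_COMM_SELF becomes
PETSC_COMM_WORLD and the (i, k) indices you insert are global row/column
numbers.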

More generally, I would say that functions which intrusively touch the
internal data structure, like MatSeqAIJSetColumnIndices(), are not
preferred, but we do have them because some users want them. If I were
doing the kind of conversion that you are, I would instead use
MatPreallocator. This allows you to run your assembly loop twice: the first
pass just inserts values so that the matrix can be preallocated, and the
second pass inserts the values again into the preallocated matrix. The
insertion time is usually small compared to computing the values. I have
used this in a few places in my FEM code and it works efficiently.
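
A rough sketch of the two passes (the names prealloc and n_loc are
illustrative, and the Fortran binding of MatPreallocatorPreallocate should
be checked against your PETSc version):

  ! Pass 1: record the nonzero pattern in a MATPREALLOCATOR matrix
  call MatCreate(PETSC_COMM_WORLD, prealloc, Pet % petsc_err)
  call MatSetSizes(prealloc, n_loc, n_loc,              &
                   PETSC_DETERMINE, PETSC_DETERMINE, Pet % petsc_err)
  call MatSetType(prealloc, MATPREALLOCATOR, Pet % petsc_err)
  call MatSetUp(prealloc, Pet % petsc_err)

  ! ... your existing assembly loop, inserting into prealloc; the
  ! values are ignored here, only the pattern is recorded ...

  call MatAssemblyBegin(prealloc, MAT_FINAL_ASSEMBLY, Pet % petsc_err)
  call MatAssemblyEnd  (prealloc, MAT_FINAL_ASSEMBLY, Pet % petsc_err)

  ! Transfer the pattern to the real matrix, which must already have
  ! its type and sizes set, then discard the preallocator
  call MatPreallocatorPreallocate(prealloc, PETSC_TRUE,  &
                                  Pet % petsc_A, Pet % petsc_err)
  call MatDestroy(prealloc, Pet % petsc_err)

  ! Pass 2: run the same assembly loop again on Pet % petsc_A, now
  ! inserting the actual values, followed by the assembly calls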

Alternatively, you could just call MatXAIJSetPreallocation (
https://petsc.org/main/docs/manualpages/Mat/MatXAIJSetPreallocation.html )
with your preallocation data and let the column indices be calculated. That
is usually a very small overhead.
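
A rough sketch of that route (again with illustrative names; the second
argument is the block size, PETSC_ONE for a scalar equation per cell, and
the last two arguments are only used by the symmetric SBAIJ formats):

  call MatCreate(PETSC_COMM_WORLD, Pet % petsc_A, Pet % petsc_err)
  call MatSetSizes(Pet % petsc_A, n_loc, n_loc,         &
                   PETSC_DETERMINE, PETSC_DETERMINE, Pet % petsc_err)
  call MatSetType(Pet % petsc_A, MATAIJ, Pet % petsc_err)
  ! One call covers the whole AIJ family; d_nnz/o_nnz as before
  call MatXAIJSetPreallocation(Pet % petsc_A, PETSC_ONE, &
                               d_nnz, o_nnz,             &
                               PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, &
                               Pet % petsc_err)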

  Thanks,

     Matt


>     Cheers,
>
>     Bojan
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

