[petsc-users] Migrating to parallel PETSc
Matthew Knepley
knepley at gmail.com
Wed Feb 16 07:31:20 CST 2022
On Wed, Feb 16, 2022 at 8:18 AM Bojan Niceno <bojan.niceno.scientist at gmail.com> wrote:
> Dear Matthew and Barry,
>
> I am going through the tutorial examples and have noticed that calls to
> MatMPIAIJSetPreallocation and MatSeqAIJSetPreallocation always come one
> after the other, usually in that order. Why would that be? Is it perhaps
> so that the code can run on systems without MPI, where the call to
> MatMPIAIJSetPreallocation is only a dummy?
>
> I was checking the manual pages and found no information about this
> pairing.
>
This is because we are too lazy or time-constrained to convert all examples
to the new style using
https://petsc.org/main/docs/manualpages/Mat/MatXAIJSetPreallocation.html
More broadly, PETSc is constructed like Objective-C in that method bindings
can change at runtime, and calling a method on an object of a different type
is simply ignored. So if I have a SeqAIJ matrix, only the
MatSeqAIJSetPreallocation call takes effect and the
MatMPIAIJSetPreallocation call is ignored. This way you can put in
customization for many types without lots of checks to see what type the
object actually has.
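For instance, a minimal sketch of the pattern in Fortran (n_local and nz
are placeholder names, not from your code; this assumes the modern PETSc
Fortran modules):

  #include <petsc/finclude/petscmat.h>
        use petscmat
        Mat            :: A
        PetscErrorCode :: ierr
        PetscInt       :: n_local, nz

        call MatCreate(PETSC_COMM_WORLD, A, ierr)
        call MatSetSizes(A, n_local, n_local, &
                         PETSC_DETERMINE, PETSC_DETERMINE, ierr)
        call MatSetFromOptions(A, ierr) ! type (MATSEQAIJ or MATMPIAIJ) is fixed here
        ! Both calls are safe: whichever matches the actual type takes
        ! effect, and the other is silently ignored
        call MatSeqAIJSetPreallocation(A, nz, PETSC_NULL_INTEGER, ierr)
        call MatMPIAIJSetPreallocation(A, nz, PETSC_NULL_INTEGER, &
                                       nz, PETSC_NULL_INTEGER, ierr)

With MatXAIJSetPreallocation, the two calls collapse into a single one that
covers all AIJ variants.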
Thanks,
Matt
> Kind regards,
>
> Bojan
>
>
>
> On Wed, Feb 16, 2022 at 7:30 AM Bojan Niceno <bojan.niceno.scientist at gmail.com> wrote:
>
>> Thank you guys for your suggestions; I will have a look at the functions
>> you suggested.
>>
>> Have a great day,
>>
>> Bojan
>>
>> On Tue, Feb 15, 2022 at 11:29 PM Barry Smith <bsmith at petsc.dev> wrote:
>>
>>>
>>> MatCreateMPIAIJWithArrays() and MatUpdateMPIAIJWithArrays() may be
>>> suitable for your use case.
>>>
>>> This should also be much more efficient in moving the matrix from your
>>> code to PETSc's format.
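>>>
>>> A minimal sketch in Fortran (the CSR arrays row_ptr, col_ind, val and
>>> the local size n_loc are placeholder names; note that PETSc expects
>>> zero-based indices in these arrays):
>>>
>>> ! Create the parallel matrix directly from this rank's CSR arrays
>>> call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, n_loc, n_loc, &
>>>                                PETSC_DETERMINE, PETSC_DETERMINE, &
>>>                                row_ptr, col_ind, val, &
>>>                                Pet % petsc_A, Pet % petsc_err)
>>>
>>> ! On later time steps, refresh the values without rebuilding the structure
>>> call MatUpdateMPIAIJWithArrays(Pet % petsc_A, n_loc, n_loc, &
>>>                                PETSC_DETERMINE, PETSC_DETERMINE, &
>>>                                row_ptr, col_ind, val, Pet % petsc_err)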
>>>
>>> Barry
>>>
>>>
>>> On Feb 15, 2022, at 4:13 PM, Bojan Niceno <bojan.niceno.scientist at gmail.com> wrote:
>>>
>>> Dear PETSc users,
>>>
>>>
>>> I have an in-house computational fluid dynamics (CFD) solver, written in
>>> Fortran 2008 and parallelized with MPI, with its own home-grown suite of
>>> linear solvers. The code is unstructured, performs domain decomposition
>>> with METIS, and all the communication buffers, that is, the connectivity
>>> between processors, have been properly worked out.
>>>
>>> A couple of weeks back, I decided to try out the PETSc suite of
>>> solvers. After some initial setbacks, I managed to compile my code with
>>> PETSc and have the sequential version working fine :-)
>>>
>>> I am essentially using the following PETSc routines to get the code to
>>> solve linear systems with PETSc:
>>>
>>> I set up the working space as follows:
>>>
>>> call PetscInitialize(PETSC_NULL_CHARACTER, Pet % petsc_err)
>>> call MatCreateSeqAIJ(PETSC_COMM_SELF, ...
>>> call MatSeqAIJSetColumnIndices(...
>>> call VecCreateSeq(PETSC_COMM_SELF, ... ! Create Vector x
>>> call VecCreateSeq(PETSC_COMM_SELF, ... ! Create Vector b
>>> call KSPCreate(PETSC_COMM_SELF, ...
>>>
>>> Then in order to solve a system, I do:
>>>
>>> call MatSetValue(Pet % petsc_A,      ! Inside a loop through matrix entries
>>>                  i-PETSC_ONE, k-PETSC_ONE, ...
>>> call MatAssemblyBegin(Pet % petsc_A, MAT_FINAL_ASSEMBLY, Pet % petsc_err)
>>> call MatAssemblyEnd  (Pet % petsc_A, MAT_FINAL_ASSEMBLY, Pet % petsc_err)
>>>
>>> call VecSetValue(Pet % petsc_x, ... ! Fill up x
>>> call VecSetValue(Pet % petsc_b, ... ! Fill up b
>>>
>>> call KSPSetType(Pet % petsc_ksp, ... ! Set solver
>>> call KSPGetPC(Pet % petsc_ksp, ... ! get preconditioner context
>>> call PCSetType(Pet % petsc_pc, ... ! Set preconditioner
>>>
>>> call KSPSetFromOptions(Pet % petsc_ksp, Pet % petsc_err)
>>>
>>> call KSPSetUp (Pet % petsc_ksp, Pet % petsc_err)
>>>
>>>
>>> ! Finally solve
>>> call KSPSolve(Pet % petsc_ksp, ...
>>>
>>> Once this was up and running, I thought that in order to have the
>>> parallel version I would merely have to replace the "Seq" versions of
>>> the above functions with their parallel counterparts. I was expecting to
>>> find a parallel counterpart of MatSeqAIJSetColumnIndices, but it doesn't
>>> seem to exist. I have found non-seq versions of some other functions
>>> (MatCreateAIJ, VecCreateMPI), but nothing like MatAIJSetColumnIndices,
>>> which surprised me a bit, because I have this information in my code.
>>>
>>> Is there a parallel counterpart of this function, and if there is none,
>>> what should it be replaced with? I understand that I will have to
>>> provide the numbers of non-zeros in the off-diagonal part (o_nnz), which
>>> is not a big issue, but how to provide the column information for the
>>> parallel version is not clear to me. In a nutshell, I would need a hint
>>> on which of the above functions can remain the same in parallel, and
>>> which should be replaced, and with what?
>>>
>>> Cheers,
>>>
>>> Bojan
>>>
>>>
>>>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/