[petsc-users] Question about using MatCreateAIJ

Junchao Zhang junchao.zhang at gmail.com
Sun Jun 18 08:01:21 CDT 2023


Yeah, see a C example at
https://gitlab.com/petsc/petsc/-/blob/main/src/mat/tests/ex259.c

I guess you can code with this outline in petsc-3.19:

MatCreate
MatSetSizes
MatSetFromOptions

iteration-loop start

        MatResetPreallocation(A,...)

        Fill Matrix A with MatSetValues

        MatAssemblyBegin/End(A,MAT_FINAL_ASSEMBLY,IERR)

 iteration-loop end
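
A rough C sketch of that outline (untested; the MatSetUp() call, the local
size n, and the loop count are placeholders I added for illustration, and the
actual fill with MatSetValues depends on your problem):

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    PetscInt n = 100; /* hypothetical local size */

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, n, n, PETSC_DETERMINE, PETSC_DETERMINE));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A)); /* not in the outline above; makes the matrix usable before the first reset */

    for (PetscInt it = 0; it < 10; ++it) {
      PetscCall(MatResetPreallocation(A));
      /* fill A here with MatSetValues(); the nonzero structure may change each iteration */
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    }

    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }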

--Junchao Zhang


On Sun, Jun 18, 2023 at 7:37 AM Matthew Knepley <knepley at gmail.com> wrote:

> On Sun, Jun 18, 2023 at 4:26 AM Karsten Lettmann <
> karsten.lettmann at uni-oldenburg.de> wrote:
>
>> Dear Matthew,
>>
>>
>> thanks for your help.
>>
>>
>> 1) I tested your suggestion to pass NULL as arguments for
>> MatXAIJSetPreallocation.
>>
>> So the old version was:
>>
>> CALL
>> MatCreateMPIAIJ(MPI_GROUP,N_local,N_local,N_global,N_global,0,DNNZ,0,ONNZ,A,IERR)
>>
>> And after your suggestion it is now:
>>
>>   CALL MatCreate(MPI_GROUP,A,IERR)
>>   CALL MatSetType(A,MATAIJ,IERR)
>>   CALL MatSetSizes(A,N_local,N_local,N_global,N_global,IERR)
>>   CALL
>> MatXAIJSetPreallocation(A,1,DNNZ,ONNZ,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,IERR)
>>
>>
>>   Setting block-size = 1.
>>
>>
>> 2) Concerning the error with MatResetPreallocation:
>>
>> We have an iterative loop in which the matrix A is filled very often,
>> each time with a different non-zero structure.
>> Further, I read in the manual pages that, for performance reasons, one
>> should preallocate enough space, as operations such as MatSetValues might
>> be time-consuming due to additional on-demand allocations.
>>
>>
>> So, in principle, I did the following:
>>
>>
>>     Set matrix A the first time with preallocation
>>
>>
>>     iteration-loop start
>>
>>         MatResetPreallocation(A,...)
>>
>>         MatZeroEntries (A)
>>
>>         Fill Matrix A
>>
>>         MatAssemblyXX(A,MAT_FINAL_ASSEMBLY,IERR)
>>
>>
>>     iteration-loop end
>>
>>
>> With these settings, the code ran with 2 CPUs.
>> But with 1 CPU I got an error, which was in MatResetPreallocation.
>> I could not understand why the above code works with 2 CPUs but not with
>> 1 CPU.
>>
>> At the moment, I believe the reason for this error is a pre-check for a
>> valid and present matrix A that is done in SeqAIJ but not in MPIAIJ.
>>
>> (See the code of these two routines at:
>>
>> https://petsc.org/release/src/mat/impls/aij/seq/aij.c.html#MatResetPreallocation_SeqAIJ
>>
>> https://petsc.org/release/src/mat/impls/aij/mpi/mpiaij.c.html#MatResetPreallocation_MPIAIJ
>> )
>>
>>
>>
>>
>> So, it seems to me at the moment that the first MatResetPreallocation
>> (when the iteration loop is entered for the first time) is done on a
>> not-yet-assembled matrix A.
>> So for one CPU I got an error, while 2 CPUs seem to have been more
>> tolerant.
>> (I'm not sure if this interpretation is correct.)
>>
>>
>> So, I changed the code so that I check the assembly status before
>> resetting the preallocation.
>>
>>
>> Using this code:
>>
>>     CALL MatAssembled(A,A_assembled,ierr)
>>     IF (A_assembled .eqv. PETSC_TRUE) then
>>         CALL MatResetPreallocation(A,ierr)
>>     ENDIF
>>
>> it then worked for 1 and 2 CPUs.
>>
>>
>>
>> 3) There was another finding, which I hope is correct.
>>
>>
>> Actually, I did this MatResetPreallocation to get better performance
>> when filling the matrix A later, as suggested on the manual pages.
>>
>> However, I found (if I did nothing wrong) that this MatResetPreallocation
>> was much more time-consuming than the additional (and unwanted) allocations
>> done while filling the matrix.
>>
>> So, in the end, my code seems to be faster when *not* doing this in the
>> iteration loop:
>>
>>     CALL MatAssembled(A,A_assembled,ierr)
>>     IF (A_assembled .eqv. PETSC_TRUE) then
>>         CALL MatResetPreallocation(A,ierr)
>>     ENDIF
>>
>>
>> As I told you, I'm a beginner with PETSc, and I do not know if I have
>> done it correctly?
>>
> I think this may be correct now. We have rewritten Mat so that inserting
> values is much more efficient, and
> can be done online, so preallocation is not really needed anymore. It is
> possible that this default mechanism
> is faster than the old preallocation.
>
> I would try the code without preallocation, using the latest release, and
> see how it performs.
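>
> Something roughly like this (just a sketch; the global size n and the single
> inserted value are placeholders for your actual fill loop):
>
>   Mat         A;
>   PetscInt    n = 100, row = 0, col = 0;
>   PetscScalar v = 1.0;
>
>   MatCreate(PETSC_COMM_WORLD, &A);
>   MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
>   MatSetFromOptions(A);
>   MatSetUp(A);                              /* no preallocation at all */
>   MatSetValues(A, 1, &row, 1, &col, &v, INSERT_VALUES);
>   MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
>   MatDestroy(&A);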
>
>   Thanks,
>
>     Matt
>
>> Best, Karsten
>>
>> The arguments are a combination of the AIJ and SBAIJ arguments. You can
>> just pass NULL for the SBAIJ args.
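>>
>> For example (just a sketch; dnnz and onnz are hypothetical arrays of
>> per-row nonzero counts, with block size 1):
>>
>>   MatXAIJSetPreallocation(A, 1, dnnz, onnz, NULL, NULL);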
>>
>>> Then I ran into issues with MatResetPreallocation that I do not
>>> understand.
>>>
>> Can you send the error?
>>
>>   Thanks,
>>
>>     Matt
>>
>>> I want this because we have an iterative procedure where the matrix
>>> A_wave and its non-zero structure change very often.
>>>
>>> I am trying to find the reason for my problem.
>>>
>>>
>>>
>>> Thank you very much for your answer; it helped me to understand things
>>> a bit.
>>>
>>>
>>> I wish you all the best, Karsten
>>>
>>>
>>>
>>> On 15.06.23 at 16:51, Matthew Knepley wrote:
>>>
>>> On Thu, Jun 15, 2023 at 8:32 AM Karsten Lettmann <
>>> karsten.lettmann at uni-oldenburg.de> wrote:
>>>
>>>> Dear all,
>>>>
>>>>
>>>> I'm quite new to PETSc, so I hope the following questions are not too
>>>> stupid.
>>>>
>>>>
>>>> 1) We have a (Fortran) code that we want to update from an older PETSc
>>>> version (petsc-2.3.3-p16) to a newer one.
>>>>
>>>> Inside the old code, for creating matrices A, there are function calls
>>>> of the form:
>>>> MatCreateMPIAIJ
>>>>
>>>> In the reference page for this old version it says:
>>>> When calling this routine with a single process communicator, a matrix
>>>> of type SEQAIJ is returned.
>>>>
>>>> So I assume the following behavior of this old routine:
>>>> - for N_proc == 1:
>>>>     a matrix of type SEQAIJ is returned.
>>>>
>>>> - for N_proc > 1:
>>>>     a matrix of type MPIAIJ is returned
>>>>
>>>>
>>>>
>>>> 2a) So, in the new code, we want to have a similar behavior.
>>>>
>>>> I found that this function is no longer present in the newer PETSc
>>>> versions.
>>>>
>>>> Instead, one might use: MatCreateAIJ(….)
>>>> ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ )
>>>>
>>>> If I understand the function's reference page correctly, then a similar
>>>> behavior should be expected:
>>>>
>>>> - for N_proc == 1:
>>>>     a matrix of type SEQAIJ is returned.
>>>>
>>>> - for N_proc > 1:
>>>>     a matrix of type MPIAIJ is returned
>>>>
>>>>
>>>> 2b) However, on the reference page, there is the note:
>>>>
>>>> It is recommended that one use the MatCreate(), MatSetType() and/or
>>>> MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this
>>>> routine directly.
>>>>
>>>> So, if I want the behavior above, it is recommended to code it like
>>>> this, isn't it:
>>>>
>>>> If (N_Proc == 1)
>>>>
>>>>      MatCreate(.. ,A ,...)
>>>>      MatSetType(…,A, MATSEQAIJ,..)
>>>>      MatSetSizes(…,A, ..)
>>>>      MatSeqAIJSetPreallocation(,...A,...)
>>>>
>>>> else
>>>>
>>>>      MatCreate(.. ,A ,...)
>>>>      MatSetType(…,A, MATMPIAIJ,..)
>>>>      MatSetSizes(…,A, ..)
>>>>      MatMPIAIJSetPreallocation(,...A,...)
>>>>
>>>
>>> You can use
>>>
>>>   MatCreate(comm, &A);
>>>   MatSetType(A, MATAIJ);
>>>   MatSetSizes(A, ...);
>>>   MatXAIJSetPreallocation(A, ...);
>>>
>>> We recommend this because we would like to get rid of the convenience
>>> functions that
>>> wrap up exactly this code.
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>>
>>>> end
>>>>
>>>>
>>>>
>>>> 3) So my questions are:
>>>>
>>>> - Is my present understanding correct?
>>>>
>>>> If  yes:
>>>>
>>>> - Why might using MatCreateAIJ(….) for my case not be helpful?
>>>>
>>>> - So, why is it recommended to use the way 2b) instead of this
>>>> MatCreateAIJ(….) ?
>>>>
>>>>
>>>> Best, Karsten
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> ICBM
>>>> Section: Physical Oceanography
>>>> Universitaet Oldenburg
>>>> Postfach 5634
>>>> D-26046 Oldenburg
>>>> Germany
>>>>
>>>> Tel:    +49 (0)441 798 4061
>>>> email:  karsten.lettmann at uni-oldenburg.de
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> <http://www.cse.buffalo.edu/~knepley/>
>>>

