[petsc-users] What does PCASMSetOverlap do?

Zhuo Chen chenzhuotj at gmail.com
Wed Apr 13 09:18:52 CDT 2022


Thank you, Pierre!

On Wed, Apr 13, 2022 at 10:05 PM Pierre Jolivet <pierre at joliv.et> wrote:

> You can also use the undocumented option -pc_asm_print_subdomains which
> will, as Matt told you, show you that it is exactly the same algorithm.
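>
> For example, a run along these lines (the executable name and the other
> solver options are just placeholders) prints the index sets that make up
> each subdomain:
>
>   mpiexec -n 4 ./your_app -pc_type asm -pc_asm_overlap 1 \
>     -pc_asm_print_subdomains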
>
> Thanks,
> Pierre
>
> On 13 Apr 2022, at 3:58 PM, Zhuo Chen <chenzhuotj at gmail.com> wrote:
>
> Thank you, Matt! I will do that.
>
> On Wed, Apr 13, 2022 at 9:55 PM Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Wed, Apr 13, 2022 at 9:53 AM Zhuo Chen <chenzhuotj at gmail.com> wrote:
>>
>>> Dear Pierre,
>>>
>>> Thank you! I looked at the webpage you sent me, and I think it does not
>>> describe the situation I am talking about.
>>>
>>> I think I need to attach a figure for illustration. This figure is
>>> Figure 14.5 of "Iterative Methods for Sparse Linear Systems" by Saad.
>>> <domaindecompostion.png>
>>>
>>> If I divide the domain into these three subdomains, as you can see, the
>>> middle block has two interfaces. In matrix form, its rows are not
>>> contiguous, i.e., they are distributed across different processors. If
>>> ASM only expands in the contiguous direction, the domain decomposition
>>> becomes ineffective, I guess.
>>>
>>
>> No, we get exactly this picture. Saad is talking about exactly the
>> algorithm we use.
>>
>> Maybe you should just look at the subdomains being produced: -mat_view
>> draw -draw_pause 3
>>
>>    Matt
>>
>>
>>> On Wed, Apr 13, 2022 at 9:36 PM Pierre Jolivet <pierre at joliv.et> wrote:
>>>
>>>>
>>>>
>>>> On 13 Apr 2022, at 3:30 PM, Zhuo Chen <chenzhuotj at gmail.com> wrote:
>>>>
>>>> Dear Matthew and Mark,
>>>>
>>>> Thank you very much for the reply! Much appreciated!
>>>>
>>>> The question was about a 1D problem. I think I should say core 1 has
>>>> rows 1:32 instead of 1:32, 1:32, as the latter might be confusing.
>>>>
>>>> So the overlap is extended in both directions for the middle processor,
>>>> but only in the increasing direction for the first processor and the
>>>> decreasing direction for the last processor. In 1D this makes sense, as
>>>> the domain is contiguous. However, in 2D domain decomposition with
>>>> spatial overlaps, this would not work, as one subdomain can have several
>>>> neighboring subdomains. Mark mentioned generalized ASM; is that the
>>>> direction I should look into?
>>>>
>>>>
>>>> What is it that you want to do exactly?
>>>> If you are using a standard discretisation kernel, e.g., piecewise
>>>> linear finite elements, MatIncreaseOverlap(), called by PCASM, will
>>>> generate the overlap algebraically, which is equivalent to the overlap
>>>> you would have gotten geometrically.
>>>> If you know that “geometric” overlap (or want to use a custom
>>>> definition of overlap), you could use
>>>> https://petsc.org/release/docs/manualpages/PC/PCASMSetLocalSubdomains.html
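>>>>
>>>> A minimal sketch of that second route, assuming a KSP ksp already set
>>>> up and one overlapping index set (is_ov) and one non-overlapping index
>>>> set (is_loc) built by you on each process (names are placeholders;
>>>> error checking omitted):
>>>>
>>>>   PC pc;
>>>>   KSPGetPC(ksp, &pc);
>>>>   PCSetType(pc, PCASM);
>>>>   /* one subdomain per process; is_ov carries the custom overlap,
>>>>      is_loc the non-overlapping part */
>>>>   PCASMSetLocalSubdomains(pc, 1, &is_ov, &is_loc);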
>>>>
>>>> Thanks,
>>>> Pierre
>>>>
>>>> Best regards.
>>>>
>>>>
>>>> On Wed, Apr 13, 2022 at 9:14 PM Matthew Knepley <knepley at gmail.com>
>>>> wrote:
>>>>
>>>>> On Wed, Apr 13, 2022 at 9:11 AM Mark Adams <mfadams at lbl.gov> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Apr 13, 2022 at 8:56 AM Matthew Knepley <knepley at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> On Wed, Apr 13, 2022 at 6:42 AM Mark Adams <mfadams at lbl.gov> wrote:
>>>>>>>
>>>>>>>> No, without overlap you have, let's say:
>>>>>>>> core 1:   1:32, 1:32
>>>>>>>> core 2:   33:64,  33:64
>>>>>>>>
>>>>>>>> Overlap will increase the size of each domain so you get:
>>>>>>>> core 1:   1:33, 1:33
>>>>>>>> core 2:   32:65,  32:65
>>>>>>>>
>>>>>>>
>>>>>>> I do not think this is correct. Here is the algorithm. Imagine the
>>>>>>> matrix is a large graph. When you divide rows, you
>>>>>>> can think of that as dividing the vertices into sets. If overlap =
>>>>>>> 1, it means start with my vertex set, and add all vertices
>>>>>>> that are just 1 edge away from my set.
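>>>>>>>
>>>>>>> One way to see that expansion directly is with MatIncreaseOverlap()
>>>>>>> on an index set (a rough sketch, assuming A is the assembled MPI
>>>>>>> matrix; error checking omitted):
>>>>>>>
>>>>>>>   IS is;
>>>>>>>   PetscInt rstart, rend;
>>>>>>>   MatGetOwnershipRange(A, &rstart, &rend);
>>>>>>>   /* start from the locally owned rows ... */
>>>>>>>   ISCreateStride(PETSC_COMM_SELF, rend - rstart, rstart, 1, &is);
>>>>>>>   /* ... and add every row reachable through 1 edge of the matrix graph */
>>>>>>>   MatIncreaseOverlap(A, 1, &is, 1);
>>>>>>>   ISView(is, PETSC_VIEWER_STDOUT_SELF);
>>>>>>>   ISDestroy(&is);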
>>>>>>>
>>>>>>
>>>>>> I think that is what was said. You increase each subdomain by one row
>>>>>> of vertices.
>>>>>> So in 1D, vertices 32 and 33 are in both subdomains and you have an
>>>>>> overlap region of size 2.
>>>>>> They want an overlap region of size 1, vertex 33.
>>>>>>
>>>>>
>>>>> This is true, but I did not think they specified a 1D mesh.
>>>>>
>>>>>   Matt
>>>>>
>>>>>
>>>>>>
>>>>>>>   Thanks,
>>>>>>>
>>>>>>>      Matt
>>>>>>>
>>>>>>>
>>>>>>>> What you want is reasonable but requires PETSc to pick a separator
>>>>>>>> set, which is not well defined.
>>>>>>>> You need to build that yourself with PCGASM (I think) if you want
>>>>>>>> this.
>>>>>>>>
>>>>>>>> Mark
>>>>>>>>
>>>>>>>> On Wed, Apr 13, 2022 at 3:17 AM Zhuo Chen <chenzhuotj at gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I hope that everything is going well with everybody.
>>>>>>>>>
>>>>>>>>> I have a question about PCASMSetOverlap. If I have a 128x128
>>>>>>>>> matrix and I use 4 cores with overlap=1, does it mean that from core 1
>>>>>>>>> to core 4 the block ranges are (starting from 1):
>>>>>>>>>
>>>>>>>>> core 1:   1:33, 1:33
>>>>>>>>> core 2:   33:65,  33:65
>>>>>>>>> core 3:   65:97,  65:97
>>>>>>>>> core 4:   95:128, 95:128
>>>>>>>>>
>>>>>>>>> Or is it something else? I cannot tell from the manual.
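>>>>>>>>>
>>>>>>>>> For concreteness, the setup I have in mind is roughly this
>>>>>>>>> (simplified sketch; A, b, x are the matrix and vectors, error
>>>>>>>>> checking omitted):
>>>>>>>>>
>>>>>>>>>   KSP ksp;
>>>>>>>>>   PC  pc;
>>>>>>>>>   KSPCreate(PETSC_COMM_WORLD, &ksp);
>>>>>>>>>   KSPSetOperators(ksp, A, A);   /* A is the 128x128 matrix */
>>>>>>>>>   KSPGetPC(ksp, &pc);
>>>>>>>>>   PCSetType(pc, PCASM);
>>>>>>>>>   PCASMSetOverlap(pc, 1);       /* the overlap in question */
>>>>>>>>>   KSPSetFromOptions(ksp);
>>>>>>>>>   KSPSolve(ksp, b, x);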
>>>>>>>>>
>>>>>>>>> Many thanks in advance.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Zhuo Chen
>>>>>>>>> Department of Astronomy
>>>>>>>>> Tsinghua University
>>>>>>>>> Beijing, China 100084
>>>>>>>>> https://czlovemath123.github.io/
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> What most experimenters take for granted before they begin their
>>>>>>> experiments is infinitely more interesting than any results to which their
>>>>>>> experiments lead.
>>>>>>> -- Norbert Wiener
>>>>>>>
>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> What most experimenters take for granted before they begin their
>>>>> experiments is infinitely more interesting than any results to which their
>>>>> experiments lead.
>>>>> -- Norbert Wiener
>>>>>
>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>
>>>>
>>>>
>>>> --
>>>> Zhuo Chen
>>>> Department of Astronomy
>>>> Tsinghua University
>>>> Beijing, China 100084
>>>> https://czlovemath123.github.io/
>>>>
>>>>
>>>>
>>>
>>> --
>>> Zhuo Chen
>>> Department of Astronomy
>>> Tsinghua University
>>> Beijing, China 100084
>>> https://czlovemath123.github.io/
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>
>
> --
> Zhuo Chen
> Department of Astronomy
> Tsinghua University
> Beijing, China 100084
> https://czlovemath123.github.io/
>
>
>

-- 
Zhuo Chen
Department of Astronomy
Tsinghua University
Beijing, China 100084
https://czlovemath123.github.io/