[petsc-users] What does PCASMSetOverlap do?

Pierre Jolivet pierre at joliv.et
Thu Apr 14 05:44:26 CDT 2022



> On 14 Apr 2022, at 12:16 PM, Zhuo Chen <chenzhuotj at gmail.com> wrote:
> 
> Hi Matt and Pierre,
> 
> Thank you very much for your help. I indeed want to solve a 2D problem with geometric block domain decomposition, where the blocks are ordered along a Hilbert space-filling curve. Each block may have 32x32 cells.
> 
> I am a Fortran user and I apologize that I do not know how to use -pc_asm_print_subdomains.

You can just put this on your command line. For example,
$ cd ${PETSC_DIR}/src/ksp/ksp/tutorials && make ex2f && mpirun -n 2 ./ex2f -pc_type asm -pc_asm_print_subdomains
[…]
[0:2] Subdomain 0 with overlap:
0 1 2 3 4 
[1:2] Subdomain 0 with overlap:
5 6 7 8 
Norm of error  0.1205E-04 iterations     4

> Should I use call PetscOptionsGetBool(NULL, NULL, "-pc_asm_print_subdomains", .true., NULL, ierr)? It would be great if you could point me to the right tutorial.

You could also use the following line of code:
      call PetscOptionsInsertString(PETSC_NULL_OPTIONS,'-pc_asm_print_subdomains',ierr)
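Note that the string has to be in the options database before the KSP/PC processes its options, so a minimal sketch (assuming ksp, A, b, and x are set up as in the tutorials) would be:

      call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
      ! insert the option right away, before the options are processed
      call PetscOptionsInsertString(PETSC_NULL_OPTIONS,'-pc_asm_print_subdomains',ierr)
      call KSPSetOperators(ksp,A,A,ierr)
      call KSPSetFromOptions(ksp,ierr)   ! the option is picked up when the options are processed
      call KSPSolve(ksp,b,x,ierr)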

> After reading some example programs, I would like to know if it is possible to use
> 
> IS, allocatable, dimension(:) :: is1,is2
> integer :: nlocalblk
> 
> call PCASMGetLocalSubdomains(pc,nlocalblk,is1,is2,ierr)
> call ISView(is1,PETSC_VIEWER_STDOUT_SELF,ierr)
> call ISView(is2,PETSC_VIEWER_STDOUT_SELF,ierr)
> 
> to realize the function you have mentioned.

That is another way to do it, yes.
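For completeness, a rough sketch of that approach (untested, and the exact Fortran binding may vary slightly between PETSc versions; the PC has to be set up before the subdomains exist):

      ! assuming the usual "#include <petsc/finclude/petscksp.h>" and "use petscksp"
      PC             pc
      IS             is1(1), is2(1)
      PetscInt       nlocalblk
      PetscErrorCode ierr

      call KSPSetUp(ksp,ierr)                            ! subdomains are created during setup
      call KSPGetPC(ksp,pc,ierr)
      call PCASMGetLocalSubdomains(pc,nlocalblk,is1,is2,ierr)
      call ISView(is1(1),PETSC_VIEWER_STDOUT_SELF,ierr)  ! subdomain with overlap
      call ISView(is2(1),PETSC_VIEWER_STDOUT_SELF,ierr)  ! subdomain without overlap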
In fact, for debugging purposes, I’d advise using PCGASM instead of PCASM, because it prints slightly more coherent information.
For example, with PCASM, the log above is mislabeled: the printed index sets are not the subdomains with overlap, but the subdomains _without_ overlap.
Furthermore, if you increase -pc_asm_overlap, you’ll still get the same subdomains printed.
$ mpirun -n 2 ./ex2f -pc_type asm -pc_asm_overlap 4 -pc_asm_print_subdomains   
[0:2] Subdomain 0 with overlap:
0 1 2 3 4 
[1:2] Subdomain 0 with overlap:
5 6 7 8 
Norm of error  0.1192E-05 iterations     4

By contrast, with PCGASM, you get the proper subdomains, both with and without overlap.
$ mpirun -n 2 ./ex2f -pc_type gasm -pc_gasm_print_subdomains
Inner subdomain:
0 1 2 3 4 
Outer subdomain:
0 1 2 3 4 
Inner subdomain:
5 6 7 8 
Outer subdomain:
5 6 7 8 
$ mpirun -n 2 ./ex2f -pc_type gasm -pc_gasm_print_subdomains -pc_gasm_overlap 4
Inner subdomain:
0 1 2 3 4 
Outer subdomain:
0 1 2 3 4 5 6 7 8 
Inner subdomain:
5 6 7 8 
Outer subdomain:
0 1 2 3 4 5 6 7 8 

Thanks,
Pierre

> Thank you very much!
> 
> 
> 
> 
> On Wed, Apr 13, 2022 at 10:18 PM Zhuo Chen <chenzhuotj at gmail.com> wrote:
> Thank you, Pierre!
> 
> On Wed, Apr 13, 2022 at 10:05 PM Pierre Jolivet <pierre at joliv.et> wrote:
> You can also use the undocumented option -pc_asm_print_subdomains which will, as Matt told you, show you that it is exactly the same algorithm.
> 
> Thanks,
> Pierre
> 
>> On 13 Apr 2022, at 3:58 PM, Zhuo Chen <chenzhuotj at gmail.com> wrote:
>> 
>> Thank you, Matt! I will do that.
>> 
>> On Wed, Apr 13, 2022 at 9:55 PM Matthew Knepley <knepley at gmail.com> wrote:
>> On Wed, Apr 13, 2022 at 9:53 AM Zhuo Chen <chenzhuotj at gmail.com> wrote:
>> Dear Pierre,
>> 
>> Thank you! I looked at the webpage you sent me, and I think it does not describe the situation I am talking about.
>> 
>> I think I need to attach a figure for illustration. This figure is Figure 14.5 of "Iterative Methods for Sparse Linear Systems" by Saad.
>> <domaindecompostion.png>
>> 
>> If I divide the domain into these three subdomains, as you can see, the middle block has two interfaces. In matrix form, its rows are not contiguous, i.e., they are distributed across different processors. If ASM only expands in the contiguous direction, the domain decomposition becomes ineffective, I guess.
>> 
>> No, we get exactly this picture. Saad is talking about exactly the algorithm we use.
>> 
>> Maybe you should just look at the subdomains being produced, -mat_view draw -draw_pause 3
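>> For instance, with the ex2f tutorial from above, that could look like:
>> $ mpirun -n 4 ./ex2f -pc_type asm -mat_view draw -draw_pause 3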
>> 
>>    Matt
>>  
>> On Wed, Apr 13, 2022 at 9:36 PM Pierre Jolivet <pierre at joliv.et> wrote:
>> 
>> 
>>> On 13 Apr 2022, at 3:30 PM, Zhuo Chen <chenzhuotj at gmail.com> wrote:
>>> 
>>> Dear Matthew and Mark,
>>> 
>>> Thank you very much for the reply! Much appreciated!
>>> 
>>> The question was about a 1D problem. I think I should have said core 1 has rows 1:32 instead of 1:32, 1:32, as the latter might be confusing.
>>> 
>>> So the overlap is extended in both directions for the middle processor, but only toward the increasing direction for the first processor and the decreasing direction for the last processor. In 1D this makes sense, as the domain is contiguous. However, in 2D domain decomposition with spatial overlaps, this overlapping would not work, as one subdomain can have several neighboring subdomains. Mark mentioned generalized ASM; is that the right direction to look in?
>> 
>> What is it that you want to do exactly?
>> If you are using a standard discretisation kernel, e.g., piecewise linear finite elements, MatIncreaseOverlap(), called by PCASM, will generate the overlap algebraically, and it is equivalent to the overlap you would have obtained geometrically.
>> If you already know that “geometric” overlap (or want to use a custom definition of overlap), you could use https://petsc.org/release/docs/manualpages/PC/PCASMSetLocalSubdomains.html
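>> A rough sketch of that second route, assuming one subdomain per process (untested; idx_overlap and idx_inner are hypothetical index arrays in global, 0-based numbering):
>> 
>>       IS is(1), is_local(1)
>>       ! is(1) carries the overlapping subdomain, is_local(1) its nonoverlapping part
>>       call ISCreateGeneral(PETSC_COMM_SELF,n_ov,idx_overlap,PETSC_COPY_VALUES,is(1),ierr)
>>       call ISCreateGeneral(PETSC_COMM_SELF,n_in,idx_inner,PETSC_COPY_VALUES,is_local(1),ierr)
>>       call PCASMSetLocalSubdomains(pc,1,is,is_local,ierr)   ! before PCSetUp()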
>> 
>> Thanks,
>> Pierre
>> 
>>> Best regards.
>>> 
>>> 
>>> On Wed, Apr 13, 2022 at 9:14 PM Matthew Knepley <knepley at gmail.com> wrote:
>>> On Wed, Apr 13, 2022 at 9:11 AM Mark Adams <mfadams at lbl.gov> wrote:
>>> 
>>> 
>>> On Wed, Apr 13, 2022 at 8:56 AM Matthew Knepley <knepley at gmail.com> wrote:
>>> On Wed, Apr 13, 2022 at 6:42 AM Mark Adams <mfadams at lbl.gov> wrote:
>>> No, without overlap you have, let's say:
>>> core 1:   1:32, 1:32
>>> core 2:   33:64,  33:64
>>> 
>>> Overlap will increase the size of each domain so you get:
>>> core 1:   1:33, 1:33
>>> core 2:   32:65,  32:65
>>> 
>>> I do not think this is correct. Here is the algorithm. Imagine the matrix as a large graph. When you divide rows, you
>>> can think of that as dividing the vertices into sets. If overlap = 1, it means: start with my vertex set, and add all vertices
>>> that are just 1 edge away from my set.
>>> 
>>> I think that is what was said. You increase each subdomain by one row of vertices.
>>> So in 1D, vertices 32 and 33 are in both subdomains, and you have an overlap region of size 2.
>>> They want an overlap region of size 1, i.e., vertex 33.
>>> 
>>> This is true, but I did not think they specified a 1D mesh.
>>> 
>>>   Matt
>>>  
>>> 
>>>   Thanks,
>>> 
>>>      Matt
>>>  
>>> What you want is reasonable but requires PETSc to pick a separator set, which is not well defined.
>>> You need to build that yourself with gasm (I think) if you want this.
>>> 
>>> Mark
>>> 
>>> On Wed, Apr 13, 2022 at 3:17 AM Zhuo Chen <chenzhuotj at gmail.com> wrote:
>>> Hi,
>>> 
>>> I hope that everything is going well with everybody.
>>> 
>>> I have a question about PCASMSetOverlap. If I have a 128x128 matrix and use 4 cores with overlap=1, does it mean that from core 1 to core 4, the block ranges are (starting from 1):
>>> 
>>> core 1:   1:33, 1:33
>>> core 2:   33:65,  33:65
>>> core 3:   65:97,  65:97
>>> core 4:   95:128, 95:128
>>> 
>>> Or is it something else? I cannot tell from the manual.
>>> 
>>> Many thanks in advance.
>>> 
>>> 
>>> 
>>> -- 
>>> Zhuo Chen
>>> Department of Astronomy
>>> Tsinghua University
>>> Beijing, China 100084
>>> https://czlovemath123.github.io/
>>> 
>>> 
>>> -- 
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>> 
>>> https://www.cse.buffalo.edu/~knepley/
