[petsc-users] ASM Index Sets [Repost with Correction]
Matthew Knepley
knepley at gmail.com
Thu Mar 24 13:07:06 CDT 2016
On Thu, Mar 24, 2016 at 10:31 AM, Justin Droba (JSC-EG3) [JETS] <
justin.c.droba at nasa.gov> wrote:
> Matt and Barry,
>
> Thank you for your help!
>
> On 3/24/16 10:01 AM, Matthew Knepley wrote:
>
> On Thu, Mar 24, 2016 at 9:48 AM, Justin Droba (JSC-EG3) [JETS] <
> justin.c.droba at nasa.gov> wrote:
>
>> Barry,
>>
>> Thank you for your response.
>>
>> I'm running a larger problem than the simple example in my initial
>> message. But here is what happens:
>> (1) One block for each processor, with commands
>>
>> PCASMSetLocalSubdomains(preconditioner,local_sdom,is,NULL);
>> PCASMSetOverlap(preconditioner,1);
>>
>> where is = overlapping domain assigned to this processor, then it works
>> nicely.
>>
>
> Then you must not be doing RASM, and the second statement has no effect.
>
>
> Well, this would certainly explain why this case works.
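
A minimal sketch of the case (1) setup described above, assuming the KSP
handle and the subdomain index data (n_overlap, overlap_idx) are supplied
by the application; those names are hypothetical, not from the original
message:

    #include <petscksp.h>

    /* Case (1): one subdomain per rank whose IS already contains the
       overlap, so the is_local argument is NULL. overlap_idx holds the
       0-based global indices of this rank's overlapping subdomain. */
    PetscErrorCode SetupASMOneBlockPerRank(KSP ksp, PetscInt n_overlap,
                                           const PetscInt overlap_idx[])
    {
      PetscErrorCode ierr;
      PC             pc;
      IS             is_overlap;

      PetscFunctionBeginUser;
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
      ierr = ISCreateGeneral(PETSC_COMM_SELF, n_overlap, overlap_idx,
                             PETSC_COPY_VALUES, &is_overlap);CHKERRQ(ierr);
      ierr = PCASMSetLocalSubdomains(pc, 1, &is_overlap, NULL);CHKERRQ(ierr);
      /* the PC keeps its own reference to the IS, so drop ours */
      ierr = ISDestroy(&is_overlap);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }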
>
>
>
>> (2) One block for each processor, with commands
>>
>> PCASMSetLocalSubdomains(preconditioner,local_sdom,is,is_local);
>> PCASMSetOverlap(preconditioner,0);
>>
>> where is = overlapping domain assigned to this processor and is_local the
>> disjoint domain assigned to this processor, I get the incorrect answer. (I
>> can remove that second statement and I get the same answer.)
>>
>
> How do you know what the correct answer is? The second statement has no
> effect.
>
> The system solves for the weights of a radial basis function interpolation.
> I use these weights to compute the interpolated values, and in this case I
> am way off.
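
Continuing the hypothetical sketch from case (1) above, case (2) would
differ only in also building the disjoint ("owned") index set and passing
it as the fourth argument; n_owned and owned_idx are again assumed,
application-supplied names, and owned_idx would normally be a subset of
overlap_idx:

    ierr = ISCreateGeneral(PETSC_COMM_SELF, n_owned, owned_idx,
                           PETSC_COPY_VALUES, &is_owned);CHKERRQ(ierr);
    ierr = PCASMSetLocalSubdomains(pc, 1, &is_overlap, &is_owned);CHKERRQ(ierr);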
>
> I will look through my domains again. I'll also see if I can get your
> previous suggestion of using -pc_asm_blocks to work. I can just set that
> in-code with PetscOptionsSetValue("-pc_asm_blocks","25"), correct?
>
Yes.
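
A minimal sketch of setting that option in code; the two-argument call
below matches the form quoted above (PETSc 3.7 and later add a leading
PetscOptions argument, for which NULL selects the global options
database), and it must come before the options are read:

    ierr = PetscOptionsSetValue("-pc_asm_blocks","25");CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* option is picked up here */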
>
>
> (3) With more than one block per processor, I get a seg fault with either
>> set of commands above.
>>
>
> Are you providing an array of length local_sdom?
>
> local_sdom should be the number of domains for the processor, right? If I
> have 25 blocks and 5 processors (distributed evenly), it should be 5. For
> one block per processor, it's 1.
>
Yes.
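
To make the array-length point concrete, a sketch of the
several-blocks-per-rank case under the 25-blocks/5-ranks split discussed
above; how the five index lists per rank (sub_n[], sub_idx[]) are
generated is application-specific and left as an assumption, and the
index sets are assumed to already contain whatever overlap is wanted:

    #include <petscksp.h>

    #define NSUB 5  /* local_sdom: subdomains on this rank (25 blocks / 5 ranks) */

    /* The second argument of PCASMSetLocalSubdomains must match the length
       of the IS arrays exactly; a mismatch makes PETSc walk off the end of
       them, which is one way to get a seg fault. */
    PetscErrorCode SetupASMMultipleBlocks(KSP ksp, const PetscInt sub_n[],
                                          const PetscInt *sub_idx[])
    {
      PetscErrorCode ierr;
      PC             pc;
      IS             is[NSUB];
      PetscInt       i;

      PetscFunctionBeginUser;
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
      for (i = 0; i < NSUB; i++) {
        ierr = ISCreateGeneral(PETSC_COMM_SELF, sub_n[i], sub_idx[i],
                               PETSC_COPY_VALUES, &is[i]);CHKERRQ(ierr);
      }
      ierr = PCASMSetLocalSubdomains(pc, NSUB, is, NULL);CHKERRQ(ierr);
      for (i = 0; i < NSUB; i++) {ierr = ISDestroy(&is[i]);CHKERRQ(ierr);}
      PetscFunctionReturn(0);
    }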
You could also look at what Rio Yokota did:
https://github.com/barbagroup/petrbf
Matt
> Best regards,
> Justin
>
>
> Matt
>
>
>> Best regards,
>> Justin
>>
>>
>> On 3/23/16 10:29 AM, Barry Smith wrote:
>>
>> Seems ok. You'll need to be more specific about "isn't quite working."
>>
>> Barry
>>
>>
>> On Mar 23, 2016, at 9:37 AM, Justin Droba (JSC-EG3) [JETS] <justin.c.droba at nasa.gov> wrote:
>>
>> Dear all,
>>
>> Very sorry to flood the mailing list, one final post because the image will not come through if you're on digest or using plain text.
>>
>> Thank you for maintaining this wonderful software and this user mailing list!
>>
>> I am attempting to use an additive Schwarz preconditioner with multiple blocks per processor. I've been able to get it to work with one block per processor but have not been successful with multiple. I believe I am misunderstanding the construction of the index sets for this case.
>>
>> Let's consider a simple example: 16 nodes on a 4x4 square grid, divided into 4 blocks:
>>
>> https://farm2.staticflickr.com/1633/25912726941_428d61ae87_o.png
>>
>>
>> I want to put Omega_1 and Omega_2 on processor 1 and the other two on the second processor. Do I make my index sets on processor 1
>>
>> IS = { {1,2,3,5,6,7,9,10,11}, {2,3,4,6,7,8,10,11,12} }
>> IS_LOCAL = { {1,2,5,6}, {3,4,7,8} }
>>
>> and similarly for the other processor? I tried this and it isn't quite working out. Do I misunderstand something?
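
A sketch of how the rank-0 sets in this example might be built, assuming
the figure's 1-based node numbers correspond, in the same natural order,
to PETSc's 0-based global row numbering, and that the PC handle already
exists:

    /* Rank 0 holds Omega_1 and Omega_2: two overlapping 3x3 blocks and
       their disjoint 2x2 "owned" parts, shifted down by one because
       PETSc index sets use 0-based global indices. */
    const PetscInt ovl0[] = {0,1,2,4,5,6,8,9,10};   /* {1,2,3,5,6,7,9,10,11} - 1 */
    const PetscInt ovl1[] = {1,2,3,5,6,7,9,10,11};  /* {2,3,4,6,7,8,10,11,12} - 1 */
    const PetscInt own0[] = {0,1,4,5};              /* {1,2,5,6} - 1 */
    const PetscInt own1[] = {2,3,6,7};              /* {3,4,7,8} - 1 */
    IS             is[2], is_local[2];
    PetscErrorCode ierr;

    ierr = ISCreateGeneral(PETSC_COMM_SELF, 9, ovl0, PETSC_COPY_VALUES, &is[0]);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_SELF, 9, ovl1, PETSC_COPY_VALUES, &is[1]);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_SELF, 4, own0, PETSC_COPY_VALUES, &is_local[0]);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_SELF, 4, own1, PETSC_COPY_VALUES, &is_local[1]);CHKERRQ(ierr);
    ierr = PCASMSetLocalSubdomains(pc, 2, is, is_local);CHKERRQ(ierr);  /* 2 subdomains on this rank */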
>>
>> Thank you!
>>
>> Best regards,
>> Justin
>>
>> --
>> ---------------------------------------------
>> Justin Droba, PhD
>> Applied Aeroscience and CFD Branch (EG3)
>> NASA Lyndon B. Johnson Space Center
>> JETS/Jacobs Technology, Inc. and HX5, LLC
>>
>> Office: Building 16, Room 142
>> Phone: 281-483-1451
>> ---------------------------------------------
>>
>>
>>
>>
>>
>>
>> --
>> ---------------------------------------------
>> Justin Droba, PhD
>> Applied Aeroscience and CFD Branch (EG3)
>> NASA Lyndon B. Johnson Space Center
>> JETS/Jacobs Technology, Inc. and HX5, LLC
>>
>> Office: Building 16, Room 142
>> Phone: 281-483-1451
>> ---------------------------------------------
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
> --
> ---------------------------------------------
> Justin Droba, PhD
> Applied Aeroscience and CFD Branch (EG3)
> NASA Lyndon B. Johnson Space Center
> JETS/Jacobs Technology, Inc. and HX5, LLC
>
> Office: Building 16, Room 142
> Phone: 281-483-1451
> ---------------------------------------------
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener