[petsc-users] Why use MATMPIBAIJ?
Hoang Giang Bui
hgbk2008 at gmail.com
Fri Jan 22 07:27:38 CST 2016
Do you mean the option pc_fieldsplit_block_size? In this thread:
http://petsc-users.mcs.anl.narkive.com/qSHIOFhh/fieldsplit-error
It assumes you have a constant number of fields at each grid point, am I
right? However, my field layout is not constant, for example
[u1_x u1_y u1_z p_1 u2_x u2_y u2_z u3_x u3_y u3_z
p_3 u4_x u4_y u4_z]
Correspondingly, the field split is
[u1_x u1_y u1_z u2_x u2_y u2_z u3_x u3_y u3_z u4_x
u4_y u4_z]
[p_1 p_3]
Then what is the option to set block size 3 for split 0?
Sorry, I searched several forum threads but could not figure out the options
you mentioned.
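For concreteness, here is a minimal sketch of the setup I have in mind
(is_u and is_p, the index sets for the displacement and pressure dofs, are
assumed to be assembled already; I am not sure whether the extracted
submatrix always inherits the block size from the IS, so a MatSetBlockSize
on the split operator might be needed as well):

#include <petscksp.h>

/* Define the two splits by index sets and give the displacement split a
 * block size of 3, so the preconditioner applied to split 0 can use the
 * block information. */
static PetscErrorCode SetupSplits(KSP ksp, IS is_u, IS is_p)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = ISSetBlockSize(is_u, 3);CHKERRQ(ierr);          /* u-dofs come in (x,y,z) triples */
  ierr = PCFieldSplitSetIS(pc, "u", is_u);CHKERRQ(ierr); /* split 0: all displacement dofs */
  ierr = PCFieldSplitSetIS(pc, "p", is_p);CHKERRQ(ierr); /* split 1: the pressures         */
  return 0;
}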
> You can still do that. It can be done with options once the decomposition
> is working. It's true that these solvers
> work better with the block size set. However, if it's the P2 Laplacian it
> does not really matter since it's uncoupled.
>
Yes, I agree it's uncoupled from the other field, but the crucial factor
determining the quality of the block preconditioner is the approximate
inversion of the individual blocks. I would simply try block Jacobi first,
because it's quite simple. Nevertheless, fieldsplit implements other nice
things, like the Schur complement, etc.
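For example, I imagine the two variants would be selected with options along
these lines (just a sketch; the inner preconditioner choices are placeholders,
and the split names match the "u"/"p" splits in the sketch above).
Block-Jacobi-like, treating the two splits independently:

  -pc_type fieldsplit -pc_fieldsplit_type additive
  -fieldsplit_u_pc_type hypre -fieldsplit_u_pc_hypre_type boomeramg
  -fieldsplit_p_pc_type jacobi

or a Schur-complement factorization instead:

  -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_fact_type full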
Giang
On Fri, Jan 22, 2016 at 11:15 AM, Matthew Knepley <knepley at gmail.com> wrote:
> On Fri, Jan 22, 2016 at 3:40 AM, Hoang Giang Bui <hgbk2008 at gmail.com>
> wrote:
>
>> Hi Matt
>> I would also like to set the block size for the P2 block. Why?
>>
>> Because in one of my tests (for a problem involving only [u_x u_y u_z]),
>> GMRES + Hypre AMG converges in 50 steps with block size 3, whereas it
>> increases to 140 if the block size is 1 (see attached files).
>>
>
> You can still do that. It can be done with options once the decomposition
> is working. It's true that these solvers
> work better with the block size set. However, if it's the P2 Laplacian it
> does not really matter since it's uncoupled.
>
>> This gives me the impression that AMG will give a better inversion of the
>> "P2" block if I can set its block size to 3. Of course it's still a
>> hypothesis, but worth trying.
>>
>> Another question: in one of the PETSc presentations, you said that Hypre
>> AMG does not scale well because the setup cost is amortized over the
>> iterations. How is it quantified, and what is the memory overhead?
>>
>
> I said the Hypre setup cost is not scalable, but it can be amortized over
> the iterations. You can quantify this
> just by looking at the PCSetUp time as you increase the number of
> processes. I don't think they have a good
> model for the memory usage, and if they do, I do not know what it is.
> However, generally Hypre takes more
> memory than agglomeration MG methods like ML or GAMG.
>
> Thanks,
>
> Matt
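For illustration, the PCSetUp cost referred to above can be read off the
event table produced by PETSc's logging, e.g. (a sketch; the executable name
and process counts are placeholders):

  mpiexec -n 8  ./myapp -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg -log_summary
  mpiexec -n 64 ./myapp -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg -log_summary

Comparing the time reported for the PCSetUp event across the two runs shows
how the setup cost grows with the number of processes.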
>
>
>>
>> Giang
>>
>> On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <jed at jedbrown.org> wrote:
>>
>>> Hoang Giang Bui <hgbk2008 at gmail.com> writes:
>>>
>>> > Why is P2/P2 not for a co-located discretization?
>>>
>>> Matt typed "P2/P2" when he meant "P2/P1".
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>