[petsc-dev] Bad scaling of GAMG in FieldSplit
Mark Adams
mfadams at lbl.gov
Fri Jul 27 08:02:52 CDT 2018
>
>
> Do you mean the coarse grids? GAMG reduces active processors (and
> repartitions the coarse grids if you ask it to) like telescope.
>
>
> No, I was talking about the fine grid. If I do this (telescope then GAMG),
>
What does Telescope do on the fine grid? Is this a structured grid on which
Telescope does geometric MG?
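
For reference, the two mechanisms being compared look roughly like this on the
command line (illustrative values only, not taken from the attached runs, and
the option prefixes depend on where the solve sits in your fieldsplit tree):

  # GAMG's own reduction of active processes / repartitioning of coarse grids
  -pc_type gamg -pc_gamg_repartition true -pc_gamg_process_eq_limit 500

  # Telescope reducing the communicator first and handing the problem to GAMG
  -pc_type telescope -pc_telescope_reduction_factor 4 -telescope_pc_type gamg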
> MatMultTranspose and MatMultAdd perform OK, but the smoother (MatSOR)
> takes more and more time as one would have guessed… it seems that there is
> no easy way to make this strong scale.
>
>
Strong scaling will always stop at some point. The question is where. I'm
not seeing any data on where it is stopping (I have not dug into the data
files).
My first suspicion here was that GAMG stops coarsening, so that there are many
coarse grids (i.e., crazy behavior). I have seen this occasionally.
Also, did you check the iteration count in GAMG? I'm not sure whether this is a
convergence problem or a complexity problem.
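
For the record, the standard monitoring output answers both questions (again
modulo the fieldsplit/sub prefixes of your solver tree):

  -ksp_converged_reason -ksp_monitor   # iteration counts
  -ksp_view                            # GAMG levels and equations per level

If the hierarchy does look pathological, the coarsening is steered by
-pc_gamg_threshold and the bottom of the hierarchy by -pc_gamg_coarse_eq_limit.
And if it is really the smoother itself that stops scaling (the MatSOR growth
mentioned above), a Chebyshev/point-Jacobi smoother is sometimes easier to
strong scale than SOR:

  -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi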
>> 2) have the sub_0_ and the sub_1_ work on two different nonoverlapping
>> communicators of size PETSC_COMM_WORLD/2, do the solve concurrently, and
>> then sum the solutions (only worth doing because of -pc_composite_type
>> additive). I have no idea if this is easily doable with PETSc command line
>> arguments.
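
For what it's worth, a bare-bones sketch of what 2) could look like in code
rather than from the command line (A, b, and the final summation are
placeholders, and this assumes PetscInitialize() has already been called):

  #include <petscksp.h>

  PetscMPIInt rank, size, color;
  MPI_Comm    subcomm;
  Mat         A;     /* half-operator, assembled on subcomm (not shown) */
  Vec         b, x;  /* matching right-hand side and solution           */
  KSP         ksp;

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);
  /* first half of the ranks handles sub_0_, second half handles sub_1_ */
  color = (rank < size/2) ? 0 : 1;
  MPI_Comm_split(PETSC_COMM_WORLD, color, rank, &subcomm);

  /* ... assemble A and b for this half on subcomm ... */

  KSPCreate(subcomm, &ksp);
  KSPSetOptionsPrefix(ksp, color ? "sub_1_" : "sub_0_");
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);
  MatCreateVecs(A, &x, NULL);
  KSPSolve(ksp, b, x);

  /* the additive combination then needs the two x's scattered/added back
     into a single vector on PETSC_COMM_WORLD (problem-dependent, not shown) */

Whether PCCOMPOSITE can be talked into doing this purely from the command line
is exactly the open question above; the sketch assumes you assemble the two
blocks yourself on the sub-communicators.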
>> >
>> > 1) is the more flexible approach, as you have better control over the
>> > system sizes after 'telescoping'.
>>
>> Right, but the advantage of 2) is that I wouldn't have half or more of the
>> processes idling, and I could overlap the solves of both sub-PCs in the
>> PCCOMPOSITE.
>>
>> I’m attaching the -log_view for both runs (I trimmed some options).
>>
>> Thanks for your help,
>> Pierre
>>
>>
>> > Best regards,
>> > Karli