[petsc-users] Some questions regarding VecNest type.

Vijay S. Mahadevan vijay.m at gmail.com
Wed Oct 12 15:25:19 CDT 2011


> You are welcome to write ISNest. I might get to it eventually, but it's not
> a high priority for me right now.

This definitely seems like a nice option to have, and I will start
writing it soon.
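
As a starting point, I would probably mirror the existing
VecCreateNest() calling sequence. Something like the prototype below
(hypothetical only, of course: ISCreateNest() does not exist yet, and
the name and arguments are just my guess):

  /* Hypothetical sketch: ISCreateNest() does not exist yet; the name and
     calling sequence below simply mirror the documented VecCreateNest(). */
  #include <petscis.h>

  PetscErrorCode ISCreateNest(MPI_Comm comm, PetscInt nb, IS subis[], IS *nest);

  /* existing Vec analogue, for comparison:
     VecCreateNest(MPI_Comm comm, PetscInt nb, IS is[], Vec x[], Vec *Y)  */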

> I'm just saying that I think there are rather few problems where deep
> nesting for the Vec is good for performance. Also, if you commit to VecNest,
> there are lots of "cool" features in PETSc that either don't work or are
> much less efficient (anything that calls VecGetArray(), for a start).

I will keep that in mind. I don't want to lose the flexibility or give
up efficiency. Perhaps there is a better design on my end that can get
the best of both worlds.
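
To make my intent concrete, the kind of usage I would like to keep is
roughly the following (just a sketch with made-up sizes; the calls are
the documented VecCreateNest()/VecNestGetSubVec() ones):

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec      sub[2], X, xb;
    PetscInt n = 100;                 /* made-up local block size */

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* two ordinary MPI vectors that become the "blocks" of the nest */
    VecCreateMPI(PETSC_COMM_WORLD, n, PETSC_DECIDE, &sub[0]);
    VecCreateMPI(PETSC_COMM_WORLD, n, PETSC_DECIDE, &sub[1]);

    /* NULL index sets => the blocks are packed contiguously in the global
       ordering; passing strided ISs here instead is what would give the
       interlaced layout you describe below */
    VecCreateNest(PETSC_COMM_WORLD, 2, NULL, sub, &X);

    /* block access stays cheap and hierarchical ... */
    VecNestGetSubVec(X, 0, &xb);
    VecSet(xb, 1.0);

    /* ... while anything that wants VecGetArray() on X itself is exactly
       where the features you mention stop working or get expensive */

    VecDestroy(&sub[0]);
    VecDestroy(&sub[1]);
    VecDestroy(&X);
    PetscFinalize();
    return 0;
  }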

> Sounds like a lot of work and what happens when you change the physics such
> that it's also time to change the preconditioner?

What do you mean? The blocks are formed in memory and handed over to
the algebraic preconditioner (controlled at runtime). Why would the
user have to change the preconditioner manually? This made use of the
block PC object in petsc-ext, which I assume is now replaced by
FieldSplit? I do eventually have to handle this elegantly and will
need some input when I get there.

If you are talking about the compile-time switch, then yes, that was a
little painful. But hopefully pure PETSc will give me some peace!
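
For reference, this is roughly how I expect the same setup to look
with plain PETSc (a sketch only; A, b, x and the index sets is0/is1
for my two blocks are assumed to be built elsewhere, and I am assuming
the PCFieldSplitSetIS() interface that takes a split name):

  #include <petscksp.h>

  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);  /* A assembled elsewhere */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  PCFieldSplitSetIS(pc, "0", is0);   /* index sets defining my physics blocks */
  PCFieldSplitSetIS(pc, "1", is1);
  KSPSetFromOptions(ksp);            /* the rest stays runtime-controlled, e.g.
                                        -pc_fieldsplit_type additive
                                        -fieldsplit_0_pc_type ilu
                                        -fieldsplit_1_pc_type lu           */
  KSPSolve(ksp, b, x);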

On Wed, Oct 12, 2011 at 3:03 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Wed, Oct 12, 2011 at 14:53, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>
>> Yes but petsc-ext did handle the "blocks" hierarchically only, if I
>> remember right. And parallel vectors in petsc were always "flat". Are
>> you suggesting that there is a way to interchangeably use these in the
>> current Nest implementation ?
>
> If I remember correctly, you could only get access to single blocks, not to
> arbitrary subsets of the blocks.
>
>>
>> > different approach is to make an ISNest which would make the matching
>> > problem easier.
>>
>> I like this. And the interface is also consistent in the sense that a
>> Nest type propagates the entire hierarchy of IS/Vec/Mat/(KSP?).
>
> You are welcome to write ISNest. I might get to it eventually, but it's not
> a high priority for me right now.
>
>>
>> > You can do VecNest that way, in which case you can do somewhat more
>> > flexible
>> > ghost updates and such. But VecNest will also let you have the two
>> > "blocks"
>> > be interlaced in the global ordering, but actually stored contiguously.
>> > I
>> > think this is rarely a good way to implement things, but you can do it.
>>
>> I can think of cases where that might be a good thing to have. Of
>> course these are problem/discretization dependent and switching the
>> ordering/nesting at runtime will be a huge advantage to optimize
>> efficiency.
>
> I'm just saying that I think there are rather few problems where deep
> nesting for the Vec is good for performance. Also, if you commit to VecNest,
> there are lots of "cool" features in PETSc that either don't work or are
> much less efficient (anything that calls VecGetArray(), for a start).
>
>>
>> > How did you get a monolithic matrix that you could hit with a direct
>> > solver
>> > when you were using petsc-ext?
>>
>> Ah. I didn't. Once I was sure enough that I was converging to the
>> right solution, I switched to petsc-ext and used only Krylov solves
>> with block diagonal lu/ilu preconditioners. And those worked very
>> nicely for my problems.
>
> Sounds like a lot of work and what happens when you change the physics such
> that it's also time to change the preconditioner?

