<div class="gmail_quote">On Wed, Oct 12, 2011 at 14:53, Vijay S. Mahadevan <span dir="ltr"><<a href="mailto:vijay.m@gmail.com">vijay.m@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div id=":yu">Yes but petsc-ext did handle the "blocks" hierarchically only, if I<br>
remember right. And parallel vectors in petsc were always "flat". Are<br>
you suggesting that there is a way to interchangeably use these in the<br>
current Nest implementation ?<br></div></blockquote><div><br></div><div>If I remember correctly, you could only get access to single blocks, not to arbitrary subsets of the blocks.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
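(For concreteness, a minimal sketch of what block access looks like with
the current Nest interface; the sizes and 2x2 layout are made up, and the
error-checking style is today's PetscCall().)

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      blocks[4], A, A01;
    PetscInt i;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* Four square AIJ blocks arranged as a 2x2 nest; a NULL entry in
       the array would stand for a zero block. */
    for (i = 0; i < 4; i++) {
      PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,
                             8, 8, 1, NULL, 1, NULL, &blocks[i]));
      PetscCall(MatAssemblyBegin(blocks[i], MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(blocks[i], MAT_FINAL_ASSEMBLY));
    }
    /* NULL index sets let PETSc choose default row/column layouts. */
    PetscCall(MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &A));
    /* A single block, here (0,1), is available directly; what petsc-ext
       did not give you is an arbitrary subset of blocks as one matrix. */
    PetscCall(MatNestGetSubMat(A, 0, 1, &A01));
    PetscCall(MatDestroy(&A));
    for (i = 0; i < 4; i++) PetscCall(MatDestroy(&blocks[i]));
    PetscCall(PetscFinalize());
    return 0;
  }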
<div id=":yu">
<div class="im"><br>
> different approach is to make an ISNest which would make the matching<br>
> problem easier.<br>
<br>
</div>I like this. And the interface is also consistent in the sense that a<br>
Nest type propagates the entire hierarchy of IS/Vec/Mat/(KSP?).<br></div></blockquote><div><br></div><div>You are welcome to write ISNest. I might get to it eventually, but it's not a high priority for me right now.</div>
>> You can do VecNest that way, in which case you can do somewhat more
>> flexible ghost updates and such. But VecNest will also let you have
>> the two "blocks" be interlaced in the global ordering, but actually
>> stored contiguously. I think this is rarely a good way to implement
>> things, but you can do it.
>
> I can think of cases where that might be a good thing to have. Of
> course, these are problem/discretization dependent, and being able to
> switch the ordering/nesting at runtime would be a huge advantage for
> optimizing efficiency.

I'm just saying that I think there are rather few problems where deep
nesting of the Vec is good for performance. Also, if you commit to
VecNest, there are lots of "cool" features in PETSc that either don't
work or are much less efficient (anything that calls VecGetArray(), for
a start).
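(To make the trade-off concrete, a sketch with made-up sizes: the nest
is assembled from two ordinary MPI vectors, and block access goes
through the Nest-specific calls rather than through a contiguous array.)

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec sub[2], X, u;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* Two ordinary MPI vectors become the blocks of the nest. */
    PetscCall(VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 16, &sub[0]));
    PetscCall(VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 9, &sub[1]));
    PetscCall(VecSet(sub[0], 1.0));
    PetscCall(VecSet(sub[1], 2.0));
    /* NULL index sets give the blocks a default (contiguous) global
       layout; non-NULL index sets are how you would interlace them. */
    PetscCall(VecCreateNest(PETSC_COMM_WORLD, 2, NULL, sub, &X));
    /* Block access works fine ... */
    PetscCall(VecNestGetSubVec(X, 1, &u));
    /* ... but code that assumes contiguous storage, i.e. anything built
       on VecGetArray(X, ...), is exactly what suffers on a nest. */
    PetscCall(VecDestroy(&sub[0]));
    PetscCall(VecDestroy(&sub[1]));
    PetscCall(VecDestroy(&X));
    PetscCall(PetscFinalize());
    return 0;
  }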
>> How did you get a monolithic matrix that you could hit with a direct
>> solver when you were using petsc-ext?
>
> Ah, I didn't. Once I was sure enough that I was converging to the
> right solution, I switched to petsc-ext and used only Krylov solves
> with block-diagonal LU/ILU preconditioners. And those worked very
> nicely for my problems.

That sounds like a lot of work, and what happens when you change the
physics such that it's also time to change the preconditioner?
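For what it's worth, this is where keeping an assembled operator and
using PCFieldSplit pays off: the block preconditioner becomes a runtime
choice. A sketch, assuming the application calls KSPSetFromOptions() and
registers two fields (named 0 and 1 here) with PCFieldSplitSetIS():

  # block-diagonal ILU, one inner solve per field
  -pc_type fieldsplit -pc_fieldsplit_type additive \
    -fieldsplit_0_pc_type ilu -fieldsplit_1_pc_type ilu

  # the physics changed? switch without touching code, e.g. to a
  # Schur-complement factorization
  -pc_type fieldsplit -pc_fieldsplit_type schur \
    -fieldsplit_0_pc_type lu -fieldsplit_1_ksp_type gmres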