[petsc-dev] Error with ML preconditioner

Jed Brown jedbrown at mcs.anl.gov
Mon Feb 18 15:44:44 CST 2013


On Mon, Feb 18, 2013 at 3:36 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:

> The logic here is messed up.  Valid input could result in a non-collective
> result.  So this DEBUG code should be a real check here.  I think the logic
> is OK to detect a blocked garray locally … but some processors could be
> proper and others not, as Jed pointed out.
>

We either need to always do the reduction or make the normal code path
always safe.
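
For the former, one reduction makes the decision collective.  A minimal
sketch (hypothetical names: 'blocked' is the result of the local
blocked-garray test and comm is the matrix communicator; this is not the
actual mmaij.c code):

  PetscErrorCode ierr;
  PetscMPIInt    lflg = blocked ? 1 : 0,gflg;

  /* Logical AND over all ranks: use the blocked IS only if every
     process found a blocked garray, so the branch below is uniform. */
  ierr = MPI_Allreduce(&lflg,&gflg,1,MPI_INT,MPI_LAND,comm);CHKERRQ(ierr);
  if (gflg) {
    /* every rank takes the ISCreateBlock() path */
  } else {
    /* every rank falls back to ISCreateGeneral() */
  }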

> It seems to me that we should either (a) explicitly disallow matrices with
> a block size to be built when the blocks are not filled or (b) when the
> user requests blocks, pad blocks so that we can always use an ISBlock.
>
>
> It looks like we have (a) now in a debug build.
>

Well, only indirectly, because this check will fire.  We still have (a) in
optimized builds in the sense that some procs will call ISCreateGeneral
while others call ISCreateBlock, leading to a crash or deadlock.
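
For concreteness, the broken shape is roughly the following (a simplified
sketch, not the actual mmaij.c code; ec is the number of local off-process
columns and garray is the local-to-global column map):

  /* Purely local test -- no communication, so 'blocked' can come out
     PETSC_TRUE on some ranks and PETSC_FALSE on others. */
  PetscBool blocked = (ec % bs) ? PETSC_FALSE : PETSC_TRUE;
  PetscInt  i,j;
  for (i=0; i<ec/bs && blocked; i++) {
    if (garray[i*bs] % bs) blocked = PETSC_FALSE;        /* block-aligned start */
    for (j=1; j<bs && blocked; j++) {
      if (garray[i*bs+j] != garray[i*bs]+j) blocked = PETSC_FALSE; /* consecutive */
    }
  }
  /* Rank-dependent branch: some ranks then create a blocked IS while
     others create a general IS, and everything downstream that assumes
     a uniform choice can crash or hang. */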


> I'm not wild about (b): it exists only so we can use a blocked IS for
> efficiency, and we have never measured whether that is useful.
>

I don't know whether Barry has measurements from when he originally added
the feature:

https://bitbucket.org/petsc/petsc-dev/commits/4442702a5bc776781b96301ae6b1541cf7cf1fc2#chg-src/mat/impls/aij/mpi/mmaij.c


>
> So I will comment out the whole 'useblockis' business and let Garth move
> along.  We could do an allreduce to see if everyone is blocked and then,
> and only then, use the blocked ISs, but that would cost an allreduce.  Not
> sure what would be better, but for now we can just disable this.
>

This is clearly wrong in the release too. I'm inclined to delete the entire
useblockis code path since it seems like it will need to be implemented
completely differently (if it matters for performance).