[petsc-dev] What do people want to have working before a petsc-3.2 release?

Jed Brown jed at 59A2.org
Sun Dec 19 17:13:32 CST 2010


1. I think it would be enough to have a way to run the "level 3" dense
matrix kernels on the GPU; that is where it would make the biggest
difference.  Dense does not reuse the sequential kernels, so I guess it
requires some code for both Seq and MPI.  My hope was that, since the dense
API is fairly small and CUBLAS is mature, it would be simple.  Fairly few
people use Dense, but if it's sufficiently easy to support...
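For concreteness, a minimal non-PETSc sketch of the kind of level-3 call
the Dense GPU path would boil down to, using the CUBLAS v2 API (the wrapper
name dense_gemm_gpu, the column-major C = A*B convention, and the omission
of error checking are just for illustration):

#include <cublas_v2.h>
#include <cuda_runtime.h>

/* Column-major dense GEMM on the GPU: C = A*B, A is m x k, B is k x n. */
static void dense_gemm_gpu(int m, int n, int k,
                           const double *A, const double *B, double *C)
{
  cublasHandle_t handle;
  double        *dA, *dB, *dC;
  const double   one = 1.0, zero = 0.0;

  cublasCreate(&handle);
  cudaMalloc((void **)&dA, sizeof(double) * m * k);
  cudaMalloc((void **)&dB, sizeof(double) * k * n);
  cudaMalloc((void **)&dC, sizeof(double) * m * n);

  /* host -> device copies of the dense arrays */
  cublasSetMatrix(m, k, sizeof(double), A, m, dA, m);
  cublasSetMatrix(k, n, sizeof(double), B, k, dB, k);

  /* the actual level-3 kernel */
  cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
              &one, dA, m, dB, k, &zero, dC, m);

  /* device -> host copy of the result */
  cublasGetMatrix(m, n, sizeof(double), dC, m, C, m);

  cudaFree(dA); cudaFree(dB); cudaFree(dC);
  cublasDestroy(handle);
}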

2.  Is it as easy as PCSetDM?  Or do we provide a coarse DM and get a
hierarchy back?  Is there an example?  I agree about having FieldSplit
forward the pieces; I recall starting on that.  Who is responsible for
assembling rediscretized coarse operators?
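As a straw man for "is it as easy as PCSetDM", here is a sketch of what the
user code could look like if attaching a DM to the KSP (and having it
forwarded to the PC) were enough for PCMG to build the hierarchy.  The
names follow current PETSc style (KSPSetDM, KSPSetComputeOperators, PCMG);
ComputeOperators/ComputeRHS are hypothetical user callbacks that
(re)discretize on whatever level the solver hands them, and the options in
the comments are illustrative:

#include <petscksp.h>

/* hypothetical user callbacks that (re)discretize on a given level's DM */
extern PetscErrorCode ComputeOperators(KSP, Mat, Mat, void *);
extern PetscErrorCode ComputeRHS(KSP, Vec, void *);

PetscErrorCode SolveWithDMHierarchy(DM fine)
{
  KSP ksp;
  PC  pc;

  KSPCreate(PetscObjectComm((PetscObject)fine), &ksp);
  KSPSetDM(ksp, fine);                      /* DM gets forwarded to the PC */
  KSPSetComputeOperators(ksp, ComputeOperators, NULL);
  KSPSetComputeRHS(ksp, ComputeRHS, NULL);

  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCMG);                      /* or -pc_type mg */
  /* levels, smoothers, Galerkin vs. rediscretization would come from
     options, e.g. -pc_mg_levels 4 */

  KSPSetFromOptions(ksp);
  KSPSolve(ksp, NULL, NULL);                /* solution lives in the KSP;
                                               KSPGetSolution() to read it */
  KSPDestroy(&ksp);
  return 0;
}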

2b.  Can we decide on an interface for plumbing extra information into the
Schur splits?  Maybe composing the matrices as in PCLSC is the way to go,
and it just needs to be wrapped in a decent interface.  Or maybe there is a
better way.
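For reference, the PCLSC-style plumbing is just PetscObjectCompose /
PetscObjectQuery with an agreed-upon key; a minimal sketch (the "LSC_L"
key is the one PCLSC uses, the two helper functions are made up):

#include <petscksp.h>

/* producer side: hang an auxiliary operator off the Schur complement */
static PetscErrorCode AttachAuxiliaryOperator(Mat S, Mat L)
{
  PetscObjectCompose((PetscObject)S, "LSC_L", (PetscObject)L);
  return 0;
}

/* consumer side (inside the PC): look it up by the same key */
static PetscErrorCode QueryAuxiliaryOperator(Mat S, Mat *L)
{
  PetscObjectQuery((PetscObject)S, "LSC_L", (PetscObject *)L);
  return 0;
}

A decent interface would presumably just wrap these string keys in a proper
setter, but the mechanism underneath could stay the same.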

Jed

On Dec 19, 2010 2:54 PM, "Barry Smith" <bsmith at mcs.anl.gov> wrote:


On Dec 19, 2010, at 4:47 PM, Jed Brown wrote:

> Decent preallocation for DMComposite.
>
> CUDA fo...
 What does this mean? Dense sequential matrices? Parallel?


>
> Move linear DMMG into PCMG. (I don't have a sense for how long this will
> take.)
 Except for "grid sequencing" (not sure if I care about grid sequencing for
linear problems) this is pretty much done. But we should still do the
fieldsplit part of DMMG for linear problems into PCFIELDSPLIT
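A rough sketch of where that would land: splits defined on the PC by index
sets (PCFieldSplitSetIS is existing API; the "u"/"p" names and having the
DM/DMComposite build the index sets automatically are the part that still
needs to be wired up):

#include <petscksp.h>

/* splits by index set; "u"/"p" are illustrative names */
PetscErrorCode SetUpFieldSplit(PC pc, IS isu, IS isp)
{
  PCSetType(pc, PCFIELDSPLIT);
  PCFieldSplitSetIS(pc, "u", isu);
  PCFieldSplitSetIS(pc, "p", isp);
  /* split type (additive, multiplicative, schur) from options,
     e.g. -pc_fieldsplit_type schur */
  return 0;
}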


>
> Grid sequencing in SNES, this might be too much for 3.2.
  I think this is too much.

 Barry

>
> Jed
>
>
>> On Dec 19, 2010 10:05 AM, "Barry Smith" <bsmith at mcs.anl.gov> wrote:
>>
>>
>> W...