[petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes
Jeff Hammond
jeff.science at gmail.com
Thu Jul 7 19:05:00 CDT 2016
On Thu, Jul 7, 2016 at 1:04 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Fri, Jul 1, 2016 at 4:32 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>>
>> The DOE SciDAC institutes have supported PETSc linear solver
>> research/code development for the past fifteen years.
>>
>> This email is to solicit ideas for linear solver research/code
>> development work for the next round of SciDAC institutes (which will be a 4
>> year period) in PETSc. Please send me any ideas, no matter how crazy, on
>> things you feel are missing, broken, or incomplete in PETSc with regard to
>> linear solvers that we should propose to work on. In particular, issues
>> coming from specific classes of applications would be good. Generic
>> "multiphysics" coupling types of things are too general (and old :-)),
>> while work at extreme scale is also out since that is covered under
>> another call (ECP). But particular types of optimizations, etc., for existing
>> or new codes could be in, just not at the very large scale.
>>
>> Rough ideas and pointers to publications are all useful. There is an
>> extremely short fuse, so the sooner the better.
>>
>
> I think the suggestions so far are fine; however, they all seem to start at
> the "how", whereas I would prefer we start at the "why". Maybe something
> like
>
> 1) How do we run at bandwidth peak on new architectures like Cori or
> Aurora?
>
> Patrick and Rich have good suggestions here. Karl and Rich showed some
> promising numbers for KNL at the PETSc meeting.
>
>
Future systems from multiple vendors are essentially moving from a 2-tier
memory hierarchy of shared LLC and DRAM to a 3-tier hierarchy of fast memory
(e.g. HBM), regular memory (e.g. DRAM), and slow (likely nonvolatile) memory
on a node. Xeon Phi and some GPUs have caches, but it is unclear to me
whether it actually benefits software like PETSc to take them into account.
Figuring out how to run PETSc effectively on KNL should be generally
useful...
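
As one concrete (if naive) illustration of what "using the fast tier" means
at the code level, here is a minimal sketch using the memkind hbwmalloc
interface (hbw_check_available, hbw_malloc, hbw_free) to put a
bandwidth-critical array in MCDRAM when it is present and fall back to DRAM
otherwise. This is not a PETSc API, just the kind of explicit placement one
could experiment with:

/* Sketch only: place a bandwidth-critical array in high-bandwidth memory
   (MCDRAM) when available, falling back to DRAM otherwise.
   Uses the memkind hbwmalloc interface; compile/link with -lmemkind. */
#include <hbwmalloc.h>
#include <stdio.h>
#include <stdlib.h>

static double *alloc_fast(size_t n)
{
  /* hbw_check_available() returns 0 when high-bandwidth memory exists */
  if (hbw_check_available() == 0)
    return (double *) hbw_malloc(n * sizeof(double));
  return (double *) malloc(n * sizeof(double));   /* DRAM fallback */
}

static void free_fast(double *p)
{
  /* pairing is consistent because hbw_check_available() is deterministic */
  if (hbw_check_available() == 0) hbw_free(p);
  else                            free(p);
}

int main(void)
{
  size_t  n = 1 << 24;                 /* ~128 MB of doubles */
  double *x = alloc_fast(n);
  if (!x) { fprintf(stderr, "allocation failed\n"); return 1; }
  for (size_t i = 0; i < n; i++) x[i] = 1.0;   /* stream through the array */
  free_fast(x);
  return 0;
}

On KNL in flat mode the same effect can often be had with no code changes at
all by binding the job to the MCDRAM NUMA node (e.g. numactl --membind=1),
provided the working set fits in the 16 GB of MCDRAM.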
Jeff
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/