[petsc-dev] PETSc and threads

Jed Brown jed at jedbrown.org
Sat Jan 10 00:39:21 CST 2015


Barry Smith <bsmith at mcs.anl.gov> writes:

>    In other words you are saying that libMesh is not MPI scalable?

They have a "ParallelMesh", but it had some serious limitations last I
heard and was rarely used.
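To make the "MPI scalable" criterion concrete, here is a hypothetical back-of-envelope sketch (not libMesh code; the byte count per element is an assumption for illustration): a replicated mesh stores the full mesh on every rank, so per-rank memory is O(N) and independent of the number of ranks, while a distributed mesh stores O(N/P).

```python
def mem_per_rank(n_elems, n_ranks, bytes_per_elem=200, distributed=True):
    """Estimated mesh storage per MPI rank, in bytes.

    bytes_per_elem is an illustrative guess, not a measured figure.
    """
    if distributed:
        # Distributed mesh: each rank holds roughly N/P elements.
        return n_elems * bytes_per_elem // n_ranks
    # Replicated mesh: every rank holds a full copy of the mesh.
    return n_elems * bytes_per_elem

# At 10^9 elements on 10,000 ranks:
print(mem_per_rank(10**9, 10_000, distributed=False) // 10**9, "GB/rank (replicated)")
print(mem_per_rank(10**9, 10_000, distributed=True) // 10**6, "MB/rank (distributed)")
```

The replicated case needs 200 GB on every rank regardless of how many ranks you add, which is why a serial mesh data structure caps the problem size no matter the machine; the distributed case needs only 20 MB per rank and shrinks as ranks are added.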

>    Well then MOOSE could switch to Deal.ii for their mesh and finite
>    element library :-) (Assuming that it is MPI scalable). 

Nope, but it can call p4est, which is scalable except for the coarse mesh.

>    otherwise those damn mesh libraries better become MPI scalable, it's
>    not that freaking hard.

Right.

>    They've asked us what PETSc's plans are and how they can help
>    us. Well, you need to articulate your plan and tell them what they
>    need to do to help you. If they don't like your plan or refuse to
>    help with your plan then they need to state that in writing. Look,
>    PETSc totally ignored the shared-memory craze of the late 90's
>    (when vendors started putting 2 or more CPUs on the same
>    motherboard with a shared memory card) and many other people wasted their
>    time futzing around with OpenMP (David Keyes and Dinesh for
>    example) generating lots of papers but no great performance, maybe
>    this is a repeat of that and maybe we already wasted too much time
>    on the topic this time round with threadcomm etc. Don't worry about
>    what DOE program managers want or NERSC managers want, worry about
>    what is right technically.

Okay, simple.

