[petsc-dev] OpenMPI is a menace

Bryce Lelbach blelbach at cct.lsu.edu
Sun Nov 4 23:21:15 CST 2012


On 2012.11.04 22.48, Matthew Knepley wrote:
> On Sun, Nov 4, 2012 at 10:38 PM, "C. Bergström" <cbergstrom at pathscale.com> wrote:
> 
> > On 11/ 5/12 10:13 AM, Matthew Knepley wrote:
> >
> >> I am tired of wasting time on their stupid bugs which we report but are
> >> never fixed. Is there a way to retaliate without grenades?
> >>
> > Switch over to HPX?
> > http://stellar.cct.lsu.edu/tag/hpx/
> >
> > I'm not sure it's at a point where it can handle petsc, but keep it in
> > mind for 2013.  I'm cc'ing one of the devs who is probably not subscribed,
> > but can answer questions.  (bandwidth permitting and maybe delayed until
> > @SC12 or after)
> > -----------
> > Option 2 - We (PathScale) have been considering taking on shipping a
> > supported version of OpenMPI for some time.  If anyone would be interested
> > in add-on support or just paying a bounty to fix bugs, we may be able to
> > work it out.  (Not the perfect open source answer, but maybe it's better
> > (cheaper?) than grenades.)
> >
> > Which bugs are you specifically interested in getting fixed?
> >
> 
> When they install to a non-standard location, they do not add RPATH support
> for the library link in their compiler wrappers.
> 
>    Matt

Hi. Just wanted to note that HPX handles this (adding RPATHs for non-standard install locations), among other things.

HPX is drastically different from MPI. A point-by-point comparison (a short code sketch follows the list):

HPX: Intra-node (threading) and inter-node (distributed); provides extremely fine-grained threading (millions of short-lived threads per node).
MPI: Only does distributed.

HPX: Sends work to data.
MPI: Sends data to work.

HPX: Provides an active global address space to abstract local memory boundaries across nodes.
MPI: Forces user code to explicitly perform remote communication.

HPX: Hides latencies by overlapping them with useful computation.
MPI: The only option for dealing with latencies is latency avoidance.

HPX: Utilizes local synchronization and zealously avoids explicit global barriers, allowing computation to proceed as far as possible without communicating/synchronizing.
MPI: Strongly emphasizes global barriers.

HPX: Supports the transmission of POD data, polymorphic types, functions, and higher-order functions (e.g. functions with bound arguments).
MPI: Only does POD data.

HPX: Diverse set of runtime services (built-in, intrinsic instrumentation, error-handling facilities, logging, runtime configuration, loadable modules).
MPI: None of the above. 

HPX: Supports dynamic, heuristic load balancing (at both the intra-node and inter-node levels).
MPI: Limited built-in support for static load balancing.
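
To make the comparison concrete, here is a rough, untested sketch of the
programming style described above. Header names and exact API spellings are
taken from the HPX documentation and may differ between HPX versions, so
treat it as an illustration rather than a verbatim example: fine-grained
local tasks synchronized through futures, and the same function wrapped as
a plain action and shipped to another locality ("work to data").

#include <hpx/hpx_init.hpp>
#include <hpx/include/actions.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <cstdint>
#include <iostream>
#include <vector>

// Fine-grained local tasks: each call may spawn another lightweight HPX
// thread; a result is waited on only at the point where it is needed.
std::uint64_t fibonacci(std::uint64_t n)
{
    if (n < 2)
        return n;

    hpx::future<std::uint64_t> lhs = hpx::async(fibonacci, n - 1);
    std::uint64_t rhs = fibonacci(n - 2);

    return lhs.get() + rhs;   // local synchronization, no global barrier
}

// Wrapping the function as a plain action lets it be invoked on any
// locality known to AGAS, i.e. the work is sent to where the data lives.
HPX_PLAIN_ACTION(fibonacci, fibonacci_action);

int hpx_main(int, char**)
{
    // Run locally, using many short-lived HPX threads.
    std::cout << "local:  " << fibonacci(20) << std::endl;

    // Run the same computation on another locality (possibly a remote
    // node); the future hides the latency until .get() is called.
    std::vector<hpx::id_type> localities = hpx::find_all_localities();
    hpx::future<std::uint64_t> f =
        hpx::async(fibonacci_action(), localities.back(), std::uint64_t(20));
    std::cout << "remote: " << f.get() << std::endl;

    return hpx::finalize();   // shut the runtime down
}

int main(int argc, char* argv[])
{
    // Start the HPX runtime; it invokes hpx_main on an HPX thread.
    return hpx::init(argc, argv);
}

The only synchronization points are the individual .get() calls; nothing in
the program waits at a global barrier.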

HPX is a general-purpose C++11 runtime system for parallel and distributed computing
at any scale (from quad-core tablets to exascale systems). It is freely available
under a BSD-style license and developed by a growing community of international
collaborators. It is an integral part of the US DoE's exascale software stack, and is
supported heavily by the US NSF.

See stellar.cct.lsu.edu for benchmarks, papers, and links to the GitHub repository.

-- 
Bryce Adelstein-Lelbach aka wash
STE||AR Group, Center for Computation and Technology, LSU
--
860-808-7497 - Cell
225-578-6182 - Work (no voicemail)
--
stellar.cct.lsu.edu
boost-spirit.com
llvm.linuxfoundation.org
--