[mpich-discuss] MVAPICH2 1.5RC1 release is available with Nemesis-InfiniBand Netmod interface

PRATIK AGRAWAL pratik_9314 at yahoo.co.in
Fri May 7 05:28:53 CDT 2010


During installation I got an error related to cxx; after disabling it, the installation proceeded successfully. Can you please guide me on how to configure cxx manually?
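
Is something along these lines the right way to point configure at a C++ compiler and keep the C++ bindings enabled? This is just a guess on my side, assuming g++ is installed; the install prefix is only an example:

    # guess: pass the C++ compiler explicitly and keep the C++ bindings on
    ./configure CXX=g++ --enable-cxx --prefix=/opt/mvapich2-1.5rc1
    make
    make install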


Pratik Agrawal
Contact: +91-94287-63145
Twitter: http://www/twitter.com/praxvoper
http://www.pratikagrawal.x10.mx



--- On Wed, 5/5/10, Dhabaleswar Panda <panda at cse.ohio-state.edu> wrote:


From: Dhabaleswar Panda <panda at cse.ohio-state.edu>
Subject: [mpich-discuss] MVAPICH2 1.5RC1 release is available with Nemesis-InfiniBand Netmod interface
To: mpich-discuss at mcs.anl.gov
Date: Wednesday, 5 May, 2010, 11:05 AM


The MVAPICH team has announced the release of MVAPICH2 1.5RC1 today. It
adds a new Nemesis-InfiniBand Netmod interface, which supports the
OpenFabrics InfiniBand Gen2 layer. This interface is in addition to the
various existing CH3-based interfaces for InfiniBand, iWARP, RoCE, uDAPL
and PSM. This release also provides flexible support for using either the
Hydra or mpirun_rsh process manager with any of the CH3-based or
Nemesis-based interfaces.
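
As an illustration, a build targeting the new interface is expected to follow the usual MPICH2 device-selection convention, roughly as sketched below; the exact device string and options should be confirmed against the MVAPICH2 1.5 user guide:

    # sketch: configure with the Nemesis channel and the InfiniBand netmod
    ./configure --with-device=ch3:nemesis:ib --prefix=/opt/mvapich2-1.5rc1
    make
    make install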

This release is based on MPICH2 1.2.1p1.

The detailed release announcement is included below.

We hope this new Nemesis-InfiniBand netmod design will help MPICH2 users
harness maximum performance from their InfiniBand clusters.

We look forward to feedback from MPICH2 users on their experiences with
this new Nemesis-InfiniBand Netmod interface.

Thanks,

DK Panda
(On Behalf of the MVAPICH Team)

=========================

The MVAPICH team is pleased to announce the release of MVAPICH2 1.5RC1
with the following NEW features:

- MPI 2.2 standard compliant
- Based on MPICH2 1.2.1p1
- OFA-IB-Nemesis interface design
    - OpenFabrics InfiniBand network module support for
      MPICH2 Nemesis modular design
    - Support for high-performance intra-node shared memory
      communication provided by the Nemesis design
    - Adaptive RDMA Fastpath with Polling Set for high-performance
      inter-node communication
    - Shared Receive Queue (SRQ) support with flow control;
      uses significantly less memory for the MPI library
    - Header caching
    - Advanced AVL tree-based Resource-aware registration cache
    - Memory Hook Support provided by integration with the ptmalloc2
      library. This provides safe release of memory to the
      Operating System and is expected to benefit the memory
      usage of applications that heavily use malloc and free operations.
    - Support for TotalView debugger
    - Shared Library Support for existing binary MPI application
      programs to run
    - ROMIO Support for MPI-IO
    - Support for additional features (such as hwloc,
      hierarchical collectives, one-sided, multithreading, etc.),
      as included in the MPICH2 1.2.1p1 Nemesis channel
- Flexible process manager support
    - mpirun_rsh to work with any of the eight interfaces
      (CH3 and Nemesis channel-based) including OFA-IB-Nemesis,
      TCP/IP-CH3 and TCP/IP-Nemesis
    - Hydra process manager to work with any of the eight interfaces
      (CH3 and Nemesis channel-based) including OFA-IB-CH3,
      OFA-iWARP-CH3, OFA-RoCE-CH3 and TCP/IP-CH3
- MPIEXEC_TIMEOUT is honored by mpirun_rsh
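
As a sketch, launching a job with either process manager might look like the following; the host file name, process count and timeout value are illustrative only:

    # mpirun_rsh (MPIEXEC_TIMEOUT, in seconds, is now honored)
    MPIEXEC_TIMEOUT=300 mpirun_rsh -np 4 -hostfile hosts ./a.out

    # Hydra process manager
    mpiexec -f hosts -np 4 ./a.out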

This release also contains multiple bug fixes since MVAPICH2-1.4.1. A
summary of the major fixes is as follows:

- Fix compilation error when configured with
  `--enable-thread-funneled'
- Fix MPE functionality, thanks to Anthony Chan <chan at mcs.anl.gov> for
  reporting and providing the resolving patch
- Cleanup after a failure in the init phase is handled better by
  mpirun_rsh
- Path determination is correctly handled by mpirun_rsh when DPM is
  used
- Shared libraries are correctly built (again)

For downloading MVAPICH2 1.5RC1, associated user guide and accessing
the SVN, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All feedback, including bug reports, hints for performance tuning,
patches and enhancements, is welcome. Please post it to the
mvapich-discuss mailing list.

We are also happy to report that the number of organizations using
MVAPICH/MVAPICH2 (and registered at the MVAPICH site) has crossed
1,100 worldwide (in 58 countries). The MVAPICH team extends its
thanks to all of these organizations.

Thanks,

The MVAPICH Team


_______________________________________________
mpich-discuss mailing list
mpich-discuss at mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss

