[mpich-discuss] MVAPICH2 1.5RC2 release is available with Nemesis-InfiniBand Netmod interface
Dhabaleswar Panda
panda at cse.ohio-state.edu
Mon Jun 21 23:10:19 CDT 2010
As a follow-up to the MVAPICH2 1.5RC1 release with the Nemesis-InfiniBand
Netmod interface, we have announced the release of MVAPICH2 1.5RC2 today.
This release contains multiple enhancements and bug fixes. The detailed
release announcement is included below.
Looking forward to feedback from MPICH2 users on their experiences with
this latest version.
Thanks,
DK Panda
(On Behalf of the MVAPICH Team)
==================================
The MVAPICH team is pleased to announce the release of MVAPICH2 1.5RC2
with the following NEW features and bug-fixes:
MVAPICH2-1.5-RC2 (06/21/10)
* Features and Enhancements (since 1.5-RC1)
- Support for hwloc library (1.0.1) for defining CPU affinity
- Deprecated the older PLPA support for defining CPU affinity
in favor of hwloc
- Efficient CPU binding policies (bunch and scatter) to
specify CPU binding per job for modern multi-core platforms
- New flag in mpirun_rsh to execute tasks with different group IDs
- Enhancement to the design of Win_complete for RMA operations
- Flexibility to support variable number of RMA windows
- Support for Intel iWARP NE020 adapter
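The bunch and scatter binding policies above are selected at launch
time. A minimal sketch, assuming the MV2_* runtime parameters described
in the MVAPICH2 user guide (the hostfile "hosts" and the binary
"./a.out" are placeholders):

```shell
# Pack ranks onto adjacent cores ("bunch"); hostfile and binary are placeholders
MV2_ENABLE_AFFINITY=1 MV2_CPU_BINDING_POLICY=bunch \
    mpirun_rsh -np 8 -hostfile hosts ./a.out

# Spread ranks across sockets ("scatter")
MV2_ENABLE_AFFINITY=1 MV2_CPU_BINDING_POLICY=scatter \
    mpirun_rsh -np 8 -hostfile hosts ./a.out
```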
* Bug fixes (since 1.5-RC1)
- Fix a compilation issue with the ROMIO adio-lustre driver. Thanks
to Adam Moody of LLNL for reporting the issue
- Allow checkpoint-restart for large-scale systems
- Correct a bug in the clear_kvc function. Thanks to T J (Chris) Ward,
IBM Research, for reporting and providing the resolving patch
- Fix shared lock operations with RMA with scatter process distribution.
Thanks to Pavan Balaji of Argonne for reporting this issue
- Fix a bug during window creation in uDAPL
- Fix a compilation issue with --enable-alloca. Thanks to E. Borisch
for reporting and providing the patch
- Improved error message for ibv_poll_cq failures
- Fix an issue that prevented mpirun_rsh from executing programs found
via PATH when no explicit path was specified
- Fix an issue of mpirun_rsh with Dynamic Process Migration (DPM)
- Fix for memory leaks (both CH3 and Nemesis interfaces)
- The updatefiles script now correctly updates LiMIC2
- Several fixes to the registration cache
(CH3, Nemesis and uDAPL interfaces)
- Fix to multi-rail communication
- Fix to Shared Memory communication Progress Engine
- Fix to all-to-all collective for large number of processes
To download MVAPICH2 1.5RC2 and the associated user guide, and to
access the SVN repository, please visit the following URL:
http://mvapich.cse.ohio-state.edu
All feedback, including bug reports, hints for performance tuning,
patches and enhancements, is welcome. Please post it to the
mvapich-discuss mailing list.
Thanks,
The MVAPICH Team
==============
On Wed, 5 May 2010, Dhabaleswar Panda wrote:
> The MVAPICH team has announced the release of MVAPICH2 1.5RC1 today. It
> supports a new Nemesis-InfiniBand Netmod interface based on the
> OpenFabrics InfiniBand Gen2 layer. This interface is in addition to the
> various existing CH3-based interfaces for InfiniBand, iWARP, RoCE, uDAPL
> and PSM. This release also has flexible support for using either the Hydra
> or the mpirun_rsh process manager with any of the CH3-based or
> Nemesis-based interfaces.
>
> This release is based on MPICH2 1.2.1p1.
>
> The detailed release announcement is included below.
>
> Hope this new Nemesis-InfiniBand netmod design will help MPICH2 users to
> harness maximum performance from their InfiniBand clusters.
>
> Looking forward to feedback from MPICH2 users on their experiences in
> using this new Nemesis-InfiniBand Netmod interface.
>
> Thanks,
>
> DK Panda
> (On Behalf of the MVAPICH Team)
>
> =========================
>
> The MVAPICH team is pleased to announce the release of MVAPICH2 1.5RC1
> with the following NEW features:
>
> - MPI 2.2 standard compliant
> - Based on MPICH2 1.2.1p1
> - OFA-IB-Nemesis interface design
> - OpenFabrics InfiniBand network module support for
> MPICH2 Nemesis modular design
> - Support for high-performance intra-node shared memory
> communication provided by the Nemesis design
> - Adaptive RDMA Fastpath with Polling Set for high-performance
> inter-node communication
> - Shared Receive Queue (SRQ) support with flow control; uses
> significantly less memory for the MPI library
> - Header caching
> - Advanced AVL tree-based Resource-aware registration cache
> - Memory Hook Support provided by integration with ptmalloc2
> library. This provides safe release of memory to the
> Operating System and is expected to benefit the memory
> usage of applications that heavily use malloc and free operations.
> - Support for TotalView debugger
> - Shared Library Support for existing binary MPI application
> programs to run
> - ROMIO Support for MPI-IO
> - Support for additional features (such as hwloc,
> hierarchical collectives, one-sided, multithreading, etc.),
> as included in the MPICH2 1.2.1p1 Nemesis channel
> - Flexible process manager support
> - mpirun_rsh to work with any of the eight interfaces
> (CH3 and Nemesis channel-based) including OFA-IB-Nemesis,
> TCP/IP-CH3 and TCP/IP-Nemesis
> - Hydra process manager to work with any of the eight interfaces
> (CH3 and Nemesis channel-based) including OFA-IB-CH3,
> OFA-iWARP-CH3, OFA-RoCE-CH3 and TCP/IP-CH3
> - MPIEXEC_TIMEOUT is honored by mpirun_rsh
>
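> The MPIEXEC_TIMEOUT support in mpirun_rsh can be sketched as follows
> (a hedged example: the 300-second limit, the hostfile "hosts" and the
> binary "./a.out" are placeholders; the timeout is given in seconds):
>
> ```shell
> # mpirun_rsh terminates the job if it runs longer than
> # MPIEXEC_TIMEOUT seconds
> export MPIEXEC_TIMEOUT=300
> mpirun_rsh -np 4 -hostfile hosts ./a.out
> ```
>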
> This release also contains multiple bug fixes since MVAPICH2-1.4.1. A
> summary of the major fixes is as follows:
>
> - Fix compilation error when configured with
> `--enable-thread-funneled'
> - Fix MPE functionality, thanks to Anthony Chan <chan at mcs.anl.gov> for
> reporting and providing the resolving patch
> - Cleanup after a failure in the init phase is handled better by
> mpirun_rsh
> - Path determination is correctly handled by mpirun_rsh when DPM is
> used
> - Shared libraries are correctly built (again)
>
> To download MVAPICH2 1.5RC1 and the associated user guide, and to
> access the SVN repository, please visit the following URL:
>
> http://mvapich.cse.ohio-state.edu
>
> All feedback, including bug reports, hints for performance tuning,
> patches and enhancements, is welcome. Please post it to the
> mvapich-discuss mailing list.
>
> We are also happy to report that the number of organizations using
> MVAPICH/MVAPICH2 (and registered at the MVAPICH site) has crossed
> 1,100 worldwide (in 58 countries). The MVAPICH team extends its
> thanks to all these organizations.
>
> Thanks,
>
> The MVAPICH Team
>
>