[mpich-discuss] Announcing the Release of MVAPICH2 1.7a2

Dhabaleswar Panda panda at cse.ohio-state.edu
Fri Jun 3 22:38:33 CDT 2011


This release might be of interest to some MPICH2 users, so I am
posting it here.

Thanks,

DK

================================================

The MVAPICH team is pleased to announce the release of
MVAPICH2-1.7-alpha2 with the following NEW features and
enhancements:

* NEW Features and Enhancements (since MVAPICH2-1.6)

    - Based on MPICH2-1.3.2p1
    - Improved intra-node shared memory communication performance
    - Tuned RDMA Fast Path buffer size for better performance
      with a smaller memory footprint (CH3 and Nemesis)
    - Fast process migration using RDMA
    - Support for large data transfers (>2GB); see the sketch
      after this list
    - Integrated with enhanced LiMIC2 (v0.5.5) to support intra-node
      large message (>2GB) transfers
    - Automatic inter-node communication parameter tuning
      based on platform and adapter detection (Nemesis)
    - Automatic intra-node communication parameter tuning
      based on platform
    - Efficient connection set-up for multi-core systems
    - Enhancements for collectives
      (barrier, gather, allgather and alltoall)
    - Compact shorthand for specifying blocks of processes
      on the same host with mpirun_rsh
    - Support for the latest stable version of HWLOC (v1.2)
    - Improved debug message output in process management and
      fault tolerance functionality
    - Better handling of process signals and error
      management in mpispawn
    - Enhanced debugging config options to generate
      core files and back-traces
    - Performance tuning for pt-to-pt and several collective
      operations
    - Support for Chelsio T4 Adapter
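
As a rough illustration of the large-transfer item above (this sketch
is not part of the release notes, and the 1 MiB block size and 3 GiB
payload are arbitrary choices for the example), an application can
express a transfer of more than 2 GB in a single point-to-point
operation by wrapping the data in a derived datatype so that the int
count argument stays small:

    /* Minimal sketch: sending > 2 GB between two ranks.  Requires at
     * least two ranks and roughly 3 GiB of memory per rank. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t block   = 1 << 20;        /* 1 MiB per block     */
        const size_t nblocks = 3072;           /* 3 GiB total payload */
        char *buf = malloc(block * nblocks);

        /* Describe one 1 MiB block as a datatype, then transfer
         * "nblocks" of them; the int count never exceeds 3072. */
        MPI_Datatype chunk;
        MPI_Type_contiguous((int)block, MPI_CHAR, &chunk);
        MPI_Type_commit(&chunk);

        if (rank == 0)
            MPI_Send(buf, (int)nblocks, chunk, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, (int)nblocks, chunk, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Type_free(&chunk);
        free(buf);
        MPI_Finalize();
        return 0;
    }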

* Bug Fixes (since MVAPICH2-1.6)

    - Fixes for memory leaks
    - Fixes in CR/migration
    - Better handling of memory allocation and registration failures
    - Fixes for compilation warnings
    - Fix for a bug that disallowed '=' in mpirun_rsh arguments
    - Handling of non-contiguous transfers in the Nemesis interface
      (see the sketch after this list)
    - Bug fix in the gather collective when ranks are in cyclic order
    - Fix for the ignore_locks bug in MPI-IO with Lustre
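
The non-contiguous transfer mentioned above refers to messages
described by strided or otherwise non-contiguous datatypes, which the
library has to pack and unpack internally. A minimal sketch of such a
transfer (not from the release notes; the array size and stride are
arbitrary) sends every other element of an array using
MPI_Type_vector:

    /* Minimal sketch: a strided (non-contiguous) transfer between
     * two ranks. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        enum { N = 1024 };
        double data[2 * N];
        for (int i = 0; i < 2 * N; i++)
            data[i] = (double)i;

        /* N blocks of 1 double each, separated by a stride of 2
         * doubles, i.e. elements 0, 2, 4, ... of "data". */
        MPI_Datatype strided;
        MPI_Type_vector(N, 1, 2, MPI_DOUBLE, &strided);
        MPI_Type_commit(&strided);

        if (rank == 0)
            MPI_Send(data, 1, strided, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(data, 1, strided, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Type_free(&strided);
        MPI_Finalize();
        return 0;
    }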

To download MVAPICH2-1.7, read the associated user guide, or access
the SVN repository, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning,
patches and enhancements are welcome. Please post them to the
mvapich-discuss mailing list (mvapich-discuss at cse.ohio-state.edu).

Thanks,

The MVAPICH Team



