[mpich-discuss] [mvapich-discuss] Announcing the Release of MVAPICH2 1.8a2 and OSU Micro-Benchmarks (OMB) 3.5.1
Dhabaleswar Panda
panda at cse.ohio-state.edu
Thu Feb 2 23:18:16 CST 2012
These releases might be of interest to some MPICH users, so I am
posting the announcement here.
Thanks,
DK
---------- Forwarded message ----------
Date: Thu, 2 Feb 2012 13:20:40 -0500 (EST)
From: Dhabaleswar Panda <panda at cse.ohio-state.edu>
To: mvapich-discuss at cse.ohio-state.edu
Subject: [mvapich-discuss] Announcing the Release of MVAPICH2 1.8a2 and OSU
Micro-Benchmarks (OMB) 3.5.1
The MVAPICH team is pleased to announce the release of MVAPICH2 1.8a2 and
OSU Micro-Benchmarks (OMB) 3.5.1.
Features, Enhancements, and Bug Fixes for MVAPICH2 1.8a2 are listed
below.
* New Features and Enhancements (since 1.8a1p1):
- Support for collective communication from GPU buffers
- Non-contiguous datatype support in point-to-point and collective
communication from GPU buffers
- Efficient GPU-GPU transfers within a node using CUDA IPC
(for CUDA 4.1)
- Alternate synchronization mechanism using CUDA Events for pipelined
device data transfers
- Exporting each process's local rank within a node through an
environment variable (see the sketch after this list)
- Adjust shared-memory communication block size at runtime
- Enable XRC by default at configure time
- New shared memory design for enhanced intra-node small message
performance
- Tuned inter-node and intra-node performance on different cluster
architectures
- Update to hwloc v1.3.1
- Support for fallback to R3 rendezvous protocol if RGET fails
- SLURM integration with mpiexec.mpirun_rsh to use SLURM allocated
hosts without specifying a hostfile
- Support added to automatically use PBS_NODEFILE in Torque and PBS
environments
- Enable signal-triggered (SIGUSR2) migration
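
The GPU-related items above can be exercised together from application
code. Below is a minimal sketch, not taken from the release, that
assumes a CUDA-enabled build run with MV2_USE_CUDA=1 and assumes the
launcher exports the node-local rank in an environment variable named
MV2_COMM_WORLD_LOCAL_RANK (check the user guide for the exact name).
It selects a GPU per process before MPI_Init and then calls
MPI_Allreduce directly on device buffers:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        /* Assumed variable name: the node-local rank exported by the
         * launcher. Selecting the GPU before MPI_Init keeps the library
         * and the application on the same CUDA device/context. */
        const char *lrank = getenv("MV2_COMM_WORLD_LOCAL_RANK");
        int local_rank = lrank ? atoi(lrank) : 0;

        int ndev = 0;
        cudaGetDeviceCount(&ndev);
        if (ndev > 0)
            cudaSetDevice(local_rank % ndev);

        MPI_Init(&argc, &argv);

        int rank;
        const int n = 1024;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Collectives directly on GPU buffers (new in 1.8a2): pass the
         * device pointers to MPI_Allreduce, no explicit host staging. */
        double *d_send, *d_recv;
        cudaMalloc((void **)&d_send, n * sizeof(double));
        cudaMalloc((void **)&d_recv, n * sizeof(double));
        cudaMemset(d_send, 0, n * sizeof(double));

        MPI_Allreduce(d_send, d_recv, n, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("Allreduce on device buffers completed\n");

        cudaFree(d_send);
        cudaFree(d_recv);
        MPI_Finalize();
        return 0;
    }

Compile with the mpicc wrapper; with the WRAPPER_CPPFLAGS fix listed
below, no extra CPPFLAGS or LDFLAGS should be needed for a --with-cuda
build.
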
* Bug Fixes (since 1.8a1p1):
- Set process affinity independently of SMP enable/disable to control
the affinity in loopback mode
- Report error and exit if user requests MV2_USE_CUDA=1 in non-CUDA
configuration
- Fix for data validation error with GPU buffers
- Updated WRAPPER_CPPFLAGS when using --with-cuda. Users should not
have to explicitly specify CPPFLAGS or LDFLAGS to build applications
- Fix for several compilation warnings
- Report an error message if user requests MV2_USE_XRC=1 in non-XRC
configuration
- Remove debug prints in regular code path with MV2_USE_BLOCKING=1
- Thanks to Vaibhav Dutt for the report
- Handle shared memory collective buffers dynamically to eliminate
the static setting of the maximum CPU core count
- Fix for validation issues in MPICH2 one-sided tests
- Fix a bug in packetized transfers on heterogeneous clusters
- Fix for deadlock between psm_ep_connect and PMGR_COLLECTIVE calls
on QLogic systems
- Thanks to Adam T. Moody (LLNL) for the patch
- Fix a bug in MPI_Alloc_mem when it is called with size 0
(see the sketch after this list)
- Thanks to Michele De Stefano for reporting this issue
- Create vendor for Open64 compilers and add rpath for unknown
compilers
- Thanks to Martin Hilgemen (Dell) for the initial patch
- Fix issue due to overlapping buffers with sprintf
- Thanks to Mark Debbage (QLogic) for reporting this issue
- Fallback to using GNU options for unknown f90 compilers
- Fix hang in PMI_Barrier due to incorrect handling of the socket
return values in mpirun_rsh
- Unify the redundant FTB events used to initiate a migration
- Fix memory leaks when mpirun_rsh reads hostfiles
- Fix a bug where the library attempts to use an inactive rail
in a multi-rail scenario
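
For the MPI_Alloc_mem fix flagged above, a zero-byte request is a legal
corner case; a minimal sketch (my example, not from the release) that
should now return MPI_SUCCESS:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        void *p = NULL;
        /* Size 0 is the corner case covered by the fix; it should no
         * longer fail. The returned pointer may be NULL. */
        int err = MPI_Alloc_mem(0, MPI_INFO_NULL, &p);
        printf("MPI_Alloc_mem(0, ...) returned %s\n",
               err == MPI_SUCCESS ? "MPI_SUCCESS" : "an error");

        /* Free only if a real allocation came back. */
        if (err == MPI_SUCCESS && p != NULL)
            MPI_Free_mem(p);

        MPI_Finalize();
        return 0;
    }
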
Features, Enhancements, and Bug Fixes for OSU Micro-Benchmarks (OMB) 3.5.1
are listed below.
* New Features and Enhancements (since OMB 3.5):
- Provide script to set GPU affinity for MPI processes
* Bug Fixes (since OMB 3.5):
- Removed GPU binding after MPI_Init to avoid switching context
Sample performance numbers for MPI communication from NVIDIA GPU
memory using MVAPICH2 1.8a2 and OMB 3.5.1 can be obtained from the
following URL:
http://mvapich.cse.ohio-state.edu/performance/gpu.shtml
For downloading MVAPICH2 1.8a2, OMB 3.5.1, associated user guide,
quick start guide, and accessing the SVN, please visit the following
URL:
http://mvapich.cse.ohio-state.edu
All questions, feedback, bug reports, hints for performance tuning,
patches, and enhancements are welcome. Please post them to the
mvapich-discuss mailing list (mvapich-discuss at cse.ohio-state.edu).
Thanks,
The MVAPICH Team
_______________________________________________
mvapich-discuss mailing list
mvapich-discuss at cse.ohio-state.edu
http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss