During installation I got an error for cxx; after disabling it, the installation proceeded successfully. Can you please guide me on how to configure cxx manually?
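(For reference, a minimal sketch of configuring the C++ bindings by hand in a configure-based MVAPICH2/MPICH2 build. The install prefix and compiler names below are assumptions, not taken from this thread; a cxx error at configure time usually means the C++ compiler was missing or broken, so pointing configure at a working one is the first thing to try.)

    # Sketch only: re-run configure with an explicit, working C++ compiler
    # instead of passing --disable-cxx. Prefix and compilers are placeholders.
    ./configure --prefix=/opt/mvapich2 \
        CC=gcc CXX=g++ \
        --enable-cxx
    make && make install

    # If configure still fails on cxx, config.log records the failing test.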
Pratik Agrawal
Contact: +91-94287-63145
Twitter: http://www.twitter.com/praxvoper
http://www.pratikagrawal.x10.mx
--- On Wed, 5/5/10, Dhabaleswar Panda <panda@cse.ohio-state.edu> wrote:

From: Dhabaleswar Panda <panda@cse.ohio-state.edu>
Subject: [mpich-discuss] MVAPICH2 1.5RC1 release is available with Nemesis-InfiniBand Netmod interface
To: mpich-discuss@mcs.anl.gov
Date: Wednesday, 5 May, 2010, 11:05 AM
The MVAPICH team has announced the release of MVAPICH2 1.5RC1 today. It supports a new Nemesis-InfiniBand Netmod interface, which uses the OpenFabrics InfiniBand Gen2 layer. This interface is in addition to the various existing CH3-based interfaces for InfiniBand, iWARP, RoCE, uDAPL and PSM. This release also has flexible support for using the Hydra or mpirun_rsh process manager with any of the CH3-based or Nemesis-based interfaces.

This release is based on MPICH2 1.2.1p1.

The detailed release announcement is included below.

We hope this new Nemesis-InfiniBand netmod design will help MPICH2 users harness maximum performance from their InfiniBand clusters.

We look forward to feedback from MPICH2 users on their experiences with this new Nemesis-InfiniBand Netmod interface.

Thanks,

DK Panda
(On Behalf of the MVAPICH Team)

=========================

The MVAPICH team is pleased to announce the release of MVAPICH2 1.5RC1 with the following NEW features:

- MPI 2.2 standard compliant
- Based on MPICH2 1.2.1p1
- OFA-IB-Nemesis interface design
  - OpenFabrics InfiniBand network module support for the MPICH2 Nemesis modular design
  - Support for high-performance intra-node shared memory communication provided by the Nemesis design
  - Adaptive RDMA Fastpath with Polling Set for high-performance inter-node communication
  - Shared Receive Queue (SRQ) support with flow control; uses significantly less memory for the MPI library
  - Header caching
  - Advanced AVL tree-based resource-aware registration cache
  - Memory hook support provided by integration with the ptmalloc2 library. This provides safe release of memory to the operating system and is expected to benefit the memory usage of applications that heavily use malloc and free operations.
  - Support for the TotalView debugger
  - Shared library support for running existing binary MPI application programs
  - ROMIO support for MPI-IO
  - Support for additional features (such as hwloc, hierarchical collectives, one-sided, multithreading, etc.), as included in the MPICH2 1.2.1p1 Nemesis channel
- Flexible process manager support
  - mpirun_rsh works with any of the eight interfaces (CH3 and Nemesis channel-based), including OFA-IB-Nemesis, TCP/IP-CH3 and TCP/IP-Nemesis
  - The Hydra process manager works with any of the eight interfaces (CH3 and Nemesis channel-based), including OFA-IB-CH3, OFA-iWARP-CH3, OFA-RoCE-CH3 and TCP/IP-CH3
- MPIEXEC_TIMEOUT is honored by mpirun_rsh (see the sketch after the fix list below)

This release also contains multiple bug fixes since MVAPICH2 1.4.1. A summary of the major fixes is as follows:

- Fix compilation error when configured with `--enable-thread-funneled'
- Fix MPE functionality; thanks to Anthony Chan <chan@mcs.anl.gov> for reporting the problem and providing the resolving patch
- Cleanup after a failure in the init phase is handled better by mpirun_rsh
- Path determination is correctly handled by mpirun_rsh when DPM is used
- Shared libraries are correctly built (again)
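As a usage sketch for two of the items above (the OFA-IB-Nemesis build and the MPIEXEC_TIMEOUT behavior of mpirun_rsh): the device string follows the MVAPICH2 user guide, while the prefix, host names, process count, timeout value and program name are placeholders for illustration only.

    # Build with the new Nemesis-InfiniBand netmod (OFA-IB-Nemesis).
    ./configure --prefix=/opt/mvapich2 --with-device=ch3:nemesis:ib
    make && make install

    # mpirun_rsh now honors MPIEXEC_TIMEOUT (in seconds): the job is
    # aborted if it runs longer. Hosts and binary are hypothetical.
    MPIEXEC_TIMEOUT=120 mpirun_rsh -np 4 node1 node2 node3 node4 ./a.out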
For downloading MVAPICH2 1.5RC1, the associated user guide and SVN access, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All feedback, including bug reports, hints for performance tuning, patches and enhancements, is welcome. Please post it to the mvapich-discuss mailing list.

We are also happy to report that the number of organizations using MVAPICH/MVAPICH2 (and registered at the MVAPICH site) has crossed 1,100 worldwide (in 58 countries). The MVAPICH team extends its thanks to all these organizations.

Thanks,

The MVAPICH Team

_______________________________________________
mpich-discuss mailing list
mpich-discuss@mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss