This could be a simple configuration issue. Can you please provide the logs generated during the failed build? I'd like to see the config.log file from the top-level directory as well as the output printed to the screen during the configure and make steps. You can capture that output with tee, e.g. `./configure | tee config-mine.log' and `make | tee make-mine.log', as sketched below. Thanks in advance.
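A minimal sketch of those capture steps, run from the top-level source directory; the 2>&1 redirections and the log file names are only suggestions so that error messages end up in the files as well:

  # capture both stdout and stderr from each step (file names are only examples)
  ./configure 2>&1 | tee config-mine.log
  make 2>&1 | tee make-mine.log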
On Fri, May 7, 2010 at 6:28 AM, PRATIK AGRAWAL <pratik_9314@yahoo.co.in> wrote:
At the time of installation I got an error for cxx; after disabling it, the installation proceeded successfully. Can you please guide me on how to configure cxx manually?
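For reference, the configure invocations involved were presumably along these lines; --disable-cxx/--enable-cxx and the CXX variable are standard MPICH2-style configure options, but the exact flags and the compiler path below are assumptions:

  # assumed invocation that skipped the C++ bindings
  ./configure --disable-cxx
  # re-enabling them with an explicit C++ compiler (path is only an example)
  ./configure --enable-cxx CXX=/usr/bin/g++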
Pratik Agrawal
Contact: +91-94287-63145
Twitter: http://www/twitter.com/praxvoper
http://www.pratikagrawal.x10.mx
--- On Wed, 5/5/10, Dhabaleswar Panda <panda@cse.ohio-state.edu> wrote:

From: Dhabaleswar Panda <panda@cse.ohio-state.edu>
Subject: [mpich-discuss] MVAPICH2 1.5RC1 release is available with Nemesis-InfiniBand Netmod interface
To: mpich-discuss@mcs.anl.gov
Date: Wednesday, 5 May, 2010, 11:05 AM
The MVAPICH team has announced the release of MVAPICH2 1.5RC1 today. It supports a new Nemesis-InfiniBand Netmod interface, built on the OpenFabrics InfiniBand Gen2 layer. This interface is in addition to the various existing CH3-based interfaces for InfiniBand, iWARP, RoCE, uDAPL and PSM. This release also offers flexible support for using either the Hydra or the mpirun_rsh process manager with any of the CH3-based or Nemesis-based interfaces.

This release is based on MPICH2 1.2.1p1.

The detailed release announcement is included below.

We hope this new Nemesis-InfiniBand Netmod design will help MPICH2 users harness maximum performance from their InfiniBand clusters.

Looking forward to feedback from MPICH2 users on their experiences with this new Nemesis-InfiniBand Netmod interface.

Thanks,

DK Panda
(On Behalf of the MVAPICH Team)

=========================
The MVAPICH team is pleased to announce the release of MVAPICH2 1.5RC1 with the following NEW features:

- MPI 2.2 standard compliant
- Based on MPICH2 1.2.1p1
- OFA-IB-Nemesis interface design
  - OpenFabrics InfiniBand network module support for the MPICH2 Nemesis modular design
  - Support for high-performance intra-node shared memory communication provided by the Nemesis design
  - Adaptive RDMA Fastpath with Polling Set for high-performance inter-node communication
  - Shared Receive Queue (SRQ) support with flow control; uses significantly less memory for the MPI library
  - Header caching
  - Advanced AVL tree-based resource-aware registration cache
  - Memory hook support provided by integration with the ptmalloc2 library. This provides safe release of memory to the operating system and is expected to benefit the memory usage of applications that heavily use malloc and free operations.
  - Support for the TotalView debugger
  - Shared library support for existing binary MPI application programs to run
  - ROMIO support for MPI-IO
  - Support for additional features (such as hwloc, hierarchical collectives, one-sided, multithreading, etc.), as included in the MPICH2 1.2.1p1 Nemesis channel
- Flexible process manager support
  - mpirun_rsh works with any of the eight interfaces (CH3 and Nemesis channel-based), including OFA-IB-Nemesis, TCP/IP-CH3 and TCP/IP-Nemesis
  - The Hydra process manager works with any of the eight interfaces (CH3 and Nemesis channel-based), including OFA-IB-CH3, OFA-iWARP-CH3, OFA-RoCE-CH3 and TCP/IP-CH3
- MPIEXEC_TIMEOUT is honored by mpirun_rsh

This release also contains multiple bug fixes since MVAPICH2-1.4.1. A summary of the major fixes is as follows:

- Fixed a compilation error when configured with `--enable-thread-funneled'
- Fixed MPE functionality; thanks to Anthony Chan <chan@mcs.anl.gov> for reporting the issue and providing the resolving patch
- Cleanup after a failure in the init phase is handled better by mpirun_rsh
- Path determination is handled correctly by mpirun_rsh when DPM is used
- Shared libraries are correctly built (again)
For downloading MVAPICH2 1.5RC1, the associated user guide, and access to the SVN repository, please visit the following URL:

http://mvapich.cse.ohio-state.edu/

All feedback, including bug reports, hints for performance tuning, patches and enhancements, is welcome. Please post it to the mvapich-discuss mailing list.

We are also happy to report that the number of organizations using MVAPICH/MVAPICH2 (and registered at the MVAPICH site) has crossed 1,100 worldwide (in 58 countries). The MVAPICH team extends its thanks to all these organizations.

Thanks,

The MVAPICH Team

_______________________________________________
mpich-discuss mailing list
mpich-discuss@mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
--
Jonathan Perkins