<div dir="ltr">I am thinking I need to get a manual build of SuperLU but let me know if you have any suggestions.<div>Thanks,</div><div>Mark</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 5, 2014 at 11:59 AM, Satish Balay <span dir="ltr"><<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">No log attached - so I can't comment..<br>
<span class="HOEnZb"><font color="#888888"><br>
satish<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
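A rough sketch of what such a manual SuperLU build might look like (the version number, paths, and make targets below are illustrative assumptions, not taken from this thread):

# build SuperLU by hand with the MIC flags (sketch only)
cd superlu-4.3
# edit make.inc: set CC = icc, add -mmic -mkl -fp-model precise to CFLAGS,
# and point BLASLIB at MKL
make superlulib
# then point PETSc at the manual build instead of --download-superlu, e.g.:
#   --with-superlu-include=/path/to/superlu-4.3/SRC
#   --with-superlu-lib=/path/to/superlu-4.3/lib/libsuperlu_4.3.a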
On Wed, 5 Nov 2014, Mark Adams wrote:

> Thanks, it is working now. Links fail with a SuperLU error. I'm guessing
> I need to use a manually built SuperLU.
> Mark
>
> On Tue, Nov 4, 2014 at 9:13 PM, Satish Balay <balay@mcs.anl.gov> wrote:
>
> > >>>>>>
> > checking whether the C compiler works...configure: error: in
> > `/chos/global/u2/m/madams/petsc_private/arch-knc-dbg64/externalpackages/hypre-2.9.1a/src':
> > configure: error: cannot run C compiled programs.
> > If you meant to cross compile, use `--host'.
> > <<<<<
> >
> > we really don't have support for batch builds of
> > externalpackages. [they usually work - which is great - but it won't
> > always work].
> >
> > So it has to be installed manually.
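
For reference, a rough sketch of a manual, cross-compiled hypre install for the coprocessor (the --host triplet, install prefix, and PETSc options below are illustrative assumptions, not taken from this thread):

# cross-compile hypre by hand for the MIC card (sketch only)
cd hypre-2.9.1a/src
./configure CC=mpiicc CFLAGS="-mmic -mkl -fp-model precise" \
  --host=x86_64-k1om-linux --prefix=$HOME/hypre-mic
make install
# then point PETSc at it instead of --download-hypre, e.g.:
#   --with-hypre-include=$HOME/hypre-mic/include
#   --with-hypre-lib=$HOME/hypre-mic/lib/libHYPRE.a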
> >
> > BTW: --with-cc="mpiicc " --with-cxx="mpiicpc " has extra
> > 'spaces'. Perhaps it's not causing problems yet. [cmake usually barfs
> > with this type of usage]
> >
> > Satish
> >
> > On Tue, 4 Nov 2014, Mark Adams wrote:
> >
> > > Thanks, getting better ... I now get this error with hypre. hypre is not
> > > critical here but if someone has ideas ...
> > >
> > > On Tue, Nov 4, 2014 at 7:43 PM, Satish Balay <balay@mcs.anl.gov> wrote:
> > >
> > > > --CFLAGS=""-mmic -mkl -fp-model precise""<br>
> > > ><br>
> > > > remove the extra set of quotes from your ../arch-knc-dbg64.py.<br>
> > > > For eg: petsc/config/examples/arch-cray-xt6-pkgs-opt.py has:<br>
> > > ><br>
> > > > '--COPTFLAGS=-fast -mp',<br>
> > > ><br>
> > > > The equivalent shell would be:<br>
> > > ><br>
> > > > --COPTFLAGS="-fast -mp"<br>
> > > ><br>
> > > > Satish<br>
> > > ><br>
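Applied here, the flag entries in arch-knc-dbg64.py would presumably end up looking something like this (only the flag lines are sketched; the rest of the file is not shown in the thread):

configure_options = [
  '--CFLAGS=-mmic -mkl -fp-model precise',
  '--CXXFLAGS=-mmic -mkl -fp-model precise',
  # ... remaining options unchanged ...
]
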
> > > > On Tue, 4 Nov 2014, Mark Adams wrote:
> > > >
> > > > > Thanks, this helped. I am now getting METIS errors.
> > > > >
> > > > > On Tue, Nov 4, 2014 at 6:30 PM, Satish Balay <balay@mcs.anl.gov> wrote:
> > > > >
> > > > > > Ah - didn't check the configure command closely enough..
> > > > > >
> > > > > > > --with-mpicc=/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/bin/mpicc
> > > > > > > --with-mpicxx=/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/bin/mpicxx
> > > > > > > --with-mpif90=/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/bin/mpif90
> > > > > >
> > > > > > petsc configure doesn't provide any such options..
> > > > > >
> > > > > > > --with-mpi-lib=/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/lib
> > > > > >
> > > > > > This would be the wrong way to provide the MPI library.
> > > > > >
> > > > > >
> > > > > > The correct way to use Intel MPI is:
> > > > > >
> > > > > > --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort [i.e. no
> > > > > > --with-mpi-include or --with-mpi-lib options]
> > > > > >
> > > > > > But as Richard mentioned - the wrappers in intel impi-5 are
> > > > > > broken. An alternative is to use impi-4 as suggested.
> > > > > >
> > > > > > Or something like the following (for compilers/mpi):
> > > > > >
> > > > > > ./configure --with-cc=icc --with-fc=ifort --with-cxx=0
> > > > > > --with-mpi-include=/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/include
> > > > > > --with-mpi-lib="-L/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/lib/release -L/global/babbage/nsg/opt/intel/impi/5.0.1.035/intel64/lib -lmpifort -lmpi -lmpigi -ldl -lrt -lpthread"
> > > > > >
> > > > > > Satish
> > > > > >
> > > > > > On Tue, 4 Nov 2014, Richard Mills wrote:
> > > > > >
> > > > > > > Hi Mark,
> > > > > > >
> > > > > > > I noticed that you are using the wrong compiler wrappers. Instead of
> > > > > > > 'mpicc', 'mpif90', and 'mpicxx' (which will invoke GNU compilers), you
> > > > > > > probably want 'mpiicc', 'mpiifort', and 'mpiicpc' (which invoke the Intel
> > > > > > > compilers).
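
A quick way to check which underlying compiler a given wrapper drives is the wrappers' -show option (a sketch; the exact output depends on the Intel MPI version):

mpicc -show    # with Intel MPI this typically prints a gcc command line
mpiicc -show   # this should print an icc command line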
> > > > > > >
> > > > > > > You will probably also want to unload the IMPI 5.x module and load the
> > > > > > > IMPI 4.1.3 module right now because of a stupid bug in the compiler
> > > > > > > wrappers (for which I've filed a bug report--it will be fixed in the next
> > > > > > > IMPI release, sometime before SC14).
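
On a system using environment modules, the swap Richard describes would look roughly like this (the module names here are site-specific guesses, not taken from this thread):

module unload impi
module load impi/4.1.3
module list    # confirm the 4.1.3 wrappers are now in PATH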
> > > > > > >
> > > > > > > --Richard
> > > > > > >
> > > > > > > On Tue, Nov 4, 2014 at 12:59 PM, Mark Adams <mfadams@lbl.gov> wrote:
> > > > > > >
> > > > > > > > Thanks Victor,
> > > > > > > >
> > > > > > > > I am getting a cc error. Any ideas?
> > > > > > > > Mark
> > > > > > > >
> > > > > > > >
> > > > > > > > On Tue, Nov 4, 2014 at 12:38 PM, Victor Eijkhout <eijkhout@tacc.utexas.edu> wrote:
> > > > > > > >
> > > > > > > >>
> > > > > > > >> > On Nov 4, 2014, at 11:32 AM, Mark Adams <mfadams@lbl.gov> wrote:
> > > > > > > >> >
> > > > > > > >> > Has anyone built PETSc on KNC (Babbage at NERSC to be specific)?
> > > > > > > >>
> > > > > > > >> At TACC under intel 14:
> > > > > > > >>
> > > > > > > >> CEE_OPTIONS="-mmic -mkl -fp-model precise"
> > > > > > > >> ./configure --PETSC_ARCH=mic --with-fc=0 --with-debug=0 \
> > > > > > > >> --with-batch=1 --CPPFLAGS=-mmic \
> > > > > > > >> --CFLAGS="${CEE_OPTIONS}" --CXXFLAGS="${CEE_OPTIONS}" --FFLAGS="${CEE_OPTIONS}" \
> > > > > > > >> --with-mpi=1 --known-mpi-shared-libraries=1 \
> > > > > > > >> --with-mpi-include=${MPICH_HOME}/mic/include \
> > > > > > > >> --with-mpi-lib=${MPICH_HOME}/mic/lib \
> > > > > > > >> --with-mpicc=/opt/apps/intel13/impi/4.1.0.030/intel64/bin/mpicc \
> > > > > > > >> --with-mpicxx=/opt/apps/intel13/impi/4.1.0.030/intel64/bin/mpicxx \
> > > > > > > >> --with-mpif90=/opt/apps/intel13/impi/4.1.0.030/intel64/bin/mpif90
> > > > > > > >>
> > > > > > > >> That first line is still giving me problems. Just nix the fp-model.
> > > > > > > >>
> > > > > > > >> Victor.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > >
> >
> >
>