[petsc-users] valgrind

paul zhang paulhuaizhang at gmail.com
Mon Dec 1 13:22:33 CST 2014


Thanks for checking.

My program is actually written in C++, so Fortran may not be necessary. I
compiled PETSc on our university cluster, where an MPI package is already
installed as a universal module, but it does not seem to be compatible with
PETSc. So I am trying to install my own version of MPI and link PETSc against it.

When I compile PETSc with its default configuration, it automatically
downloads MPICH:
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
--download-fblaslapack --download-mpich

I was wondering whether this does the same thing as installing MPI on my own.
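Concretely, the two routes look like this (a sketch built from the flags already quoted in this thread; `--download-mpich` builds MPICH inside the PETSc tree and configures against it, which is functionally the same idea as pointing `--with-mpi-dir` at a healthy local install):

```shell
# Option 1: let PETSc download and build MPICH under $PETSC_ARCH.
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
    --download-fblaslapack --download-mpich

# Option 2: point PETSc at an MPI built locally; configure then picks up
# the mpicc/mpif90 wrappers from that prefix (path from this thread).
./configure --download-fblaslapack \
    --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3
```

Whether Option 2 works depends on those wrappers being healthy, which is exactly what configure is probing when it fails.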

Thanks again.
Paul

Huaibao (Paul) Zhang
*Gas Surface Interactions Lab*
Department of Mechanical Engineering
University of Kentucky,
Lexington,
KY, 40506-0503
*Office*: 216 Ralph G. Anderson Building
*Web*: gsil.engineering.uky.edu

On Mon, Dec 1, 2014 at 2:06 PM, Matthew Knepley <knepley at gmail.com> wrote:

> On Mon, Dec 1, 2014 at 1:02 PM, paul zhang <paulhuaizhang at gmail.com>
> wrote:
>
>> I believe I have installed OpenMPI successfully...
>>
>
> The Fortran wrapper does not seem to correctly link the libraries:
>
> ERROR while running executable: Could not execute
> "/tmp/petsc-rVaKfJ/config.setCompilers/conftest":
> /tmp/petsc-rVaKfJ/config.setCompilers/conftest: symbol lookup error:
> /home/hzh225/LIB_CFD/openmpi-1.8.3/lib/libmpi_mpifh.so.2: undefined symbol:
> mpi_fortran_weights_empty
>
> Or else you need something in your LD_LIBRARY_PATH. Either way, do you
> need Fortran? If so,
> use --download-mpich; otherwise use --with-fc=0.
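The LD_LIBRARY_PATH possibility Matt mentions can be checked quickly; a minimal sketch using the OpenMPI prefix from this thread (substitute your own install location):

```shell
# OpenMPI prefix from this thread; adjust to your install.
MPI_PREFIX=/home/hzh225/LIB_CFD/openmpi-1.8.3

# Prepend the MPI lib directory, handling an initially unset LD_LIBRARY_PATH.
export LD_LIBRARY_PATH="$MPI_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Verify the directory is now on the search path.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -qx "$MPI_PREFIX/lib" \
    && echo "MPI lib dir on LD_LIBRARY_PATH"
```

If the symbol lookup error persists with the lib directory on the path, the wrapper library itself is broken and rebuilding (or using --download-mpich) is the way out.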
>
>   Thanks,
>
>     Matt
>
>
>> Attached.
>>
>> Thanks,
>> Paul
>>
>> On Mon, Dec 1, 2014 at 1:59 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>>
>>>   Send configure.log for the ./configure with
>>>
>>> ./configure  --download-fblaslapack
>>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0
>>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3
>>>
>>>
>>> Barry
>>>
>>>
>>> > On Dec 1, 2014, at 12:55 PM, paul zhang <paulhuaizhang at gmail.com>
>>> wrote:
>>> >
>>> > Matt,
>>> >
>>> > Sorry to poke you again. I am in a dilemma.
>>> >
>>> > If I use
>>> >
>>> > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77
>>> --download-fblaslapack
>>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0
>>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/
>>> >
>>> >
>>> > Then I am told to
>>> >
>>> > TESTING: checkMPICompilerOverride from
>>> config.setCompilers(config/BuildSystem/config/setCompilers.py:1501)
>>>
>>>
>>>  *******************************************************************************
>>> >          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log
>>> for details):
>>> >
>>> -------------------------------------------------------------------------------
>>> > --with-cc=mpicc is specified with
>>> --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However
>>> /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the
>>> prefered compiler! Suggest not specifying --with-cc option so that
>>> configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead.
>>> >
>>> *******************************************************************************
>>> >
>>> >
>>> > However if I skip those compilers,
>>> >
>>> > ./configure  --download-fblaslapack
>>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0
>>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3
>>> >
>>> >
>>> > My problem now is
>>> >
>>> >
>>> ===============================================================================
>>> >              Configuring PETSc to compile on your system
>>> >
>>> ===============================================================================
>>> > TESTING: checkFortranCompiler from
>>> config.setCompilers(config/BuildSystem/config/setCompilers.py:910)
>>>
>>>
>>> *******************************************************************************
>>> >                     UNABLE to EXECUTE BINARIES for ./configure
>>> >
>>> -------------------------------------------------------------------------------
>>> > Cannot run executables created with FC. If this machine uses a batch
>>> system
>>> > to submit jobs you will need to configure using ./configure with the
>>> additional option  --with-batch.
>>> >  Otherwise there is problem with the compilers. Can you compile and
>>> run code with your C/C++ (and maybe Fortran) compilers?
>>> >
>>> >
>>> > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>> > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang <paulhuaizhang at gmail.com>
>>> wrote:
>>> > That is my new configuration. Is that OK?
>>> >
>>> > export PETSC_DIR=`pwd`
>>> > export PETSC_ARCH=linux-gnu-intel
>>> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
>>> --download-fblaslapack --download-mpich
>>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0
>>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/
>>> >
>>> > That looks correct.
>>> >
>>> > When I say "using PETSc makefiles", I mean for your own project. You
>>> appear to be using CMake.
>>> >
>>> >   Matt
>>> >
>>> > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang <paulhuaizhang at gmail.com>
>>> wrote:
>>> > I did use the PETSc makefiles. Should I include the valgrind path in
>>> my own make file again?
>>> >
>>> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd
>>> > /home/hzh225/LIB_CFD/nP/petsc-3.5.2
>>> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs
>>> > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include
>>> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include
>>> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include
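Those `-I` flags can be fed into a CMake-based build after stripping the prefix; a minimal sketch using the three paths printed above (in a real build you would capture them with `INCS=$(make getincludedirs)` from the PETSc directory):

```shell
# Include flags as printed by `make getincludedirs` in this thread.
INCS='-I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include'

# Strip the -I prefix from each flag to get bare directories, e.g. for
# CMake's include_directories() command.
for flag in $INCS; do
  printf '%s\n' "${flag#-I}"
done
```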
>>> >
>>> > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>> > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang <paulhuaizhang at gmail.com>
>>> wrote:
>>> > Matt,
>>> >
>>> > Thanks for your reply. I am able to compile PETSc, and I went through
>>> the default tests. Now, when I build my own code, I run into problems.
>>> >
>>> > I am assuming that you put flags in your makefiles rather than using
>>> the PETSc makefiles. You need all the includes you get from
>>> >
>>> >    make getincludedirs
>>> >
>>> >     Matt
>>> >
>>> > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all
>>> > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o
>>> > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760):
>>> catastrophic error: cannot open source file "valgrind/valgrind.h"
>>> >   #  include <valgrind/valgrind.h>
>>> >                                   ^
>>> >
>>> > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc
>>> (code 4)
>>> > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4
>>> > make[1]: *** [CMakeFiles/kats.dir/all] Error 2
>>> > make: *** [all] Error 2
>>> >
>>> >
>>> > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>> > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang <paulhuaizhang at gmail.com>
>>> wrote:
>>> > Hi All,
>>> >
>>> > How do I enable the valgrind flag? I installed valgrind myself locally.
>>> >
>>> >       It appears you do not have valgrind installed on your system.
>>> >       We HIGHLY recommend you install it from www.valgrind.org,
>>> >       or install valgrind-devel or equivalent using your package
>>> >       manager. Then rerun ./configure.
>>> >
>>> > We could not find the valgrind header (valgrind.h). You can use
>>> >
>>> >   --with-valgrind-dir=<path>
>>> >
>>> > so that it can find the path/include/valgrind/valgrind.h
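Before rerunning ./configure, the prefix handed to --with-valgrind-dir can be sanity-checked directly; a small sketch (the helper name is made up for illustration, the path is the one used elsewhere in this thread):

```shell
# Succeeds iff <prefix>/include/valgrind/valgrind.h exists, which is the
# layout --with-valgrind-dir expects. Helper name is illustrative only.
check_valgrind_prefix() {
  [ -f "$1/include/valgrind/valgrind.h" ]
}

check_valgrind_prefix /share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 \
    && echo "valgrind headers found" || echo "valgrind headers missing"
```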
>>> >
>>> >   Thanks,
>>> >
>>> >     Matt
>>> >
>>> > Thanks,
>>> > Paul
>>> >
>>> >
>>> > Huaibao (Paul) Zhang
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> > -- Norbert Wiener
>>> >
>>> >
>>>
>>>
>>
>
>


More information about the petsc-users mailing list