[petsc-users] problems after glibc upgrade to 2.17-157

Matthew Knepley knepley at gmail.com
Tue Jan 3 09:36:44 CST 2017


On Tue, Jan 3, 2017 at 9:04 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:

>
> I've been using petsc-3.7.4 with intel mpi and compilers,
> superlu_dist, metis and parmetis on a cluster running
> SL7. Everything was working fine until SL7 got an update where
> glibc was upgraded from 2.17-106 to 2.17-157.
>

I cannot see the error in your log. We previously fixed a bug in this
error reporting:


https://bitbucket.org/petsc/petsc/commits/32cc76960ddbb48660f8e7c667e293c0ccd0e7d7

in August. Is it possible that your PETSc is older than that? Could you
apply that patch, or run configure with 'master'?
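
A minimal sketch of both options, assuming your PETSc is a git clone
(the directory name and configure options are placeholders):

  cd petsc
  git fetch origin
  # Option 1: apply just that fix on top of your current tree
  git cherry-pick 32cc76960ddbb48660f8e7c667e293c0ccd0e7d7
  # Option 2: switch to the development branch instead
  git checkout master && git pull
  ./configure <your usual options>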

My guess is that this is a dynamic library path problem, as it always is
after upgrades.
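
A quick first check, e.g. for the failing gpmetis binary (the binary
name and variable below are just examples):

  # See which libc and MPI libraries the binary actually resolves at runtime
  ldd $(which gpmetis)
  # Confirm the runtime search path still points at the intended compiler/MPI trees
  echo $LD_LIBRARY_PATH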

  Thanks,

    Matt


> This update seems to have broken (at least) parmetis: the
> standalone binary gpmetis started to give a segmentation
> fault. The core dump shows this:
>
> Core was generated by `gpmetis'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00002aaaac6b865e in memmove () from /lib64/libc.so.6
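>
> (For what it's worth, more context than the single memmove frame can
> be recovered from the core with gdb; the core file name below is an
> assumption:)
>
> # Print a full backtrace from the core file
> gdb -batch -ex 'bt full' $(which gpmetis) core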
>
> That's when I decided to recompile, but to my surprise I cannot
> even get past the configure stage (log attached)!
>
> *******************************************************************************
>                     UNABLE to EXECUTE BINARIES for ./configure
> -------------------------------------------------------------------------------
> Cannot run executables created with FC. If this machine uses a batch system
> to submit jobs you will need to configure using ./configure with the
> additional option  --with-batch.
>  Otherwise there is problem with the compilers. Can you compile and run
> code with your compiler 'mpif90'?
> See http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf
> *******************************************************************************
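>
> (A minimal MPI test along these lines can check mpif90 outside of
> configure; the file name and process count are arbitrary:)
>
> cat > hello.f90 <<'EOF'
> program hello
>   use mpi
>   implicit none
>   integer :: ierr, rank
>   call MPI_Init(ierr)
>   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
>   print *, 'hello from rank', rank
>   call MPI_Finalize(ierr)
> end program hello
> EOF
> mpif90 hello.f90 -o hello
> mpirun -np 2 ./hello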
>
> Note the following:
>
> 1) Configure was done with the exact same options that worked
> fine before the update of SL7.
>
> 2) The intel mpi and compilers are exactly the same as before the
> update of SL7.
>
> 3) The cluster does not require a batch system to run code.
>
> 4) I can compile and run code with mpif90 on this cluster.
>
> 5) The problem also occurs on a workstation running SL7.
>
> Any clues on how to proceed?
> Chris
>
>
> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl
>
> MARIN news: http://www.marin.nl/web/News/News-items/Comparison-of-uRANS-and-BEMBEM-for-propeller-pressure-pulse-prediction.htm
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener