problem with installation

Bartlomiej Burba scanya at man.poznan.pl
Thu Feb 8 09:04:49 CST 2018


Hello,


I've tried to find information on Google about how to sort this out, but there is
no information about this kind of error.
My software:

   1) gmp/5.1.3(default)
   2) mpfr/3.1.2(default)
   3) libmpc/1.0.1(default)
   4) gcc/6.2.0
   5) icc/17.0.1(default)
   6) impi/2017.1.132(default)
   7) ifort/17.0.1(default)
   8) szip/2.1.1
   9) zlib/1.2.11_icc-17.0.1
  10) hdf5/1.8.20_impi-2017_icc-17.0.1

./configure --enable-large-file-test --enable-large-req --enable-shared --prefix=/opt/exp_soft/local/generic/parallel-netcdf/1.9.0

Could you help me?

[scanya@e0001 largefile]$ cat test-suite.log
==========================================================
    parallel-netcdf 1.9.0: test/largefile/test-suite.log
==========================================================

# TOTAL: 7
# PASS:  5
# SKIP:  0
# XFAIL: 0
# FAIL:  2
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: large_files
=================


*** Testing large files, slowly.
*** Creating large file /mnt/lustre/scanya/./large_files.nc...
line 146 of large_files.c: NetCDF: Index exceeds dimension bound
FAIL large_files (exit status: 1)
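
In case it helps to narrow this down, here is a minimal standalone probe (my own sketch, not the bundled large_files.c; the output path and the 5 GiB variable size are just placeholders) that writes and reads a few bytes straddling the 2 GiB offset of one large variable, to check whether large offsets work at all on this Lustre mount with this impi build:

/* big_probe.c -- minimal PnetCDF large-offset check (a sketch, not the
 * bundled test). Build, e.g.: mpiicc big_probe.c -lpnetcdf -o big_probe
 * Run, e.g.:                  mpiexec -n 1 ./big_probe
 * The path below and the 5 GiB variable size are placeholders. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>
#include <pnetcdf.h>

#define CHECK(err) do { if ((err) != NC_NOERR) { \
    fprintf(stderr, "line %d: %s\n", __LINE__, ncmpi_strerror(err)); \
    MPI_Abort(MPI_COMM_WORLD, 1); } } while (0)

int main(int argc, char **argv)
{
    int i, ncid, dimid, varid, err;
    MPI_Offset len = (MPI_Offset)5 * 1024 * 1024 * 1024;  /* 5 GiB */
    MPI_Offset start, count = 16;
    unsigned char buf[16], rbuf[16];

    MPI_Init(&argc, &argv);
    for (i = 0; i < 16; i++) buf[i] = (unsigned char)('a' + i);

    /* NC_64BIT_DATA selects the CDF-5 format, which permits
     * dimensions and variables larger than 4 GiB */
    err = ncmpi_create(MPI_COMM_WORLD, "/mnt/lustre/scanya/big_probe.nc",
                       NC_CLOBBER | NC_64BIT_DATA, MPI_INFO_NULL, &ncid);
    CHECK(err);
    err = ncmpi_def_dim(ncid, "n", len, &dimid);                  CHECK(err);
    err = ncmpi_def_var(ncid, "v", NC_UBYTE, 1, &dimid, &varid);  CHECK(err);
    err = ncmpi_enddef(ncid);                                     CHECK(err);

    /* write 16 bytes straddling the 2 GiB offset, then read them back */
    start = ((MPI_Offset)1 << 31) - 8;
    err = ncmpi_put_vara_uchar_all(ncid, varid, &start, &count, buf);  CHECK(err);
    memset(rbuf, 0, sizeof(rbuf));
    err = ncmpi_get_vara_uchar_all(ncid, varid, &start, &count, rbuf); CHECK(err);
    printf("readback across 2 GiB boundary: %s\n",
           memcmp(buf, rbuf, sizeof(buf)) ? "MISMATCH" : "ok");

    err = ncmpi_close(ncid);  CHECK(err);
    MPI_Finalize();
    return 0;
}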

FAIL: large_coalesce
====================

*** TESTING C   large_coalesce for skip filetype buftype coalesce ------
MPI error (MPI_File_read_at_all) : Other I/O error , error stack:
ADIOI_NFS_READSTRIDED(545): Other I/O error Bad address
Error at line 182 in large_coalesce.c: (NC_EREAD)
0 (at line 187): expect buf[1073741814]=97 but got 0
0 (at line 187): expect buf[1073741815]=98 but got 0
0 (at line 187): expect buf[1073741816]=99 but got 0
0 (at line 187): expect buf[1073741817]=100 but got 0
0 (at line 187): expect buf[1073741818]=101 but got 0
0 (at line 187): expect buf[1073741819]=102 but got 0
0 (at line 187): expect buf[1073741820]=103 but got 0
0 (at line 187): expect buf[1073741821]=104 but got 0
0 (at line 187): expect buf[1073741822]=105 but got 0
0 (at line 187): expect buf[1073741823]=106 but got 0
0 (at line 187): expect buf[1073741824]=107 but got 0
0 (at line 187): expect buf[1073741825]=108 but got 0
0 (at line 187): expect buf[1073741826]=109 but got 0
0 (at line 187): expect buf[1073741827]=110 but got 0
0 (at line 187): expect buf[1073741828]=111 but got 0
0 (at line 187): expect buf[1073741829]=112 but got 0
0 (at line 187): expect buf[1073741830]=113 but got 0
0 (at line 187): expect buf[1073741831]=114 but got 0
0 (at line 187): expect buf[1073741832]=115 but got 0
0 (at line 187): expect buf[1073741833]=116 but got 0
0 (at line 195): expect buf[2147483638]=65 but got 0
0 (at line 195): expect buf[2147483639]=66 but got 0
0 (at line 195): expect buf[2147483640]=67 but got 0
0 (at line 195): expect buf[2147483641]=68 but got 0
0 (at line 195): expect buf[2147483642]=69 but got 0
0 (at line 195): expect buf[2147483643]=70 but got 0
0 (at line 195): expect buf[2147483644]=71 but got 0
0 (at line 195): expect buf[2147483645]=72 but got 0
0 (at line 195): expect buf[2147483646]=73 but got 0
0 (at line 195): expect buf[2147483647]=74 but got 0
0 (at line 195): expect buf[2147483648]=75 but got 0
0 (at line 195): expect buf[2147483649]=76 but got 0
0 (at line 195): expect buf[2147483650]=77 but got 0
0 (at line 195): expect buf[2147483651]=78 but got 0
0 (at line 195): expect buf[2147483652]=79 but got 0
0 (at line 195): expect buf[2147483653]=80 but got 0
0 (at line 195): expect buf[2147483654]=81 but got 0
0 (at line 195): expect buf[2147483655]=82 but got 0
0 (at line 195): expect buf[2147483656]=83 but got 0
0 (at line 195): expect buf[2147483657]=84 but got 0
fail with 41 mismatches
FAIL large_coalesce (exit status: 1)
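
One thing I notice in the trace above: the read failure comes from ADIOI_NFS_READSTRIDED, i.e. the MPI-IO layer picked ROMIO's NFS driver even though the file sits under /mnt/lustre. If it is useful, here is a tiny sketch to test whether pinning the driver changes the behavior (assumptions: ROMIO's file-system prefix convention, e.g. "lustre:" or "ufs:", is honored by this impi build, and the path is a placeholder):

/* fs_probe.c -- check which ROMIO driver handles the Lustre mount.
 * Build, e.g.: mpiicc fs_probe.c -o fs_probe ; run with mpiexec -n 1.
 * The "lustre:" prefix asks ROMIO for its Lustre driver instead of
 * letting it autodetect; "ufs:" would force the generic Unix driver. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int err;

    MPI_Init(&argc, &argv);

    err = MPI_File_open(MPI_COMM_WORLD, "lustre:/mnt/lustre/scanya/probe.dat",
                        MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    if (err != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "open failed: %s\n", msg);
    } else {
        printf("open with lustre: prefix succeeded\n");
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}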

Thank you!
----
Regards,

Bartek Burba

System Administrator
Poznan Supercomputing & Networking Center
