[Darshan-users] file count summary
Harms, Kevin N.
harms at alcf.anl.gov
Wed May 28 11:17:23 CDT 2014
jg,
harms at sirsteve:/tmp$ for f in piccinal_CRAY.SANTIS.darsh.*.gz; do
~/working/darshan/install/bin/darshan-parser $f | grep CP_MAX_BYTE_READ;
done
-1 12149853240961702863 CP_MAX_BYTE_READ 13783683607 ...e08/GridDensity /scratch/daint lustre
-1 12149853240961702863 CP_MAX_BYTE_READ 1659593751 ...e08/GridDensity /scratch/daint lustre
-1 12149853240961702863 CP_MAX_BYTE_READ 138317591 ...e08/GridDensity /scratch/daint lustre
harms at sirsteve:/tmp$ for f in piccinal_CRAY.SANTIS.darsh.*.gz; do
~/working/darshan/install/bin/darshan-parser $f | grep CP_BYTES_READ; done
-1 12149853240961702863 CP_BYTES_READ 1728028352 ...e08/GridDensity /scratch/daint lustre
-1 12149853240961702863 CP_BYTES_READ 26161152 ...e08/GridDensity /scratch/daint lustre
-1 12149853240961702863 CP_BYTES_READ 2076672 ...e08/GridDensity /scratch/daint lustre
CP_MAX_BYTE_READ is the highest byte offset that was read in the file, so it
would seem that in every case only a small portion of the file is read, and
that portion shrinks as the core count grows. Even the largest value above,
13783683607 bytes (~12.8 GiB), is well short of a 103G file. (Also compare
the CP_BYTES_READ counter.) If you intend to read the whole file in your
program, it may be that your hyperslab selection is wrong.
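
In case it helps, here is a rough sketch (not your code; just an illustration
assuming a 1-D dataset of doubles, with made-up file and dataset names) of how
each rank can select a disjoint hyperslab so that the ranks together cover the
whole dataset:

/* Hypothetical example: each rank reads its own disjoint, contiguous slab
 * of a 1-D dataset; together the ranks cover the whole dataset. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file collectively through the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fopen("GridDensity.h5", H5F_ACC_RDONLY, fapl);
    H5Pclose(fapl);

    hid_t dset   = H5Dopen2(file, "/density", H5P_DEFAULT);
    hid_t fspace = H5Dget_space(dset);
    hsize_t total;
    H5Sget_simple_extent_dims(fspace, &total, NULL);

    /* Even split; the last rank takes the remainder.  A bug in this
     * arithmetic (e.g. every rank computing the same start, or count
     * covering only part of the range) leaves parts of the file unread. */
    hsize_t start = (total / nprocs) * rank;
    hsize_t count = (rank == nprocs - 1) ? total - start : total / nprocs;
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, NULL, &count, NULL);

    hid_t mspace = H5Screate_simple(1, &count, NULL);
    double *buf  = malloc(count * sizeof(double));

    /* Collective read of this rank's slab only. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, buf);

    printf("rank %d read %llu elements starting at element %llu\n",
           rank, (unsigned long long)count, (unsigned long long)start);

    free(buf);
    H5Pclose(dxpl);
    H5Sclose(mspace);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Fclose(file);
    MPI_Finalize();
    return 0;
}

Printing the start/count that each rank computes in your own code should show
quickly whether the selections together cover all 103G.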
kevin
>Hi,
>
>I am reading one 103G file with parallel-hdf5 using 8, 64 and 512 cores.
>Reports from darshan/229 (see attached) look good for 8 and 64 cores:
>  8 cores   * 13 G/core   ~= 103G: ok
>  64 cores  * 1.6 G/core  ~= 103G: ok
>but
>  512 cores * 0.13 G/core ~= 67G != 103G
>
>Does it mean only 67G are being read?
>I am attaching the logs for reference.
>
>Regards, jg.
>