[Darshan-users] Using darshan to instrument PyTorch

Lu Weizheng luweizheng36 at hotmail.com
Fri Jun 18 08:36:58 CDT 2021


Hi Shane,

Thank you so much for your reply to my problem.
After some experiments with Darshan, I suspect the main cause of my incorrect Darshan logs is the Python multiprocessing module. Here is what I tried:
I added some excluded directories for Darshan like this:

export DARSHAN_EXCLUDE_DIRS=/proc,/etc,/dev,/sys,~/jupyterlab/dlprof/,~/.conda/envs/torch1.8/

After excluding the Python directories, I no longer get a large number of logs. There are only 3 Darshan logs covering Python I/O behavior, one per Python process, since my program's main process starts 2 sub-processes. However, the results in the logs are not correct. The total transferred file size is 0, as shown in the PDF generated by the darshan-job-summary.pl tool: "I/O performance estimate (at the POSIX layer): transferred 0.0 MiB at 10.50 MiB/s". And when I use darshan-parser to get detailed information from a log, there are no POSIX or STDIO counters for the files that my workload reads and writes. So I guess Darshan cannot see the I/O that my workload is actually doing.

I checked the source code of the PyTorch version I am instrumenting. The data loading part relies on Python's multiprocessing module: PyTorch uses multiprocessing to start a pool of workers and read data on multiple cores. I also found a GitHub issue about Darshan and Python multiprocessing (https://github.com/darshan-hpc/darshan/issues/293), but there is no solution in that issue and I guess the problem still exists.
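For context, the data loading pattern I am describing is roughly the following (a minimal sketch; the dataset path, transform, and batch size are placeholders rather than my real training script):

import torch
from torchvision import datasets, transforms

# Placeholder dataset path and transform, not my actual training setup.
dataset = datasets.ImageFolder(
    "/path/to/imagenet/train",
    transform=transforms.ToTensor(),
)

# num_workers > 0 makes the DataLoader spawn that many worker processes
# through Python's multiprocessing; each worker performs its own file reads.
# With num_workers=0, all reads happen in the main process instead.
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=2)

for images, labels in loader:
    pass  # the training step would go here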

So are there any suggestions on how to instrument PyTorch workloads? Or is there any workaround for multiprocessing?


Best Regards,

Lu

On June 18, 2021, at 5:26 AM, Snyder, Shane <ssnyder at mcs.anl.gov> wrote:

Hi Lu,

(sending to the entire mailing list now)

Unfortunately, we don't currently have a tool for combining multiple logs from a workflow into a single log file, nor analysis tools that work on sets of logs.

We do have a utility called 'darshan-merge' that was written to help merge together Darshan logs for another use case, but I don't think it will work right for this case from some quick testing. I've opened an issue on our GitHub page (https://github.com/darshan-hpc/darshan/issues/401) to remind myself to see if I can rework this tool to be more helpful in cases like yours.

At some point, we'd like to offer some of our own analysis tools that are workflow aware and can summarize data from multiple Darshan logs. That's something that's going to take some time though, as we are just now starting to look at revamping some of our analysis tools using the new PyDarshan interface to Darshan logs. BTW, PyDarshan might be something you could consider using if you wanted to come up with your own analysis tools for Darshan data, but that might be more work than you're looking for. In case it's helpful, here's some documentation on PyDarshan: https://www.mcs.anl.gov/research/projects/darshan/docs/pydarshan/index.html
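To give a rough idea, here is a minimal, untested sketch of how PyDarshan could be used to sum up POSIX traffic across a set of logs. The filename pattern is taken from your earlier message, and the exact record layout differs between PyDarshan versions; this assumes a version where module records expose a to_df() helper:

import glob
import darshan

total_read = 0
total_written = 0
for path in glob.glob("u2020000_python_*.darshan"):
    report = darshan.DarshanReport(path, read_all=True)
    if "POSIX" not in report.records:
        continue
    # Each POSIX record carries per-file counters such as POSIX_BYTES_READ.
    counters = report.records["POSIX"].to_df()["counters"]
    total_read += counters["POSIX_BYTES_READ"].sum()
    total_written += counters["POSIX_BYTES_WRITTEN"].sum()

print(f"total read:    {total_read / 2**20:.1f} MiB")
print(f"total written: {total_written / 2**20:.1f} MiB")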

Thanks,
--Shane
________________________________
From: Darshan-users <darshan-users-bounces at lists.mcs.anl.gov> on behalf of Lu Weizheng <luweizheng36 at hotmail.com>
Sent: Tuesday, June 15, 2021 3:43 AM
To: darshan-users at lists.mcs.anl.gov <darshan-users at lists.mcs.anl.gov>
Subject: [Darshan-users] Using darshan to instrument PyTorch

Hi,

I am using Darshan to instrument PyTorch on a local machine. My workload is an image classification problem on the ImageNet dataset. When the training process ended, a lot of logs were generated, like:

u2020000_python_id4719_6-15-41351-17690910011763757569_1.darshan
u2020000_python_id5012_6-15-42860-17690910011763757569_1.darshan
u2020000_python_id4721_6-15-41352-17690910011763757569_1.darshan
u2020000_uname_id4720_6-15-41351-17690910011763757569_1.darshan
u2020000_python_id4722_6-15-41352-17690910011763757569_1.darshan
u2020000_uname_id4723_6-15-41354-17690910011763757569_1.darshan
u2020000_python_id4758_6-15-41830-17690910011763757569_1.darshan
u2020000_uname_id4724_6-15-41354-17690910011763757569_1.darshan
...

After running the darshan-util analysis tools on one of the above log files, it shows: I/O performance estimate (at the POSIX layer): transferred 7.5 MiB at 36.02 MiB/s

The transferred data shown in the PDF report is far less than the whole dataset size. As the PyTorch DataLoader is a multi-process program, I guess Darshan generates a separate log for every process.

My question is: how can I get an I/O analysis for the whole PyTorch workload instead of these per-process logs?


