[Darshan-users] Using darshan to instrument PyTorch

Snyder, Shane ssnyder at mcs.anl.gov
Fri Jun 18 10:11:17 CDT 2021


What issue specifically do you see when using PyTorch? Does Darshan run out of memory in those cases? Or does it just not capture information on the files you expect? We recently modified Darshan to gracefully handle apps that call fork(), but if PyTorch is using the Python 'multiprocessing' module, it is likely that we can't accurately capture the I/O behavior -- 'multiprocessing' uses clone() system calls that we have not found a way to properly handle in Darshan. We should probably think a bit more about whether it's at all possible to account for apps that use clone(), but it seemed pretty tricky when I last looked.
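To illustrate what I mean about the DataLoader (this is just a rough sketch with a made-up dataset class and file names, not your actual workload): with num_workers > 0 the reads happen in multiprocessing worker processes that Darshan can't follow, while num_workers=0 keeps them in the instrumented main process.

    from torch.utils.data import Dataset, DataLoader

    class FileDataset(Dataset):
        """Toy dataset that just reads raw bytes from a list of files."""
        def __init__(self, paths):
            self.paths = paths

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            # This open()/read() is the POSIX I/O Darshan would record.
            with open(self.paths[idx], "rb") as f:
                return len(f.read())

    ds = FileDataset(["img0.jpg", "img1.jpg"])            # made-up file list
    loader = DataLoader(ds, batch_size=1, num_workers=0)  # 0: reads stay in this process
    # With num_workers=4 the reads move into worker processes spawned via
    # Python multiprocessing, which is where we lose track of them.
    for batch in loader:
        pass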

As for your second issue related to Darshan running out of memory, Darshan does have some internal limits that prevent each module from instrumenting more than 1,024 files for a job. Increasing DARSHAN_MODMEM does not raise those limits; in fact, they are not tunable in any way right now. That said, we are working on changes to Darshan that will let you control these limits on a per-module basis, so you could theoretically set DARSHAN_MODMEM really high and configure Darshan to allow the POSIX module to record 1.2 million files. Those changes are in an experimental branch right now while I fine-tune the implementation, but if you're interested in trying it out I could give you some details.

It does sound like there could be a bug in Darshan's core library that causes problems when compressing with zlib and using really large DARSHAN_MODMEM values. I'll investigate that more to see if I can trigger it and put in a workaround. My hunch is that we don't properly handle buffers over 4 GB, so you might consider dialing DARSHAN_MODMEM back to around 2 GB at most -- that should still be enough space to capture info on 1.2 million files. But again, setting it that high isn't helpful right now without the new changes I'm working on.
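For concreteness, here is a minimal sketch of how you might launch the workload with those settings from a small Python wrapper (the library path and script name are placeholders; DARSHAN_MODMEM is specified in MiB):

    import os
    import subprocess

    env = dict(os.environ)
    env["DARSHAN_MODMEM"] = "2048"                # module memory in MiB (~2 GB)
    env["DARSHAN_ENABLE_NONMPI"] = "1"            # non-MPI programs (you are likely already setting this)
    env["LD_PRELOAD"] = "/path/to/libdarshan.so"  # placeholder install path
    subprocess.run(["python", "train.py"], env=env, check=True)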

Thanks,
--Shane
________________________________
From: Lu Weizheng <luweizheng36 at hotmail.com>
Sent: Friday, June 18, 2021 9:04 AM
To: Snyder, Shane <ssnyder at mcs.anl.gov>
Cc: darshan-users at lists.mcs.anl.gov <darshan-users at lists.mcs.anl.gov>
Subject: Re: Using darshan to instrument PyTorch

Today I did some other experiments on instrumenting PyTorch with Darshan. I think it is very likely that PyTorch's default DataLoader uses multiprocessing, which is why I cannot get a correct Darshan log.
I switched to NVIDIA DALI (https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html), another data loading backend that does not rely on multiprocessing to load data. Now it seems that Darshan can collect the I/O behavior: using darshan-parser, I can see POSIX read/write records for the files I am using.
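The pipeline is roughly like the following minimal sketch (paths, sizes, and parameters are illustrative, not my exact setup):

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    @pipeline_def(batch_size=32, num_threads=4, device_id=0)
    def imagenet_pipeline(data_dir):
        # DALI reads and decodes with native threads inside this process,
        # so the POSIX reads stay visible to Darshan.
        jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
        images = fn.decoders.image(jpegs, device="mixed")
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images, labels

    pipe = imagenet_pipeline("/path/to/imagenet/train")  # illustrative path
    pipe.build()
    images, labels = pipe.run()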
However, there is still a problem. I am training on ImageNet, which is 160 GB in total, with 1.2 million images across about 1,000 folders, and Darshan seems to run out of memory. I have raised the DARSHAN_MODMEM environment variable up to 40960 MB, but I get these warnings:

darshan_library_warning: error compressing job record
darshan_library_warning: unable to write job record to file
I added some debug lines to the darshan-runtime source code. Darshan hits this line: tmp_stream.avail_out == 0 (https://github.com/darshan-hpc/darshan/blob/e85b8bc929da91e54ff68fb1210dfe7bee3261a3/darshan-runtime/lib/darshan-core.c#L2039). It seems that zlib is trying to compress the buffered data but runs out of output buffer. My current working node has 60 GB of main memory.
So what should I do now? Should I use another node with more memory, or tune DARSHAN_MODMEM to a very large value?

Thank you very much for any reply.

Lu

On Jun 18, 2021, at 5:26 AM, Snyder, Shane <ssnyder at mcs.anl.gov> wrote:

Hi Lu,

(sending to the entire mailing list now)

Unfortunately, we don't currently have either a tool for combining multiple logs from a workflow into a single log file or analysis tools that work on sets of logs.

We do have a utility called 'darshan-merge' that was written to help merge Darshan logs together for another use case, but from some quick testing I don't think it will work right for your case. I've opened an issue on our GitHub page (https://github.com/darshan-hpc/darshan/issues/401) to remind myself to see if I can rework this tool to be more helpful in cases like yours.

At some point, we'd like to offer some of our own analysis tools that are workflow aware and can summarize data from multiple Darshan logs. That's something that's going to take some time though, as we are just now starting to look at revamping some of our analysis tools using the new PyDarshan interface to Darshan logs. BTW, PyDarshan might be something you could consider using if you wanted to come up with your own analysis tools for Darshan data, but that might be more work than you're looking for. In case it's helpful, here's some documentation on PyDarshan: https://www.mcs.anl.gov/research/projects/darshan/docs/pydarshan/index.html
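As a rough idea of what that scripting looks like, here is a minimal PyDarshan sketch (the log file name is a placeholder) that opens a single log; aggregating across a workflow's logs would just be a loop over files like this:

    import darshan

    report = darshan.DarshanReport("u2020000_python_example.darshan", read_all=True)
    print(report.metadata["job"])        # job-level metadata (uid, nprocs, run time, ...)
    print(list(report.modules))          # modules that captured data (POSIX, STDIO, ...)
    print(len(report.records["POSIX"]))  # number of POSIX file records in this log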

Thanks,
--Shane
________________________________
From: Darshan-users <darshan-users-bounces at lists.mcs.anl.gov> on behalf of Lu Weizheng <luweizheng36 at hotmail.com>
Sent: Tuesday, June 15, 2021 3:43 AM
To: darshan-users at lists.mcs.anl.gov <darshan-users at lists.mcs.anl.gov>
Subject: [Darshan-users] Using darshan to instrument PyTorch

Hi,

I am using Darshan to instrument PyTorch on a local machine. My workload is an image classification problem on the ImageNet dataset. When the training process ended, a lot of logs had been generated, like:

u2020000_python_id4719_6-15-41351-17690910011763757569_1.darshan
u2020000_python_id5012_6-15-42860-17690910011763757569_1.darshan
u2020000_python_id4721_6-15-41352-17690910011763757569_1.darshan
u2020000_uname_id4720_6-15-41351-17690910011763757569_1.darshan
u2020000_python_id4722_6-15-41352-17690910011763757569_1.darshan
u2020000_uname_id4723_6-15-41354-17690910011763757569_1.darshan
u2020000_python_id4758_6-15-41830-17690910011763757569_1.darshan
u2020000_uname_id4724_6-15-41354-17690910011763757569_1.darshan
...

After running the darshan-util analysis tool on one of the above log files, it shows: I/O performance estimate (at the POSIX layer): transferred 7.5 MiB at 36.02 MiB/s

The transferred data shown in the PDF report is far less than the whole dataset size. Since the PyTorch DataLoader is a multi-process program, I guess Darshan generates a separate log for every process.

My question is: how can I get the I/O analysis for the whole PyTorch workload instead of these per-process logs?
