Linux: what process is using all of my disk IO?
Disclaimer: this page is a translated copy of a popular StackOverFlow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must do so under the same CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverFlow
原文地址: http://stackoverflow.com/questions/488826/
What Process is using all of my disk IO
Asked by
If I use "top" I can see what CPU is busy and what process is using all of my CPU.
If I use "iostat -x" I can see what drive is busy.
But how do I see what process is using all of the drive's throughput?
Accepted answer by phresus
You're looking for iotop (assuming you've got kernel >2.6.20 and Python 2.5). Failing that, you're looking at hooking into the filesystem. I recommend the former.
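For intuition (a sketch, not part of the original answer): iotop's per-process numbers come from the per-task I/O accounting the kernel exposes in /proc/&lt;pid&gt;/io, assuming it was built with that option enabled:

```shell
# Minimal sketch, assuming a Linux kernel with task I/O accounting:
# the counters in /proc/<pid>/io are cumulative bytes since process
# start, and iotop diffs them between samples to get per-process rates.
cat /proc/self/io
```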
Answered by apolinsky
Have you considered lsof (list open files)?
Answered by quentinxue
To find out which processes in state 'D' (waiting for a disk response) are currently running:
while true; do date; ps aux | awk '{if($8=="D") print $0;}'; sleep 1; done
or
watch -n1 -d "ps axu | awk '{if (\$8==\"D\") {print \$0}}'"
Wed Aug 29 13:00:46 CLT 2012
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:00:47 CLT 2012
Wed Aug 29 13:00:48 CLT 2012
Wed Aug 29 13:00:49 CLT 2012
Wed Aug 29 13:00:50 CLT 2012
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:00:51 CLT 2012
Wed Aug 29 13:00:52 CLT 2012
Wed Aug 29 13:00:53 CLT 2012
Wed Aug 29 13:00:55 CLT 2012
Wed Aug 29 13:00:56 CLT 2012
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:00:57 CLT 2012
root 302 0.0 0.0 0 0 ? D May28 3:07 \_ [kdmflush]
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:00:58 CLT 2012
root 302 0.0 0.0 0 0 ? D May28 3:07 \_ [kdmflush]
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:00:59 CLT 2012
root 302 0.0 0.0 0 0 ? D May28 3:07 \_ [kdmflush]
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:01:00 CLT 2012
root 302 0.0 0.0 0 0 ? D May28 3:07 \_ [kdmflush]
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:01:01 CLT 2012
root 302 0.0 0.0 0 0 ? D May28 3:07 \_ [kdmflush]
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
Wed Aug 29 13:01:02 CLT 2012
Wed Aug 29 13:01:03 CLT 2012
root 321 0.0 0.0 0 0 ? D May28 4:25 \_ [jbd2/dm-0-8]
As you can see from the output, jbd2/dm-0-8 (the ext4 journal process) and kdmflush are constantly blocking your Linux system.
For more details, this URL may be helpful: Linux Wait-IO Problem
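The same 'D'-state check can also be done without ps, by reading the state field from /proc directly - a sketch assuming the standard Linux /proc/&lt;pid&gt;/stat layout:

```shell
# One-shot sketch: list tasks currently in uninterruptible sleep ('D').
# The state is the 3rd field of /proc/<pid>/stat; the 2nd field (comm)
# is parenthesized and may contain spaces, so strip through ') ' first.
for f in /proc/[0-9]*/stat; do
  read -r pid rest < "$f" 2>/dev/null || continue
  state=${rest#*) }
  state=${state%% *}
  if [ "$state" = "D" ]; then
    echo "$pid ${rest%%)*})"
  fi
done
```

Output may be empty when nothing is currently blocked on disk; D states are often short-lived, which is why the original answer samples in a loop.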
Answered by Patrick
For KDE users: press Ctrl+Esc to call up the System Activity monitor; it includes I/O activity charts with process IDs and names.
I don't have permission to upload an image due to my 'new user' status, but you can check out the image below. It has columns for I/O reads and writes.
Answered by Steve Kehlet
atop also works well and installs easily, even on older CentOS 5.x systems that can't run iotop. Hit d to show disk details, ? for help.
ATOP - mybox 2014/09/08 15:26:00 ------ 10s elapsed
PRC | sys 0.33s | user 1.08s | | #proc 161 | #zombie 0 | clones 31 | | #exit 16 |
CPU | sys 4% | user 11% | irq 0% | idle 306% | wait 79% | | steal 1% | guest 0% |
cpu | sys 2% | user 8% | irq 0% | idle 11% | cpu000 w 78% | | steal 0% | guest 0% |
cpu | sys 1% | user 1% | irq 0% | idle 98% | cpu001 w 0% | | steal 0% | guest 0% |
cpu | sys 1% | user 1% | irq 0% | idle 99% | cpu003 w 0% | | steal 0% | guest 0% |
cpu | sys 0% | user 1% | irq 0% | idle 99% | cpu002 w 0% | | steal 0% | guest 0% |
CPL | avg1 2.09 | avg5 2.09 | avg15 2.09 | | csw 54184 | intr 33581 | | numcpu 4 |
MEM | tot 8.0G | free 81.9M | cache 2.9G | dirty 0.8M | buff 174.7M | slab 305.0M | | |
SWP | tot 2.0G | free 2.0G | | | | | vmcom 8.4G | vmlim 6.0G |
LVM | Group00-root | busy 85% | read 0 | write 30658 | KiB/w 4 | MBr/s 0.00 | MBw/s 11.98 | avio 0.28 ms |
DSK | xvdb | busy 85% | read 0 | write 23706 | KiB/w 5 | MBr/s 0.00 | MBw/s 11.97 | avio 0.36 ms |
NET | transport | tcpi 2705 | tcpo 2008 | udpi 36 | udpo 43 | tcpao 14 | tcppo 45 | tcprs 1 |
NET | network | ipi 2788 | ipo 2072 | ipfrw 0 | deliv 2768 | | icmpi 7 | icmpo 20 |
NET | eth0 ---- | pcki 2344 | pcko 1623 | si 1455 Kbps | so 781 Kbps | erri 0 | erro 0 | drpo 0 |
NET | lo ---- | pcki 423 | pcko 423 | si 88 Kbps | so 88 Kbps | erri 0 | erro 0 | drpo 0 |
NET | eth1 ---- | pcki 22 | pcko 26 | si 3 Kbps | so 5 Kbps | erri 0 | erro 0 | drpo 0 |
PID RDDSK WRDSK WCANCL DSK CMD 1/1
9862 0K 53124K 0K 98% java
358 0K 636K 0K 1% jbd2/dm-0-8
13893 0K 192K 72K 0% java
1699 0K 60K 0K 0% syslogd
4668 0K 24K 0K 0% zabbix_agentd
This clearly shows java pid 9862 is the culprit.
Answered by sjas
TL;DR
If you can use iotop, do so. Otherwise, this might help.
Use top, then use these shortcuts:
d 1 = set refresh time from 3 to 1 second
1 = show stats for each CPU separately, not cumulated
This has to show wa values > 1.0 for at least one core - if there are no disk waits, there is simply no IO load and no need to look further. Significant loads usually start above 15.0 wa.
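For reference, the wa figure is derived from the iowait field of /proc/stat - a minimal sketch of reading the raw counter, assuming a standard Linux /proc:

```shell
# The 'cpu' summary line of /proc/stat lists jiffies spent in each state;
# iowait is the 5th number after the 'cpu' label. top's wa column is the
# change in this counter between refreshes, as a share of all jiffies.
read -r label user nice system idle iowait rest < /proc/stat
echo "cumulative iowait jiffies: $iowait"
```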
x = highlight current sort column
< and > = change sort column
R = reverse sort order
Choose 'S', the process status column. Reverse the sort order so the 'R' (running) processes are shown on top. If you can spot 'D' processes (waiting for disk), you have an indicator of what your culprit might be.
Answered by Chris Godwin
iotop with the -a flag:
-a, --accumulated show accumulated I/O instead of bandwidth
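'Accumulated' matches how the kernel exposes these counters: the values in /proc/&lt;pid&gt;/io are cumulative since process start, so -a shows them raw instead of as per-interval deltas. A sketch of the relevant fields, assuming a standard Linux /proc:

```shell
# read_bytes/write_bytes count traffic that actually reached the storage
# layer; rchar/wchar also include page-cache hits, pipes and sockets.
awk -F': ' '/^(rchar|wchar|read_bytes|write_bytes)/ {printf "%-12s %s\n", $1, $2}' /proc/self/io
```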