Why can't I use job control in a bash script?
Disclaimer: This page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): Stack Overflow
Original URL: http://stackoverflow.com/questions/690266/
Why can't I use job control in a bash script?
Asked by system PAUSE
In this answer to another question, I was told that
in scripts you don't have job control (and trying to turn it on is stupid)
This is the first time I've heard this, and I've pored over the bash.info section on Job Control (chapter 7), finding no mention of either of these assertions. [Update: The man page is a little better, mentioning 'typical' use, default settings, and terminal I/O, but no real reason why job control is particularly ill-advised for scripts.]
So why doesn't script-based job-control work, and what makes it a bad practice (aka 'stupid')?
Edit: The script in question starts a background process, starts a second background process, then attempts to put the first process back into the foreground so that it has normal terminal I/O (as if run directly), which can then be redirected from outside the script. Can't do that to a background process.
As noted by the accepted answer to the other question, there exist other scripts that solve that particular problem without attempting job control. Fine. And the lambasted script uses a hard-coded job number, which is obviously bad. But I'm trying to understand whether job control is a fundamentally doomed approach. It still seems like maybe it could work...
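In rough outline the attempt looks like this (a sketch only; the two commands are placeholders, not the actual script):

#!/bin/bash
long_running_command &      # first background process (placeholder)
other_command &             # second background process (placeholder)

# Try to bring the first one back into the foreground so it gets normal
# terminal I/O. In a plain script this fails with "fg: no job control".
fg %1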
Accepted answer by vladr
What he meant is that job control is by default turned off in non-interactive mode (i.e. in a script).
From the bash man page:
JOB CONTROL
Job control refers to the ability to selectively stop (suspend)
the execution of processes and continue (resume) their execution at a
later point.
A user typically employs this facility via an interactive interface
supplied jointly by the system's terminal driver and bash.
and
set [--abefhkmnptuvxBCHP] [-o option] [arg ...]
    ...
    -m      Monitor mode. Job control is enabled. This option is on by
            default for interactive shells on systems that support it (see
            JOB CONTROL above). Background processes run in a separate
            process group and a line containing their exit status is
            printed upon their completion.
When he said "is stupid" he meant that not only:
- is job control meant mostly for facilitating interactive control (whereas a script can work directly with the PIDs, as in the sketch below), but also
- I quote his original answer, ... relies on the fact that you didn't start any other jobs previously in the script which is a bad assumption to make. Which is quite correct.
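A small sketch of that difference (the sleeps are stand-ins for real commands): job numbers depend on whatever the script happened to start earlier, while $! does not.

#!/bin/bash
set -m
sleep 100 &      # imagine an earlier part of the script already started this job (%1)
sleep 200 &      # the process we actually care about
pid=$!           # its PID, independent of any job numbering

# kill %1        # fragile: this would hit the first job, i.e. the sleep 100
kill "$pid"      # robust: targets exactly the process recorded in $!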
UPDATE
In answer to your comment: yes, nobody will stop you from using job control in your bash script -- there is no hard case for forcefully disabling set -m (i.e. yes, job control from the script will work if you want it to.) Remember that in the end, especially in scripting, there is always more than one way to skin a cat, but some ways are more portable, more reliable, and make it simpler to handle error cases, parse the output, etc.
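For instance, a minimal sketch of switching job control on explicitly (assuming the script is run from a terminal, so there is a terminal to hand the foreground to):

#!/bin/bash
set -m          # enable job control (monitor mode) in this non-interactive shell

sleep 30 &      # first background job
sleep 5 &       # second background job

jobs            # the job table is maintained because of set -m

# Bring the first job back into the foreground; the script blocks here
# until it finishes, and the job gets the terminal for normal I/O.
fg %1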
Your particular circumstances may or may not warrant a way different from what lhunath (and other users) deem "best practices".
Answered by Andreas Spindler
Job control with bg and fg is useful only in interactive shells. But & in conjunction with wait is useful in scripts too.
On multiprocessor systems, spawning background jobs can greatly improve a script's performance, e.g. in build scripts where you want to start at least one compiler per CPU, or when processing images with the ImageMagick tools in parallel, etc.
The following example runs up to 8 parallel gcc's to compile all source files in an array:
#!/bin/bash
...
for ((i = 0, end=${#sourcefiles[@]}; i < end;)); do
    for ((cpu_num = 0; cpu_num < 8; cpu_num++, i++)); do
        if ((i < end)); then gcc ${sourcefiles[$i]} & fi
    done
    wait
done
There is nothing "stupid" about this. But you'll require the wait command, which waits for all background jobs before the script continues. The PID of the last background job is stored in the $! variable, so you may also wait ${!}. Note also the nice command.
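A tiny sketch of those two together (the gzip call and file name are only an example):

nice -n 10 gzip -9 big_archive.tar &   # run a CPU-heavy job at lower priority
pid=$!                                 # PID of the most recent background job
wait "$pid"                            # block until that particular job finishes
echo "compression finished with status $?"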
Sometimes such code is useful in makefiles:
buildall:
	for cpp_file in *.cpp; do gcc -c $$cpp_file & done; wait
This gives much finer control than make -j.
Note that & is a line terminator like ; (write command& not command&;).
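In other words (a throwaway illustration):

sleep 1 &             # fine: '&' already terminates the command, like ';' would
sleep 1 & sleep 2 &   # fine: two background commands on one line
# sleep 1 &;          # syntax error: a ';' may not follow '&'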
Hope this helps.
Answered by Juliano
Job control is useful only when you are running an interactive shell, i.e., you know that stdin and stdout are connected to a terminal device (/dev/pts/* on Linux). Then it makes sense to have something in the foreground, something else in the background, etc.
Scripts, on the other hand, don't have such a guarantee. Scripts can be made executable and run without any terminal attached. It doesn't make sense to have foreground or background processes in this case.
You can, however, run other commands non-interactively in the background (appending "&" to the command line) and capture their PIDs with $!. Then you use kill to kill or suspend them (simulating Ctrl-C or Ctrl-Z on the terminal, as if the shell were interactive). You can also use wait (instead of fg) to wait for the background process to finish.
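A minimal sketch of that pattern (sleep stands in for a real command; the signals are the standard ones):

#!/bin/bash
sleep 60 &          # run something in the background
pid=$!              # remember its PID

kill -STOP "$pid"   # suspend it, roughly what Ctrl-Z does in an interactive shell
kill -CONT "$pid"   # let it continue again
kill -TERM "$pid"   # ask it to terminate

wait "$pid"         # instead of fg: block until the process has exited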
Answered by Eduardo A. Bustamante López
It could be useful to turn on job control in a script to set traps on SIGCHLD. The JOB CONTROL section in the manual says:
The shell learns immediately whenever a job changes state. Normally, bash waits until it is about to print a prompt before reporting changes in a job's status so as to not interrupt any other output. If the -b option to the set builtin command is enabled, bash reports such changes immediately. Any trap on SIGCHLD is executed for each child that exits.
(emphasis is mine)
Take the following script as an example:
dualbus@debian:~$ cat children.bash
#!/bin/bash
set -m
count=0 limit=3
trap 'counter && { job & }' CHLD
job() {
  local amount=$((RANDOM % 8))
  echo "sleeping $amount seconds"
  sleep "$amount"
}
counter() {
  ((count++ < limit))
}
counter && { job & }
wait
dualbus@debian:~$ chmod +x children.bash
dualbus@debian:~$ ./children.bash
sleeping 6 seconds
sleeping 0 seconds
sleeping 7 seconds
Note: CHLD trapping seems to be broken as of bash 4.3
In bash 4.3, you could use 'wait -n' to achieve the same thing, though:
dualbus@debian:~$ cat waitn.bash
#!/home/dualbus/local/bin/bash
count=0 limit=3
trap 'kill "$pid"; exit' INT
job() {
  local amount=$((RANDOM % 8))
  echo "sleeping $amount seconds"
  sleep "$amount"
}
for ((i=0; i<limit; i++)); do
  ((i>0)) && wait -n; job & pid=$!
done
dualbus@debian:~$ chmod +x waitn.bash
dualbus@debian:~$ ./waitn.bash
sleeping 3 seconds
sleeping 0 seconds
sleeping 5 seconds
You could argue that there are other ways to do this in a more portable way, that is, without CHLD or wait -n:
dualbus@debian:~$ cat portable.sh
#!/bin/sh
count=0 limit=3
trap 'counter && { brand; job & }; wait' USR1
unset RANDOM; rseed=123459876$$
brand() {
  [ "$rseed" -eq 0 ] && rseed=123459876
  h=$((rseed / 127773))
  l=$((rseed % 127773))
  rseed=$((16807 * l - 2836 * h))
  RANDOM=$((rseed & 32767))
}
job() {
  amount=$((RANDOM % 8))
  echo "sleeping $amount seconds"
  sleep "$amount"
  kill -USR1 "$$"
}
counter() {
  [ "$count" -lt "$limit" ]; ret=$?
  count=$((count+1))
  return "$ret"
}
counter && { brand; job & }
wait
dualbus@debian:~$ chmod +x portable.sh
dualbus@debian:~$ ./portable.sh
sleeping 2 seconds
sleeping 5 seconds
sleeping 6 seconds
So, in conclusion, set -m is not that useful in scripts, since the only interesting feature it brings to scripts is being able to work with SIGCHLD. And there are other ways to achieve the same thing that are either shorter (wait -n) or more portable (sending signals yourself).
Answered by Peter
Bash does support job control, as you say. In shell script writing, there is often an assumption that you can't rely on the fact that you have bash, but that you have the vanilla Bourne shell (sh), which historically did not have job control.
I'm hard-pressed these days to imagine a system in which you are honestly restricted to the real Bourne shell. Most systems' /bin/sh will be linked to bash. Still, it's possible. One thing you can do is instead of specifying
#!/bin/sh
You can do:
#!/bin/bash
That, and your documentation, would make it clear your script needs bash.
Answered by Ghoti
Possibly off-topic, but I quite often use nohup when I ssh into a server for a long-running job, so that if I get logged out the job still completes.
I wonder if people are confusing stopping and starting from a master interactive shell and spawning background processes? The wait command allows you to spawn a lot of things and then wait for them all to complete, and like I said I use nohup all the time. It's more complex than this and very underused - sh supports this mode too. Have a look at the manual.
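A sketch of that nohup habit (the script name is just a placeholder):

nohup ./long_running_job.sh > job.log 2>&1 &   # survives the ssh session ending
echo "started as PID $!"
# safe to log out now; check job.log later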
You've also got
kill -STOP pid
I quite often do that if I want to suspend the currently running sudo, as in:
kill -STOP $$
But woe betide you if you've jumped out to the shell from an editor - it will all just sit there.
I tend to use mnemonic -KILL etc. because there's a danger of typing
kill - 9 pid # note the space
and in the old days you could sometimes bring the machine down because it would kill init!
Answered by THESorcerer
jobs DO work in bash scripts
BUT, you ... NEED to watch for the spawned stuff, like:
ls -1 /usr/share/doc/ | while read -r doc ; do ... done
jobs will have a different context on each side of the |
A way of bypassing this is to use for instead of while:
for doc in `ls -1 /usr/share/doc` ; do ... done
this should demonstrate how to use jobs in a script ... with the mention that my commented note below is ... REAL (dunno why that behaviour):
#!/bin/bash
for i in `seq 7` ; do ( sleep 100 ) & done
jobs
while [ `jobs | wc -l` -ne 0 ] ; do
  for jobnr in `jobs | awk '{print $1}' | cut -d\[ -f2- | cut -d\] -f1` ; do
    kill %$jobnr
  done
  # this is REALLY ODD ... but while won't exit without this ... dunno why
  jobs >/dev/null 2>/dev/null
done
sleep 1
jobs
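As a footnote to the pipe remark above, a short sketch of why jobs started on the right-hand side of a | are invisible afterwards (the item names are placeholders):

#!/bin/bash
printf '%s\n' one two three | while read -r item ; do
  sleep 100 &       # started inside the pipeline's subshell
done
jobs                # prints nothing: the parent shell's job table never saw them

for item in one two three ; do
  sleep 100 &       # started in the current shell itself
done
jobs                # now the three jobs are listed and can be killed with %N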