How to run a background shell script which itself launches two background processes?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, note the original address, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/19225647/
Asked by kener
To start with, I am a beginner at programming, so apologies for the lack of professionally accurate terminology in my question, but hopefully I will manage to get my points across!
Would you have any suggestions for how, in bash or tcsh, I can run a long background process which itself launches a few programs, has to run three long processes in parallel on different cores, and must wait for all three to complete before proceeding?
I have written a shell script (for bash) to apply an image filter to each frame of a short but heavy video clip (it's a scientific tomogram actually but this does not really matter). It is supposed to:
1. Create a file with a script to convert the whole file to a different format using the em2em software.
2. Split the converted file into three equal parts and filter each set of frames in a separate process, on separate cores of my linux server (to speed things up), using a program called spider. First, three batch-mode files (filter_1/2/3.spi) with the required filtering parameters are created, and then three subprocesses are launched:

spider spi/spd @filter_1 &  # The first process, launched by the main script and run in the background on one core
spider spi/spd @filter_2 &  # The second background process, run on the next core
spider spi/spd @filter_3    # The third process, run in parallel with the two above; must finish before proceeding further

3. These filtered fragments are then put together at the end.
Because I wanted the 3 filtering steps to run simultaneously, I sent the first two to the background with a simple & and kept the last one in the foreground, so that the main script waits for all three to finish (they should finish at roughly the same time) before proceeding to reassemble the 3 chunks. This all works fine when I run my script in the foreground, but it throws a lot of output from the many subprocesses onto the terminal. I can reduce it with:
$ ./My_script 2>&1 > /dev/null
But each spider process still returns
*****Spider normal stop*****
to the terminal. And when I try to send the main script itself to the background, it keeps getting stopped.
Would you have any suggestions for how I can run the main script in the background and still get it to run the 3 spider sub-processes in parallel somehow?
Thanks!
Answered by Jester
You can launch each spider in the background, storing the process IDs, which you can later use in a wait command, such as:
spider spi/spd @filter_1 &
sp1=$!                  # $! holds the PID of the most recently backgrounded job
spider spi/spd @filter_2 &
sp2=$!
spider spi/spd @filter_3 &
sp3=$!
wait $sp1 $sp2 $sp3     # block until all three spiders have exited
If you want to get rid of output, apply redirections on each command.
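For example, a sketch of per-command redirection, assuming the extra output is safe to discard entirely. Note the operator order: > /dev/null 2>&1 first points stdout at /dev/null and then sends stderr to the same place, whereas the 2>&1 > /dev/null form used in the question leaves stderr attached to the terminal, which may be why the spider messages still got through:

spider spi/spd @filter_1 > /dev/null 2>&1 &   # both stdout and stderr silenced
spider spi/spd @filter_2 > /dev/null 2>&1 &
spider spi/spd @filter_3 > /dev/null 2>&1 &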
Update: actually you don't even need to store the PIDs; a wait without parameters will automatically wait for all spawned children.
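In other words, a minimal sketch:

spider spi/spd @filter_1 &
spider spi/spd @filter_2 &
spider spi/spd @filter_3 &
wait    # with no arguments, waits for every background child of this shell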
Answered by Salem
First, if you are using bash you can use wait to wait for each process to exit. For example, all the messages below will be printed only when all the processes have finished:
sleep 10 &
P1=$!
sleep 5 &
P2=$!
sleep 6 &
P3=$!
wait $P1
echo "P1 finished"
wait $P2
echo "P2 finished"
wait $P3
echo "P3 finished"
You can use the same idea to wait for the spider processes to finish and only then merge the results.
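Applied to the question, that could look like the following sketch, where merge_fragments is a placeholder for whatever command reassembles the three filtered chunks:

spider spi/spd @filter_1 &
P1=$!
spider spi/spd @filter_2 &
P2=$!
spider spi/spd @filter_3 &
P3=$!
wait $P1 $P2 $P3    # proceed only once all three filters are done
merge_fragments     # placeholder: put the filtered fragments back together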
Regarding the output, you can try to redirect each one to /dev/null instead of redirecting all the output of the script:
sleep 10 &> /dev/null &
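Note that &> is bash shorthand for redirecting both stdout and stderr at once; the portable spelling is > /dev/null 2>&1. Each spider invocation in the question could be given its own &> /dev/null in the same way, before the trailing &.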