在 bash 脚本仍在运行时强制将输出刷新到文件
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/1429951/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me):
StackOverFlow
Force flushing of output to a file while bash script is still running
提问by olamundo
I have a small script, which is called daily by crontab using the following command:
我有一个小脚本,它每天由 crontab 使用以下命令调用:
/homedir/MyScript &> some_log.log
The problem with this method is that some_log.log is only created after MyScript finishes. I would like to flush the output of the program into the file while it's running so I could do things like
这种方法的问题是 some_log.log 仅在 MyScript 完成后创建。我想在程序运行时将程序的输出刷新到文件中,以便我可以执行以下操作
tail -f some_log.log
and keep track of the progress, etc.
并跟踪进度等。
采纳答案by Chris Dodd
bash itself will never actually write any output to your log file. Instead, the commands it invokes as part of the script will each individually write output and flush whenever they feel like it. So your question is really how to force the commands within the bash script to flush, and that depends on what they are.
bash 本身永远不会真正把任何输出写入你的日志文件。相反,脚本中调用的各个命令会各自写入输出,并在它们想要的时候刷新。所以你的问题实际上是如何强制 bash 脚本里的命令去刷新,而这取决于它们是什么命令。
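Purely as an illustration of "it depends on what they are" (this is not part of the original answer): many common tools have their own line-buffering switches, so a hypothetical /homedir/MyScript built from such tools could opt in per command. A rough sketch, assuming GNU grep/sed and Python 3 are available:
仅作为"取决于它们是什么"的示意(并非原回答的内容):许多常用工具都有自己的行缓冲开关,因此一个假想的、由这类工具组成的 /homedir/MyScript 可以逐条命令地开启行缓冲。下面是一个粗略的草图,假设系统里装有 GNU grep/sed 和 Python 3:
#!/bin/bash
# Hypothetical sketch: each command is asked to line-buffer (or unbuffer)
# its own output instead of block-buffering when stdout is not a terminal.
grep --line-buffered 'ERROR' /var/log/app.log   # GNU grep: flush after every matching line
sed -u 's/foo/bar/' input.txt                   # GNU sed: -u means unbuffered
python3 -u generate_report.py                   # Python: -u disables stdout/stderr buffering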
回答by Martin Wiebusch
I found a solution to this here. Using the OP's example you basically run
我在这里找到了解决方案。使用 OP 的示例,您基本上可以运行
stdbuf -oL /homedir/MyScript &> some_log.log
and then the buffer gets flushed after each line of output. I often combine this with nohup to run long jobs on a remote machine.
然后缓冲区就会在每行输出之后被刷新。我经常把它和 nohup 结合起来,在远程机器上运行长时间的任务。
stdbuf -oL nohup /homedir/MyScript &> some_log.log
This way your process doesn't get cancelled when you log out.
这样,当您注销时,您的进程不会被取消。
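Tying this back to the question's goal of watching progress, one possible sequence could be the sketch below (the trailing & is added here, beyond the answer above, so that tail can run in the same shell):
结合问题中"实时查看进度"的目标,一种可能的用法如下(末尾的 & 是在上面回答之外额外加上的,这样 tail 就能在同一个 shell 里运行):
stdbuf -oL nohup /homedir/MyScript &> some_log.log &   # run line-buffered, surviving logout
tail -f some_log.log                                   # follow the log as each line is flushed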
回答by user3258569
script -c <PROGRAM> -f OUTPUT.txt
The key is -f. Quote from man script:
关键是 -f。引自 man script:
-f, --flush
Flush output after each write. This is nice for telecooperation: one person
does 'mkfifo foo; script -f foo', and another can supervise real-time what is
being done using 'cat foo'.
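Spelling out that man-page example as separate steps (just a sketch; foo is simply the name used in the quote):
把上面手册页里的例子拆成具体步骤(仅为示意;foo 只是引文中用到的名字):
mkfifo foo      # terminal A: create a named pipe
script -f foo   # terminal A: record the session into it, flushing after every write
cat foo         # terminal B: watch in real time what is being done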
Run in background:
在后台运行:
nohup script -c <PROGRAM> -f OUTPUT.txt
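Applied to the question's example this might look as follows (a sketch, assuming the util-linux version of script; note that script also writes its own "Script started/done" header and footer lines into the file):
套用到问题里的例子,大致可以写成下面这样(仅为示意,假设使用 util-linux 版的 script;注意 script 还会往文件里写入它自己的"Script started/done"首尾行):
script -f -c /homedir/MyScript some_log.log   # run MyScript, flushing the log after each write
tail -f some_log.log                          # in another terminal, follow the progress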
回答by crenate
You can use tee to write to the file without the need for flushing.
您可以使用 tee 写入文件,而无需刷新。
/homedir/MyScript 2>&1 | tee some_log.log > /dev/null
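If MyScript itself still block-buffers its stdout once it is writing into a pipe, a possible combination with the stdbuf answer above is sketched below (not from the original answer, and it assumes MyScript relies on ordinary stdio buffering):
如果 MyScript 自身在写入管道时仍然按块缓冲 stdout,可以像下面这样把它和上面 stdbuf 的回答结合起来(并非原回答的内容,并假设 MyScript 使用普通的 stdio 缓冲):
stdbuf -oL /homedir/MyScript 2>&1 | tee some_log.log   # each line reaches the log and the terminal as it arrives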
回答by Greg Hewgill
This isn't a function of bash, as all the shell does is open the file in question and then pass the file descriptor as the standard output of the script. What you need to do is make sure output is flushed from your script more frequently than you currently are.
这不是 bash 的功能,因为 shell 所做的只是打开相应的文件,然后把文件描述符作为脚本的标准输出传递过去。您需要做的是确保脚本比现在更频繁地刷新输出。
In Perl for example, this could be accomplished by setting:
例如,在 Perl 中,这可以通过设置来完成:
$| = 1;
See perlvar for more information on this.
有关这方面的更多信息,请参阅perlvar。
回答by Ondra Žižka
Would this help?
这会有帮助吗?
tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq
This will immediately display unique entries from access.log using the stdbuf utility.
这将使用stdbuf 实用程序立即显示 access.log 中的唯一条目。
回答by Midas
Buffering of output depends on how your program /homedir/MyScript is implemented. If you find that output is getting buffered, you have to force it in your implementation. For example, use sys.stdout.flush() if it's a Python program or fflush(stdout) if it's a C program.
输出的缓冲取决于你的程序 /homedir/MyScript 是如何实现的。如果你发现输出被缓冲了,就必须在实现中强制刷新。例如,如果是 Python 程序,使用 sys.stdout.flush();如果是 C 程序,使用 fflush(stdout)。
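If the program cannot be modified, it can sometimes be told from the outside instead; a sketch (MyScript.py is a hypothetical name here, and stdbuf from GNU coreutils is assumed to be installed):
如果无法修改程序本身,有时也可以从外部控制它;下面是一个草图(这里的 MyScript.py 只是假想的文件名,并假设装有 GNU coreutils 的 stdbuf):
python3 -u /homedir/MyScript.py &> some_log.log           # -u disables Python's stdout/stderr buffering
PYTHONUNBUFFERED=1 /homedir/MyScript.py &> some_log.log   # same effect via the environment
stdbuf -oL /homedir/MyScript &> some_log.log              # ask a stdio-based C program to line-buffer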
回答by Victor Sergienko
Thanks @user3258569, script is maybe the only thing that works in busybox!
谢谢 @user3258569,script 可能是在 busybox 里唯一管用的办法!
The shell was freezing for me after it, though. Looking for the cause, I found these big red warnings "don't use in a non-interactive shells" in the script manual page:
不过,在那之后我的 shell 就卡死了。查找原因时,我在 script 的手册页里发现了这些醒目的红色警告"不要在非交互式 shell 中使用":
script is primarily designed for interactive terminal sessions. When stdin is not a terminal (for example: echo foo | script), then the session can hang, because the interactive shell within the script session misses EOF and script has no clue when to close the session. See the NOTES section for more information.
script 主要是为交互式终端会话设计的。当 stdin 不是终端时(例如:echo foo | script),会话可能会挂起,因为 script 会话中的交互式 shell 收不到 EOF,script 也不知道何时应该关闭会话。更多信息请参阅 NOTES 部分。
True. script -c "make_hay" -f /dev/null | grep "needle" was freezing the shell for me.
确实如此。script -c "make_hay" -f /dev/null | grep "needle" 就把我的 shell 卡死了。
Contrary to the warning, I thought that echo "make_hay" | script WILL pass an EOF, so I tried
与警告所说的相反,我认为 echo "make_hay" | script 会传递 EOF,所以我试了
echo "make_hay; exit" | script -f /dev/null | grep 'needle'
and it worked!
它奏效了!
Note the warnings in the man page. This may not work for you.
请注意手册页中的警告。这可能对您不起作用。
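Adapting that trick to the question's setup might look like the line below; this is only a guess based on the answer above, not something its author posted, and script will also record the shell prompt and the echoed command line in the log:
把这个技巧套用到问题的场景里,大概是下面这一行;这只是基于上面回答的推测,并非其作者给出的命令,而且 script 也会把 shell 提示符和回显的命令行一并记录到日志里:
echo "/homedir/MyScript; exit" | script -f some_log.log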
回答by Hastur
As just spotted here, the problem is that you have to wait for the programs that you run from your script to finish their jobs. If in your script you run a program in the background, you can try something more.
正如这里刚刚指出的,问题在于你必须等待从脚本中运行的程序完成它们的工作。如果你在脚本中以后台方式运行程序,还可以再多做一些尝试。
In general, a call to sync before you exit flushes the file system buffers and can help a little.
一般来说,在退出之前调用 sync 可以刷新文件系统缓冲区,多少会有点帮助。
If in the script you start some programs in the background (&), you can wait for them to finish before you exit from the script. To get an idea of how this can work, see below:
如果你在脚本中以后台方式(&)启动了一些程序,可以在退出脚本之前等待它们结束。下面的例子可以帮助你了解它是如何工作的:
#!/bin/bash
#... some stuff ...
program_1 & # here you start program 1 in the background
PID_PROGRAM_1=${!} # here you remember its PID
#... some other stuff ...
program_2 & # here you start program 2 in the background
wait ${!} # you wait for it to finish (not really useful here)
#... some other stuff ...
daemon_1 & # we will not wait for this one to finish
program_3 & # here you start program 3 in the background
PID_PROGRAM_3=${!} # here you remember its PID
#... last other stuff ...
sync
wait $PID_PROGRAM_1
wait $PID_PROGRAM_3 # program 2 has already ended
# ...
Since wait works with jobs as well as with PID numbers, a lazy solution would be to put this at the end of the script:
由于 wait 既可以用于作业,也可以用于 PID,一个偷懒的办法是在脚本末尾加上:
for job in `jobs -p`
do
wait $job
done
The situation is more difficult if you run something that runs something else in the background, because then you have to find and wait for (if that is the case) the end of all the child processes: for example, if you run a daemon, it is probably not the case to wait for it to finish :-).
如果你运行的程序又在后台运行了别的程序,情况就更复杂了,因为你必须找到并(在需要的情况下)等待所有子进程结束:例如,如果你运行的是一个守护进程,那多半就不需要等它结束了 :-)。
Note:
注意:
wait ${!} means "wait till the last background process is completed", where $! is the PID of the last background process. So putting wait ${!} just after program_2 & is equivalent to executing program_2 directly, without sending it to the background with &.
From the help of wait:
Syntax: wait [n ...]    Key: n    A process ID or a job specification
wait ${!} 表示"等待最后一个后台进程完成",其中 $! 是最后一个后台进程的 PID。所以把 wait ${!} 紧跟在 program_2 & 之后,就相当于直接执行 program_2,而不用 & 把它放到后台。
以下摘自 wait 的帮助:
Syntax: wait [n ...]    Key: n    A process ID or a job specification
回答by Brian Chrisman
An alternative to stdbuf is awk '{print} END {fflush()}'. I wish there were a bash builtin to do this. Normally it shouldn't be necessary, but with older versions there might be bash synchronization bugs on file descriptors.
stdbuf 的一个替代方案是 awk '{print} END {fflush()}'。我希望 bash 有一个内置命令能做到这一点。通常这应该不是必需的,但在旧版本中,bash 在文件描述符上可能存在同步方面的 bug。
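For completeness, applied to the question's command this could look like the sketch below. The answer's program flushes only once at END; moving fflush() into the per-line block (a variation, not the author's exact suggestion) makes awk flush its output after every line:
补充一下,套用到问题里的命令,大致如下面的草图。该回答的 awk 程序只在 END 处刷新一次;把 fflush() 移到逐行执行的代码块里(这是一个变体,不是作者原本的写法)则会让 awk 在每一行之后都刷新输出:
/homedir/MyScript 2>&1 | awk '{print} END {fflush()}' > some_log.log   # as in the answer: flush once at the end
/homedir/MyScript 2>&1 | awk '{print; fflush()}' > some_log.log        # variation: flush after every line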
我希望有一个内置的 bash 来做到这一点。通常它不是必需的,但是对于旧版本,文件描述符上可能存在 bash 同步错误。