Log rotating with a Bash script
Disclaimer: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/5789526/
Asked by Ferenc Deak
I have the following issue:
I have an application which continuously produces output on stderr and stdout. The output of this application is captured in a logfile (the app is redirected as: &> log.txt). I don't have any option to produce proper logging to a file for this.
Now, I have a cron job which runs every hour and, besides doing other things, also tries to rotate the logfile above by copying it to log.txt.1 and then creating an empty file and copying it over log.txt.
It looks like:
cp log.txt log.txt.1
touch /tmp/empty
cp /tmp/empty log.txt
The problem is that the application is still writing to it, and because of this I get some very strange stuff in log.txt.1: it starts with a lot of garbage characters, and the actual log content is somewhere at the end.
Do you have any idea how to do log rotation correctly in this specific situation? (I also tried cat log.txt > log.txt.1, which does not work.) Using logrotate for this specific application is not an option; there is a whole mechanism behind the scenes that I may not change.
Thanks, f.
Accepted answer by pepoluan
Okay, here's an idea, inspired by http://en.wikibooks.org/wiki/Bourne_Shell_Scripting/Files_and_streams
make a named pipe:
mkfifo /dev/mypipe
redirect stdout and stderr to the named pipe:
&> /dev/mypipe
read from mypipe into a file:
cat < /dev/mypipe > /var/log/log.txt &
when you need to log-rotate, kill the cat, rotate the log, and restart the cat.
Now, I haven't tested this. Tell us how it goes.
Note: you can give the named pipe any name, like /var/tmp/pipe1, /var/log/pipe, /tmp/abracadabra, and so on. Just make sure to re-create the pipe after booting, before your logging script runs.
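For example, a minimal sketch of such a pre-start check (the /var/tmp/pipe1 path is just one of the example names above, not something the answer prescribes):
#!/bin/bash
# Recreate the named pipe if it is missing, e.g. after a reboot,
# before the application and the logging script are started.
PIPENAME=/var/tmp/pipe1
[ -p "$PIPENAME" ] || mkfifo "$PIPENAME"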
Alternatively, don't use cat, but use a simple script file:
#!/bin/bash
while : ; do
    read line
    printf "%s\n" "$line"
done
This script guarantees an output for every newline read. (cat might not start outputting until its buffer is full or it encounters an EOF)
Final -- and TESTED -- attempt
IMPORTANT NOTE: Please read the comments from @andrew below. There are several situations you need to be aware of.
Alright! Finally got access to my Linux box. Here's how:
Step 1: Make this recorder script:
#!/bin/bash
LOGFILE="/path/to/log/file"
SEMAPHORE="/path/to/log/file.semaphore"
while : ; do
    read line
    # Wait while the semaphore file exists (a log rotation is in progress).
    while [[ -f $SEMAPHORE ]]; do
        sleep 1s
    done
    printf "%s\n" "$line" >> "$LOGFILE"
done
Step 2: Put the recorder to work:
Make a named pipe:
mkfifo $PIPENAME
Redirect your application's STDOUT & STDERR to the named pipe:
...things... &> $PIPENAME
Start the recorder:
/path/to/recorder.sh < $PIPENAME &
You might want to nohup the above to make it survive logouts. Done!
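As a sketch of that nohup suggestion (same placeholder paths and $PIPENAME variable as in the steps above), starting the recorder so it survives logouts might look like:
nohup /path/to/recorder.sh < "$PIPENAME" > /dev/null 2>&1 &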
Step 3: If you need to rotate the log, pause the recorder:
touch /path/to/log/file.semaphore
mv /path/to/log/file /path/to/archive/of/log/file
rm /path/to/log/file.semaphore
I suggest putting the above steps into their own script. Feel free to change the second line to whatever log-rotating method you want to use.
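For illustration, a self-contained version of that rotation script might look like the following; the timestamped archive name is just an assumption, and the mv line can be swapped for any other rotation method:
#!/bin/bash
LOGFILE="/path/to/log/file"
touch "$LOGFILE.semaphore"                       # pause the recorder
mv "$LOGFILE" "$LOGFILE.$(date +%Y%m%d_%H%M%S)"  # archive the current log
rm "$LOGFILE.semaphore"                          # let the recorder resume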
Note: If you're handy with C programming, you might want to write a short C program to perform the function of recorder.sh. A compiled C program will certainly be lighter than a nohup-ed detached bash script.
Note 2: David Newcomb provided a helpful warning in the comments: while the recorder is not running, writes to the pipe will block and may cause the program to fail unpredictably. Make sure the recorder is down (or rotating) for as short a time as possible.
So, if you can ensure that rotating happens really quickly, you can replace sleep (a built-in command which accepts only integer values) with /bin/sleep (a program that accepts float values) and set the sleep period to 0.5 or shorter.
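In the recorder script above, that would mean changing the wait loop to something like this (assuming the /bin/sleep on your system accepts fractional seconds):
while [[ -f $SEMAPHORE ]]; do
    /bin/sleep 0.5
done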
Answered by ivan_pozdeev
First of all, you really should not reinvent the square wheel here. Your peers are probably against rotating the logs on a daily schedule, which automatically applies to all scripts in /etc/logrotate.d/ - this can be avoided by placing the script elsewhere.
Now, the standard approach to log rotation (the one implemented in logrotate) can be implemented by any other facility just as well. E.g. here's a sample implementation in bash:
MAXLOG=<maximum index of a log copy>
for i in `seq $((MAXLOG-1)) -1 1`; do
    mv "log.$i" "log.$((i+1))" 2>/dev/null   # ignore "file not found" errors for gaps in the sequence
done
mv log log.1   # since a file descriptor is linked to an inode rather than a path,
               # if you move (or even remove) an open file, the program will continue
               # to write into it as if nothing happened; see
               # https://stackoverflow.com/questions/5219896/how-do-the-unix-commands-mv-and-rm-work-with-open-files
<make the daemon reopen the log file with the old path>
The last item is done by sending SIGHUP or (less often) SIGUSR1 and having a signal handler in the daemon that replaces the corresponding file descriptor or variable. This way, the switch is atomic, so there's no interruption in logging availability. In bash, this would look like:
trap 'exec &>"$LOGFILE"' HUP
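As a rough sketch of how the two sides fit together (the log path, the myapp name and the PID file location are illustrative assumptions, not part of the answer):
# In the bash script that produces the log:
LOGFILE=/var/log/myapp.log
exec &>"$LOGFILE"              # initial redirection
trap 'exec &>"$LOGFILE"' HUP   # reopen the log file at the old path on SIGHUP

# In the rotation script, after the mv chain above:
kill -HUP "$(cat /var/run/myapp.pid)"   # assumes the logging script saved its PID here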
The other approach is to make the writing program itself keep track of the log size each time it writes, and do the rotation itself. This limits where you can write to, and the available rotation logic, to whatever the program itself supports. But it has the benefit of being a self-contained solution that checks the log size at each write rather than on a schedule. Many languages' standard libraries have such a facility. As a drop-in solution, this is implemented in Apache's rotatelogs:
<your_program> 2>&1 | rotatelogs <opts> <logfile> <rotation_criteria>
Answered by Victor Sergienko
I wrote logrotee this weekend. I probably wouldn't have if I had read @JdeBP's great answer about multilog before.
I focused on it being lightweight and being able to bzip2 its output chunks like:
verbosecommand | logrotee \
--compress "bzip2 {}" --compress-suffix .bz2 \
/var/log/verbosecommand.log
There's a lot to be done and tested yet, though.
Answered by Cavaz
You can leverage rotatelogs (docs here). This utility will decouple your script's stdout from the log file, managing the rotation in a transparent way. For example:
your_script.sh | rotatelogs /var/log/your.log 100M
This will automatically rotate the output file when it reaches 100M (it can also be configured to rotate based on a time interval).
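For instance, rotating once a day instead of by size might look like this (86400 seconds; check the rotatelogs documentation for the exact options available in your version):
your_script.sh | rotatelogs /var/log/your.log 86400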
Answered by bravomail
You can also pipe your output through the Apache rotatelogs utility, or through the following script:
#!/bin/ksh
#rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]
numberOfFiles=10
while getopts "n:fltvecp:L:" opt; do
    case $opt in
    n)  numberOfFiles="$OPTARG"
        if ! printf '%s\n' "$numberOfFiles" | grep '^[0-9][0-9]*$' >/dev/null; then
            printf 'Numeric numberOfFiles required %s. rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]\n' "$numberOfFiles" 1>&2
            exit 1
        elif [ $numberOfFiles -lt 3 ]; then
            printf 'numberOfFiles < 3 %s. rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]\n' "$numberOfFiles" 1>&2
        fi
        ;;
    *)  printf '-%s ignored. rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]\n' "$opt" 1>&2
        ;;
    esac
done
shift $(( $OPTIND - 1 ))
pathToLog="$1"
fileSize="$2"
if ! printf '%s\n' "$fileSize" | grep '^[0-9][0-9]*[BKMG]$' >/dev/null; then
    printf 'Numeric fileSize followed by B|K|M|G required %s. rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]\n' "$fileSize" 1>&2
    exit 1
fi
# Split the size argument into its B/K/M/G qualifier and its numeric part.
sizeQualifier=`printf "%s\n" "$fileSize" | sed "s%^[0-9][0-9]*\([BKMG]\)$%\1%"`
multip=1
case $sizeQualifier in
    B) multip=1 ;;
    K) multip=1024 ;;
    M) multip=1048576 ;;
    G) multip=1073741824 ;;
esac
fileSize=`printf "%s\n" "$fileSize" | sed "s%^\([0-9][0-9]*\)[BKMG]$%\1%"`
fileSize=$(( $fileSize * $multip ))
fileSize=$(( $fileSize / 1024 ))   # rotation threshold in KB (du -k reports KB)
if [ $fileSize -le 10 ]; then
    printf 'fileSize %sKB < 10KB. rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]\n' "$fileSize" 1>&2
    exit 1
fi
if ! touch "$pathToLog"; then
    printf 'Could not write to log file %s. rotatelogs.sh -n numberOfFiles pathToLog fileSize[B|K|M|G]\n' "$pathToLog" 1>&2
    exit 1
fi
lineCnt=0
while read line
do
    printf "%s\n" "$line" >>"$pathToLog"
    lineCnt=$(( $lineCnt + 1 ))
    # Check the file size every 200 lines rather than on every write.
    if [ $lineCnt -gt 200 ]; then
        lineCnt=0
        curFileSize=`du -k "$pathToLog" | sed -e 's/^[ ][ ]*//' -e 's%[ ][ ]*$%%' -e 's/[ ][ ]*/ /g' | cut -f1 -d" "`
        if [ $curFileSize -gt $fileSize ]; then
            # Compress the current log into a timestamped archive and truncate it.
            DATE=`date +%Y%m%d_%H%M%S`
            cat "$pathToLog" | gzip -c >"${pathToLog}.${DATE}.gz" && cat /dev/null >"$pathToLog"
            # Prune the oldest archives until fewer than numberOfFiles remain.
            curNumberOfFiles=`ls "$pathToLog".[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9].gz | wc -l | sed -e 's/^[ ][ ]*//' -e 's%[ ][ ]*$%%' -e 's/[ ][ ]*/ /g'`
            while [ $curNumberOfFiles -ge $numberOfFiles ]; do
                fileToRemove=`ls "$pathToLog".[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9].gz | head -1`
                if [ -f "$fileToRemove" ]; then
                    rm -f "$fileToRemove"
                    curNumberOfFiles=`ls "$pathToLog".[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9].gz | wc -l | sed -e 's/^[ ][ ]*//' -e 's%[ ][ ]*$%%' -e 's/[ ][ ]*/ /g'`
                else
                    break
                fi
            done
        fi
    fi
done
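A possible usage example, assuming the script above is saved as rotatelogs.sh and made executable (the application name is a placeholder): keep at most 10 compressed archives and rotate whenever the live log grows past 50M:
your_application 2>&1 | ./rotatelogs.sh -n 10 /var/log/your_application.log 50M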