bash 如何创建备份脚本来比较日期和删除最旧的文件
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/15778292/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me):
StackOverFlow
How to create a backup script to compare dates and delete oldest file
提问by user2238653
I have a backup script for a Minecraft server that I'm making. I have gotten to the point where I can consolidate all files into one folder, name it with the current date and time, then compress it into one .zip file for WorldEdit to recognize as a backup. The problem is, I want this script to recognize when it reaches 4 backups and start deleting the oldest one by comparing dates. I also need it to not glitch out when there aren't already 4 backup files. How do I go about this? Here is my script.
我有一个为我正在搭建的 Minecraft 服务器编写的备份脚本。我已经可以将所有文件合并到 1 个文件夹中，将其命名为当前日期和时间，然后将其压缩为 1 个 .zip 文件，以便 WorldEdit 识别为备份。问题是，我希望这个脚本在达到 4 个备份时，通过比较日期开始删除最旧的那个。我还需要它在还不足 4 个备份文件时不会出错。我该怎么做？这是我的脚本。
#!/bin/bash
DIR="$(dirname "$0")"
cd $DIR
clear
while [ true ]
do
    # Set $DATE to current date and time
    DATE="`date +%Yy-%mm-%dd_%Hh-%Mm`"
    # Make directory with date and time
    mkdir $DATE
    # copy all files into 1 folder with date and time
    cp ~/Desktop/Test.command ~/Desktop/Test2.command $DIR/$DATE
    sleep 1
    # compress folder into $DATE and remove previous files
    zip $DATE -rm $DATE
    # wait for round 2 in 1 hour
    echo Waiting 1 hour
    sleep 3600
done
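For reference, the rotation being asked about could slot in right after the zip step; the following is only a minimal sketch (not part of the original question), assuming the backups stay in $DIR and keep the %Yy-%mm-%dd_%Hh-%Mm naming used above:

# Hypothetical rotation step, not part of the original script:
# list backups newest-first and drop everything past the fourth.
BACKUPS=( $(ls -t ????y-??m-??d_??h-??m.zip 2>/dev/null) )
if [ "${#BACKUPS[@]}" -gt 4 ]
then
    # "${BACKUPS[@]:4}" expands to nothing when there are 4 or fewer
    # backups, so the loop body simply never runs in that case.
    for OLD in "${BACKUPS[@]:4}"
    do
        rm -f "$OLD"
    done
fi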
回答by Bill Z.
Here is a semi-related bash script which I recently wrote for a cron job. It looks at a directory where automatically generated backups are stored and deletes the oldest ones whenever the directory's size exceeds a user-specified limit, all while maintaining a .log file.
这是我最近为一个 cron 任务编写的一段相关的 bash 脚本。它检查存放自动生成备份的目录，一旦目录大小超过用户指定的上限，就删除最旧的备份，同时维护一个 .log 日志文件。
#!/bin/bash
# IFS is needed for bash to recognize spaces
IFS=$'\n'
# Backup directory
BACKUPDIR="your directory here"
# Backup directory size limit (in kB)
DIRLIMIT=20971520
# Start logging
LOGFILE="where you want the log file stored"
echo -e "Cron job started at $(date +"%m/%d/%Y %r")\r" >> $LOGFILE
echo -e "Directory size limit: $DIRLIMIT [kB]\r" >> $LOGFILE
# If $BACKUPDIR exists, find its size
if [ -d "$BACKUPDIR" ]
then
    DIRSIZE=$(du -sk "$BACKUPDIR" | awk '{print $1}')
    echo -e "Current directory size: $DIRSIZE [kB]\r" >> $LOGFILE
else
    echo -e "$BACKUPDIR does not exist!\r" >> $LOGFILE
fi
# Check if $BACKUPDIR's size is greater than $DIRLIMIT. If so, delete
# old files. If not, exit.
if [ $DIRSIZE -gt $DIRLIMIT ]
then
    echo -e "Directory size over limit! Attempting to delete oldest log backups...\r" >> $LOGFILE
    LOOPSIZE=$DIRSIZE
    while [ $LOOPSIZE -gt $DIRLIMIT ]
    do
        # The find command below finds files (-type f) in the $BACKUPDIR directory only
        # (-maxdepth 1) and prints their last modified time and filename followed by a newline (-printf "%T@ %p\n").
        # Then it sorts by time (sort -n) and selects the file which was modified furthest in the past. The
        # awk command removes the timestamp (which is in field $1). Finally, xargs -I{} rm -f {} deletes the file
        # even when spaces are present in the full file name.
        echo -e "Deleting file: $(find "$BACKUPDIR" -maxdepth 1 -type f -printf "%T@ %f\n" | sort -n | head -1 | awk '{ print substr($0, index($0,$2)) }')\r" >> $LOGFILE
        find "$BACKUPDIR" -maxdepth 1 -type f -printf "%T@ %p\n" | sort -n | head -1 | awk '{ print substr($0, index($0,$2)) }' | xargs -I{} rm -f {}
        # Recalculate the $BACKUPDIR size; the loop exits once it drops below the limit.
        LOOPSIZE=$(du -sk "$BACKUPDIR" | awk '{print $1}')
        echo -e "Directory size is now: $LOOPSIZE [kB]\r" >> $LOGFILE
    done
    echo -e "Operation completed successfully! Final directory size is: $LOOPSIZE [kB]\r" >> $LOGFILE
else
    echo -e "Directory size is less than limit. Exiting.\r" >> $LOGFILE
fi
# Finish logging
echo -e "\r" >> $LOGFILE
echo -e "Cron job exiting at $(date +"%m/%d/%Y %r")\r" >> $LOGFILE
echo -e "----------------------------------------------------------------------\r" >> $LOGFILE
And just to make sure the .log file(s) don't ever become obnoxiously large, I have made a cron script to trim the uppermost entries:
为了确保 .log 文件不会变得非常大,我制作了一个 cron 脚本来修剪最上面的条目:
#!/bin/bash
# IFS is needed for bash to recognize spaces
IFS=$'\n'
# .log files directory
LOGFILESDIR="your directory here"
# .log file size limits (in KB)
LOGFILELIMIT=35
# Find .log files in $LOGFILESDIR and remove the earliest logs if
# the filesize is greater than $LOGFILELIMIT.
for FILE in "$LOGFILESDIR"/*.log
do
    LOGSIZE=$(du -sk "$FILE" | awk '{print $1}')
    while [ $LOGSIZE -gt $LOGFILELIMIT ]
    do
        # The sed command deletes rows from the top
        # until it encounters a run of "-" characters
        # (which separates the entries in the log files)
        sed -i '1,/----------/d' "$FILE"
        LOGSIZE=$(du -sk "$FILE" | awk '{print $1}')
    done
done
exit
回答by Armali
rm -f `ls -t ????y-??m-??d_??h-??m.zip|sed 1,4d`
or
或者
rm -f `ls -r ????y-??m-??d_??h-??m.zip|sed 1,4d`
will delete all but the newest four zip files (based on modification time or file name, respectively).
将删除除最新的四个 zip 文件之外的所有文件(分别基于修改时间或文件名)。
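If the concern from the question applies, namely that fewer than four backups may exist yet, a slightly more defensive variant (my own sketch, not part of the answer) pipes the surplus names through a read loop, which simply never runs when sed leaves nothing behind:

# Defensive variant of the one-liner above (an assumption, not from the answer):
# newest first, skip the four newest, remove whatever is left, if anything.
ls -t ????y-??m-??d_??h-??m.zip 2>/dev/null | sed 1,4d | while read -r OLDZIP
do
    rm -f "$OLDZIP"
done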
回答by Jeevan Dongre
Hi, I wrote a script quite a long time ago which does something similar: it picks the backup that is 7 days old and deletes it from AWS S3. Hope this will be useful for you.
嗨，我很久以前写过一个做类似事情的脚本：它会找出 7 天前的备份文件并将其从 AWS S3 中删除。希望这对你有用。
#!/bin/bash
NOWDATE=`date -I -d '7 days ago'`
BACKUPNAME="$NOWDATE.sql.gz"
/usr/local/bin/s3cmd del s3://path/to/bucket/$BACKUPNAME
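As a worked example (the dates here are only illustrative, not from the answer): if the script runs on 2013-04-10, date -I -d '7 days ago' prints 2013-04-03, so the object removed would be s3://path/to/bucket/2013-04-03.sql.gz. This assumes the nightly dumps are named after the date they were created, as the $BACKUPNAME pattern implies.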
