How to avoid a race condition when using a lock-file to prevent two instances of a script running simultaneously?

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow, original question: http://stackoverflow.com/questions/325628/



linux bash shell locking mutex

Asked by n-alexander

A typical approach to avoid two instances of the same script running simultaneously looks like this:


[ -f ".lock" ] && exit 1
touch .lock
# do something
rm .lock

Is there a better way to lock on files from a shell script, avoiding a race condition? Must directories be used instead?


Answered by Barry Kelly

Yes, there is indeed a race condition in the sample script. You can use bash's noclobber option in order to get a failure in case of a race, when a different script sneaks in between the -f test and the touch.


The following is a sample code-snippet (inspired by this article) that illustrates the mechanism:


lockfile="/var/tmp/mylock"   # example path; any location writable by all instances works

if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null;
then
   # This will cause the lock-file to be deleted in case of a
   # premature exit.
   trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

   # Critical Section: Here you'd place the code/commands you want
   # to be protected (i.e., not run in multiple processes at once).

   rm -f "$lockfile"
   trap - INT TERM EXIT
else
   echo "Failed to acquire lock-file: $lockfile."
   echo "Held by process $(cat "$lockfile")."
fi
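
What makes this safe is that with noclobber set, the > redirection is effectively an open(2) with O_CREAT|O_EXCL: when several processes race to create the lock-file, exactly one succeeds. You can watch the behavior in an interactive shell (the file name here is just an example):

$ set -o noclobber
$ echo "$$" > /tmp/demo.lock    # first attempt creates the file and succeeds
$ echo "$$" > /tmp/demo.lock    # second attempt fails
bash: /tmp/demo.lock: cannot overwrite existing file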

Answered by jpastuszek

Try the flock command:


exec 200>"$LOCK_FILE"
flock -e -n 200 || exit 1

It will exit if the lock file is locked. It is atomic and it will work over recent versions of NFS.

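A minimal sketch of protecting a whole script this way (the lock path is an example; fd 200 is arbitrary):

#!/bin/bash
LOCK_FILE="/var/tmp/myscript.lock"   # example path

# Open (creating if needed) the lock file on fd 200. The kernel drops
# the flock automatically when the script exits and the fd is closed,
# so no explicit cleanup is required.
exec 200>"$LOCK_FILE"
flock -e -n 200 || { echo "Another instance is running." >&2; exit 1; }

# ... protected work here ...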

I did a test. I created a counter file containing 0 and executed the following in a loop, 500 times on each of two servers simultaneously:


#!/bin/bash

# Take an exclusive flock on fd 200; it is held until the script exits.
exec 200>/nfs/mount/testlock
flock -e 200

# Read the shared counter, print it, then write back the incremented value.
NO=`cat /nfs/mount/counter`
echo "$NO"
let NO=NO+1
echo "$NO" > /nfs/mount/counter

One node was fighting with the other for the lock. When both runs finished, the file content was 1000. I have tried this multiple times and it always works!


Note: the NFS client was RHEL 5.2 and the server was a NetApp.


Answered by Hank

Lock your script (against parallel runs):


http://wiki.bash-hackers.org/howto/mutex


FYI.

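Regarding the "must directories be used instead?" part of the question: mkdir either creates the directory or fails, in one atomic step, so a directory works as a lock as well. A minimal sketch (the lock path is an example):

#!/bin/bash
lockdir="/tmp/myscript.lock.d"   # example path

# The existence check and the creation happen in a single atomic
# mkdir call, so there is no window for a second instance to slip in.
if mkdir "$lockdir" 2> /dev/null; then
    trap 'rmdir "$lockdir"' EXIT
    # ... critical section ...
else
    echo "Another instance is running." >&2
    exit 1
fi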

Answered by n-alexander

Seems like I've found an easier solution: man lockfile

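The lockfile utility ships with the procmail package. A minimal sketch of the usual pattern (the path is an example; -r 0 makes it fail immediately instead of retrying when the lock is already held):

#!/bin/bash
lockfile -r 0 /var/tmp/myscript.lock || exit 1
trap 'rm -f /var/tmp/myscript.lock' EXIT

# ... critical section ...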