Is a read lock on a ReentrantReadWriteLock sufficient for concurrent reading of a RandomAccessFile
Declaration: This page is an English-Chinese parallel translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must likewise follow the CC BY-SA license, note the original URL and author information, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/1587218/
Is a read lock on a ReentrantReadWriteLock sufficient for concurrent reading of a RandomAccessFile
Asked by Ed Mazur
I'm writing something to handle concurrent read/write requests to a database file.
ReentrantReadWriteLock looks like a good match. If all threads access a shared RandomAccessFile object, do I need to worry about the file pointer with concurrent readers? Consider this example:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Database {

    private static final int RECORD_SIZE = 50;

    private static Database instance = null;

    private ReentrantReadWriteLock lock;
    private RandomAccessFile database;

    private Database() {
        lock = new ReentrantReadWriteLock();
        try {
            database = new RandomAccessFile("foo.db", "rwd");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static synchronized Database getInstance() {
        if (instance == null) {
            instance = new Database();
        }
        return instance;
    }

    public byte[] getRecord(int n) {
        byte[] data = new byte[RECORD_SIZE];
        try {
            // Begin critical section
            lock.readLock().lock();
            database.seek(RECORD_SIZE * n);
            database.readFully(data);
            lock.readLock().unlock();
            // End critical section
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }
}
In the getRecord() method, is the following interleaving possible with multiple concurrent readers?
Thread 1 -> getRecord(0)
Thread 2 -> getRecord(1)
Thread 1 -> acquires shared lock
Thread 2 -> acquires shared lock
Thread 1 -> seeks to record 0
Thread 2 -> seeks to record 1
Thread 1 -> reads record at file pointer (1)
Thread 2 -> reads record at file pointer (1)
If there are indeed potential concurrency issues using ReentrantReadWriteLock and RandomAccessFile, what would an alternative be?
Accepted answer by erickson
Yes, this code isn't synchronized properly, just as you outline. A read-write lock isn't useful if the write lock is never acquired; it's as if there is no lock.
Use a traditional synchronized block to make the seek and read appear atomic to other threads, or create a pool of RandomAccessFile instances that are borrowed for the exclusive use of a single thread and then returned. (Or simply dedicate a channel to each thread, if you don't have too many threads.)
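A minimal, runnable sketch of the first suggestion (the class name, record size, and file contents are illustrative, not from the original answer): guarding seek plus readFully with a plain synchronized block makes the pair atomic to every other thread sharing the RandomAccessFile.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class SyncDatabase {
    private static final int RECORD_SIZE = 4;
    private final RandomAccessFile database;

    SyncDatabase(Path file) throws IOException {
        database = new RandomAccessFile(file.toFile(), "rw");
    }

    byte[] getRecord(int n) throws IOException {
        byte[] data = new byte[RECORD_SIZE];
        // seek and readFully share the file pointer of this
        // RandomAccessFile, so the pair must appear atomic
        synchronized (database) {
            database.seek((long) RECORD_SIZE * n);
            database.readFully(data);
        }
        return data;
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("foo", ".db");
        Files.write(file, "AAAABBBB".getBytes("US-ASCII"));
        SyncDatabase db = new SyncDatabase(file);

        // Two concurrent readers; the synchronized block prevents the
        // seek/read interleaving described in the question.
        byte[][] out = new byte[2][];
        Thread t1 = new Thread(() -> {
            try { out[0] = db.getRecord(0); } catch (IOException e) { throw new RuntimeException(e); }
        });
        Thread t2 = new Thread(() -> {
            try { out[1] = db.getRecord(1); } catch (IOException e) { throw new RuntimeException(e); }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(new String(out[0], "US-ASCII") + " " + new String(out[1], "US-ASCII"));
    }
}
```

Each reader still gets its own record regardless of how the two threads are scheduled, because no thread can run seek between another thread's seek and readFully.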
Answered by Dhrumil Shah
This is a sample program that locks and unlocks a file.
try {
    // Get a file channel for the file
    File file = new File("filename");
    FileChannel channel = new RandomAccessFile(file, "rw").getChannel();

    // Use the file channel to create a lock on the file.
    // This method blocks until it can retrieve the lock.
    FileLock lock = channel.lock();

    // Alternatively, try acquiring the lock without blocking. This method
    // returns null or throws an exception if the file is already locked.
    try {
        lock = channel.tryLock();
    } catch (OverlappingFileLockException e) {
        // The file is already locked in this thread or virtual machine
    }

    // Release the lock
    lock.release();

    // Close the file
    channel.close();
} catch (Exception e) {
    e.printStackTrace();
}
Answered by Sam Barnum
You may want to consider using File System locks instead of managing your own locking.
Call getChannel().lock() on your RandomAccessFile to lock the file via the FileChannel class. This prevents write access, even from processes outside your control.
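A small sketch of this approach (the temp-file name is illustrative): FileLock implements AutoCloseable since Java 7, so try-with-resources can release the OS-level lock automatically.

```java
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;

public class FsLockDemo {
    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("foo", ".db");

        // Acquire an exclusive file-system lock; other processes that
        // try to lock this file will block until we release it.
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw");
             FileLock lock = raf.getChannel().lock()) {
            raf.writeBytes("hello");
        } // lock released, then file closed (reverse declaration order)

        System.out.println(Files.readAllLines(file).get(0));
    }
}
```

Note that file locks are held on behalf of the whole process, so they coordinate between processes, not between threads of the same JVM.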
Answered by McAnix
Rather than synchronizing the whole method, operate on individual lock objects. A ReentrantReadWriteLock supports a maximum of 65535 recursive write locks and 65535 read locks.
Assign a read and write lock
private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
private final Lock r = rwl.readLock();
private final Lock w = rwl.writeLock();
Then work on them...
Also: you are not catering for an exception occurring between locking and unlocking, which would leave the lock held. Acquire the lock as you enter the method (like a mutex locker), then do your work in a try block with the unlock in the finally section, e.g.:
public String[] allKeys() {
    r.lock();
    try { return m.keySet().toArray(new String[0]); }
    finally { r.unlock(); }
}
Answered by sjngm
OK, 8.5 years is a long time, but I hope it's not necro...
My problem was that we needed to access streams to read and write as atomically as possible. An important part was that our code was supposed to run on multiple machines accessing the same file. However, all examples on the Internet stopped at explaining how to lock a RandomAccessFile and didn't go any deeper. So my starting point was Sam's answer.
Now, from a distance it makes sense to have a certain order:
- lock the file
- open the streams
- do whatever with the streams
- close the streams
- release the lock
However, to allow releasing the lock in Java the streams must not be closed! Because of that the entire mechanism becomes a little weird (and wrong?).
In order to make auto-closing work, one must remember that the JVM closes the resources in the reverse order of their declaration in the try-with-resources header. This means that a flow looks like this:
- open the streams
- lock the file
- do whatever with the streams
- release the lock
- close the streams
Tests showed that this doesn't work. Therefore, auto-close half way and do the rest in good ol' Java 1 fashion:
try (RandomAccessFile raf = new RandomAccessFile(filename, "rwd");
     FileChannel channel = raf.getChannel()) {

    FileLock lock = channel.lock();

    FileInputStream in = new FileInputStream(raf.getFD());
    FileOutputStream out = new FileOutputStream(raf.getFD());

    // do all reading
    ...

    // that moved the pointer in the channel to somewhere in the file,
    // therefore reposition it to the beginning:
    channel.position(0);
    // as the new content might be shorter it's a requirement to do this, too:
    channel.truncate(0);

    // do all writing
    ...

    out.flush();

    lock.release();

    in.close();
    out.close();
}
Note that the methods using this must still be synchronized. Otherwise parallel executions may throw an OverlappingFileLockException when calling lock().
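A runnable sketch of that last point (simplified from the answer's flow: the file is reopened on each call, and the class and file names are illustrative): the synchronized method serializes in-process access, so no two threads of this JVM ever call channel.lock() on the same file at the same time.

```java
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;

public class SerializedAccess {
    private final Path file;

    SerializedAccess(Path file) { this.file = file; }

    // synchronized prevents OverlappingFileLockException between threads
    // of this process; the FileLock still guards against other processes.
    synchronized void append(String line) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rwd");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) {
            raf.seek(raf.length());
            raf.writeBytes(line + "\n");
        } // lock released, then channel and file closed
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("locked", ".txt");
        SerializedAccess access = new SerializedAccess(f);
        Thread t1 = new Thread(() -> {
            try { access.append("one"); } catch (Exception e) { throw new RuntimeException(e); }
        });
        Thread t2 = new Thread(() -> {
            try { access.append("two"); } catch (Exception e) { throw new RuntimeException(e); }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(Files.readAllLines(f).size());
    }
}
```

Both appends complete without an OverlappingFileLockException, and both lines land in the file.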
Please share experiences in case you have any...

