Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me) on StackOverflow.
Original URL: http://stackoverflow.com/questions/1391918/
Does java have a "LinkedConcurrentHashMap" data structure?
Asked by Peter Lee
I need a data structure that is a LinkedHashMap and is thread safe.
How can I do that ?
Answered by David Crawshaw
Collections.synchronizedMap(new LinkedHashMap())
Answered by Yishai
You can wrap the map in Collections.synchronizedMap to get a synchronized hash map that maintains insertion order. This is not as efficient as a ConcurrentHashMap (and doesn't implement the extra interface methods of ConcurrentMap), but it does get you (somewhat) thread-safe behavior.
Even the mighty Google Collections doesn't appear to have solved this particular problem yet. However, there is one project that does try to tackle it.
I say somewhat, because iteration is still not thread-safe, in the sense that ConcurrentModificationException can still happen.
Answered by Adrian Pronk
Since ConcurrentHashMap offers a few important extra methods that are not in the Map interface, simply wrapping a LinkedHashMap with synchronizedMap won't give you the same functionality. In particular, it won't give you anything like the putIfAbsent(), replace(key, oldValue, newValue), and remove(key, oldValue) methods that make ConcurrentHashMap so useful.
Unless there's some Apache library that has implemented what you want, you'll probably have to use a LinkedHashMap and provide suitable synchronized{} blocks of your own.
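As a rough sketch of such a synchronized{} block (the class and method names here are mine, not from the answer), the putIfAbsent check-then-act can be reproduced over a synchronizedMap-wrapped LinkedHashMap by holding the map's own monitor around the whole sequence:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class PutIfAbsentDemo {

    // Emulates ConcurrentMap.putIfAbsent over a synchronizedMap wrapper:
    // the get and the put must happen under one lock, or another thread
    // can insert the same key in between them.
    static <K, V> V putIfAbsent(Map<K, V> map, K key, V value) {
        synchronized (map) {
            V existing = map.get(key);
            if (existing == null) {
                map.put(key, value);
            }
            return existing;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> map =
                Collections.synchronizedMap(new LinkedHashMap<String, Integer>());
        System.out.println(putIfAbsent(map, "a", 1)); // null: "a" was absent
        System.out.println(putIfAbsent(map, "a", 2)); // 1: existing value wins
    }
}
```

Note that synchronizedMap locks on the wrapper itself, which is why the synchronized block must use the wrapped map reference, not the original LinkedHashMap.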
Answered by Malaxeur
The answer is pretty much no: there's no equivalent of a ConcurrentHashMap that preserves ordering (like LinkedHashMap does). As other people pointed out, you can wrap your collection using Collections.synchronizedMap(yourMap), but this will not give you the same level of fine-grained locking. It will simply block the entire map on every operation.
Your best bet is either to use synchronized around any access to the map (where it matters, of course; you may not care about dirty reads, for example) or to write a wrapper around the map that determines when it should or should not lock.
Answered by hohonuuli
There are a number of different approaches to this problem. You could use:
Collections.synchronizedMap(new LinkedHashMap());
as the other responses have suggested, but this has several gotchas you'll need to be aware of. Most notable is that you will often need to hold the collection's synchronized lock when iterating over it, which in turn prevents other threads from accessing the collection until you've finished iterating. (See Java theory and practice: Concurrent collections classes.) For example:
synchronized (map) {
    for (Object key : map.keySet()) {
        // Do work here
    }
}
Using
new ConcurrentHashMap();
is probably a better choice as you won't need to lock the collection to iterate over it.
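To illustrate why no lock is needed (a minimal sketch of my own, not from the answer): ConcurrentHashMap's iterators are weakly consistent, so they tolerate concurrent modification instead of throwing ConcurrentModificationException.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentIterationDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);

        // Safe without synchronized(map): the iterator is weakly consistent
        // and will not throw even if other threads modify the map meanwhile.
        int sum = 0;
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            sum += entry.getValue();
        }
        System.out.println(sum); // 3
    }
}
```

The trade-off is the one the question is about: ConcurrentHashMap gives up the insertion-order iteration that LinkedHashMap provides.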
Finally, you might want to consider a more functional programming approach. That is, you could treat the map as essentially immutable: instead of adding to an existing map, you would create a new one that contains the contents of the old map plus the new addition. This sounds pretty bizarre at first, but it is actually the way Scala deals with concurrency and collections.
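A hedged sketch of that copy-on-write idea (the class and names are my own illustration, not Scala's API): each write copies the current LinkedHashMap, adds the entry, and publishes the new snapshot atomically, so readers iterate a stable, insertion-ordered view with no locking at all.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class CopyOnWriteLinkedMap<K, V> {

    // Always holds an immutable, insertion-ordered snapshot
    private final AtomicReference<Map<K, V>> ref =
            new AtomicReference<Map<K, V>>(
                    Collections.unmodifiableMap(new LinkedHashMap<K, V>()));

    public void put(K key, V value) {
        // Retry loop: copy the current snapshot, add the entry, and
        // publish it only if no other writer got there first.
        while (true) {
            Map<K, V> current = ref.get();
            Map<K, V> updated = new LinkedHashMap<K, V>(current);
            updated.put(key, value);
            if (ref.compareAndSet(current, Collections.unmodifiableMap(updated))) {
                return;
            }
        }
    }

    // Readers can iterate this snapshot safely; later writes never affect it
    public Map<K, V> snapshot() {
        return ref.get();
    }
}
```

Writes are O(n) copies, so this only makes sense for small or read-mostly maps.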
Answered by Andrey Adamovich
There is one implementation available under Google Code. A quote from their site:
A high performance version of java.util.LinkedHashMap for use as a software cache.
Design
- A concurrent linked list runs through a ConcurrentHashMap to provide eviction ordering.
- Supports insertion and access ordered eviction policies (FIFO, LRU, and Second Chance).
Answered by Paul
You can use a ConcurrentSkipListMap, available in Java SE 6 or later. It is order-preserving in that keys are sorted according to their natural ordering; you need either a Comparator or keys that are Comparable. To mimic linked-hash-map behavior (iteration order is the order in which entries were added), I implemented my key objects so that they always compare greater than any given other object unless they are equal to it.
A wrapped synchronized LinkedHashMap did not suffice because, as stated in http://www.ibm.com/developerworks/java/library/j-jtp07233.html: "The synchronized collections wrappers, synchronizedMap and synchronizedList, are sometimes called conditionally thread-safe -- all individual operations are thread-safe, but sequences of operations where the control flow depends on the results of previous operations may be subject to data races. The first snippet in Listing 1 shows the common put-if-absent idiom -- if an entry does not already exist in the Map, add it. Unfortunately, as written, it is possible for another thread to insert a value with the same key between the time the containsKey() method returns and the time the put() method is called. If you want to ensure exactly-once insertion, you need to wrap the pair of statements with a synchronized block that synchronizes on the Map m."
So the only thing that helps is a ConcurrentSkipListMap, which is 3-5 times slower than a normal ConcurrentHashMap.
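One way to sketch that trick (my own illustration of the answer's idea, not the author's code): tag each key with a monotonically increasing sequence number and let the skip list sort on that, which reproduces insertion-order iteration.

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class InsertionOrderSkipList {

    // Key wrapper that sorts by an insertion sequence number
    static final class SeqKey implements Comparable<SeqKey> {
        final long seq;
        final String name;

        SeqKey(long seq, String name) {
            this.seq = seq;
            this.name = name;
        }

        public int compareTo(SeqKey other) {
            return Long.compare(seq, other.seq);
        }
    }

    public static void main(String[] args) {
        AtomicLong counter = new AtomicLong();
        ConcurrentNavigableMap<SeqKey, Integer> map =
                new ConcurrentSkipListMap<SeqKey, Integer>();

        map.put(new SeqKey(counter.getAndIncrement(), "b"), 2);
        map.put(new SeqKey(counter.getAndIncrement(), "a"), 1);

        // Iterates in insertion order ("b" first), not alphabetical order
        for (SeqKey key : map.keySet()) {
            System.out.println(key.name);
        }
    }
}
```

Note the trade-off: with sequence-numbered keys you can no longer look an entry up by its original key without maintaining a secondary index.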
Answered by Kanagavelu Sugumar
I just tried a synchronized, bounded LRU map based on an insertion-ordered LinkedHashMap (LinkedConcurrentHashMap below), with a ReadWriteLock for synchronization.
So when you are using an iterator, you have to acquire the write lock to avoid ConcurrentModificationException.
This is better than Collections.synchronizedMap.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LinkedConcurrentHashMap<K, V> {

    private final LinkedHashMap<K, V> linkedHashMap;
    private final int cacheSize;
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    public LinkedConcurrentHashMap(LinkedHashMap<K, V> psCacheMap, int size) {
        this.linkedHashMap = psCacheMap;
        this.cacheSize = size;
    }

    public void put(K key, V value) {
        Lock writeLock = readWriteLock.writeLock();
        writeLock.lock();
        try {
            // Evict the oldest (first-inserted) entry once the cache is full
            if (linkedHashMap.size() >= cacheSize && cacheSize > 0) {
                K oldAgedKey = linkedHashMap.keySet().iterator().next();
                linkedHashMap.remove(oldAgedKey);
            }
            linkedHashMap.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    public V get(K key) {
        Lock readLock = readWriteLock.readLock();
        readLock.lock();
        try {
            return linkedHashMap.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public boolean containsKey(K key) {
        Lock readLock = readWriteLock.readLock();
        readLock.lock();
        try {
            return linkedHashMap.containsKey(key);
        } finally {
            readLock.unlock();
        }
    }

    public V remove(K key) {
        Lock writeLock = readWriteLock.writeLock();
        writeLock.lock();
        try {
            return linkedHashMap.remove(key);
        } finally {
            writeLock.unlock();
        }
    }

    // Callers must hold the write lock from getLock() while iterating
    // entrySet(), otherwise ConcurrentModificationException can occur.
    public ReadWriteLock getLock() {
        return readWriteLock;
    }

    public Set<Map.Entry<K, V>> entrySet() {
        return linkedHashMap.entrySet();
    }
}
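For comparison, here is a much shorter bounded-LRU sketch (my own, not part of the answer above) that uses LinkedHashMap's built-in removeEldestEntry hook and wraps the result in synchronizedMap. Iteration still needs an external synchronized block, as earlier answers noted.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedLruDemo {

    static <K, V> Map<K, V> boundedLru(final int maxSize) {
        // accessOrder = true: iteration runs from least- to most-recently used,
        // and removeEldestEntry evicts automatically on each put
        return Collections.synchronizedMap(
                new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > maxSize;
                    }
                });
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = boundedLru(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3); // evicts "a", the least recently used entry
        System.out.println(cache.containsKey("a")); // false
    }
}
```

One caveat: with accessOrder = true even get() mutates the linked list, so this variant could not use a shared read lock the way the class above (which relies on insertion order) safely does.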