
Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/3339801/

Date: 2020-10-30 01:23:09  Source: igfitidea

Atomically incrementing counters stored in ConcurrentHashMap

Tags: java, multithreading, concurrency, guava, concurrenthashmap

Asked by wishihadabettername

I would like to collect some metrics from various places in a web app. To keep it simple, all these will be counters and therefore the only modifier operation is to increment them by 1.


The increments will be concurrent and frequent. The reads (dumping the stats) are a rare operation.


I was thinking to use a ConcurrentHashMap. The issue is how to increment the counters correctly. Since the map doesn't have an "increment" operation, I need to read the current value first, increment it, then put the new value in the map. Without more code, this is not an atomic operation.


Is it possible to achieve this without synchronization (which would defeat the purpose of the ConcurrentHashMap)? Do I need to look at Guava?


Thanks for any pointers.




P.S.
There is a related question on SO (Most efficient way to increment a Map value in Java), but it focuses on performance rather than multi-threading.


UPDATE
For those arriving here through searches on the same topic: besides the answers below, there's a useful presentation which incidentally covers the same topic. See slides 24-33.


Answered by ZhekaKozlov

In Java 8:


ConcurrentHashMap<String, LongAdder> map = new ConcurrentHashMap<>();

map.computeIfAbsent("key", k -> new LongAdder()).increment();
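
A self-contained sketch of this approach (the class and key names are illustrative), showing that concurrent increments are not lost:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

// Sketch: one LongAdder per metric, installed atomically via computeIfAbsent.
class MetricsSketch {
    static final ConcurrentHashMap<String, LongAdder> COUNTERS = new ConcurrentHashMap<>();

    static void increment(String key) {
        // computeIfAbsent is atomic, so at most one LongAdder is ever installed per key
        COUNTERS.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    static long get(String key) {
        LongAdder adder = COUNTERS.get(key);
        return adder == null ? 0L : adder.sum();
    }

    public static void main(String[] args) {
        // 10,000 parallel increments of the same counter; no updates are lost
        IntStream.range(0, 10_000).parallel().forEach(i -> increment("requests"));
        System.out.println(get("requests")); // prints 10000
    }
}
```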

Answered by Louis Wasserman

Guava's new AtomicLongMap (in release 11) might address this need.

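
For reference, a minimal sketch of what that looks like (assumes Guava on the classpath; the key names are illustrative):

```java
import com.google.common.util.concurrent.AtomicLongMap;

// Sketch: Guava's AtomicLongMap keeps a primitive long per key and
// handles the create-if-absent race internally.
class AtomicLongMapSketch {
    // Returns the count for "requests" after two increments.
    static long demo() {
        AtomicLongMap<String> counters = AtomicLongMap.create();
        counters.incrementAndGet("requests");   // absent keys start at 0
        counters.incrementAndGet("requests");
        counters.addAndGet("errors", 5);        // bulk add is also supported
        return counters.get("requests");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 2
    }
}
```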

Answered by Steven Schlansker

You're pretty close. Why don't you try something like a ConcurrentHashMap<Key, AtomicLong>? If your Keys (metrics) are unchanging, you could even just use a standard HashMap (they are threadsafe if read-only, but you'd be well advised to make this explicit with an ImmutableMap from Google Collections, or Collections.unmodifiableMap, etc.).


This way, you can use map.get(myKey).incrementAndGet() to bump statistics.

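
One caveat: map.get(myKey).incrementAndGet() throws a NullPointerException if the key was never registered. A sketch of a safe variant (the class name is illustrative) that installs the AtomicLong atomically on first use, assuming Java 8+ for computeIfAbsent:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: ConcurrentHashMap<String, AtomicLong> where keys are not known up front.
class AtomicLongCounters {
    private final ConcurrentHashMap<String, AtomicLong> map = new ConcurrentHashMap<>();

    long increment(String key) {
        // Atomically create the counter on first use, then bump it.
        return map.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }

    long get(String key) {
        AtomicLong v = map.get(key);
        return v == null ? 0L : v.get();
    }
}
```

On older JVMs the same effect can be had with the putIfAbsent pattern shown in a later answer.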

Answered by Tom Hawtin - tackline

Other than going with AtomicLong, you can do the usual CAS-loop thing:


private final ConcurrentMap<Key, Long> counts =
    new ConcurrentHashMap<Key, Long>();

public void increment(Key key) {
    // Fast path: the first increment installs the initial count.
    if (counts.putIfAbsent(key, 1L) == null) {
        return;
    }

    Long old;
    do {
        old = counts.get(key);
    } while (!counts.replace(key, old, old + 1)); // Assumes no removal.
}

(I've not written a do-while loop for ages.)


For small values the Long will probably be "cached". For larger values, it may require allocation. But the allocations are actually extremely fast (and you can cache further) - depends upon what you expect, in the worst case.

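
The "caching" mentioned here is the boxing cache: Long.valueOf (used by autoboxing) is guaranteed to reuse instances for values in [-128, 127], so low counts avoid allocation. A quick illustration:

```java
// Demonstrates the Long boxing cache: small values share instances,
// larger values are typically freshly allocated.
class LongCacheDemo {
    public static void main(String[] args) {
        System.out.println(Long.valueOf(100) == Long.valueOf(100));   // prints true (cached range)
        System.out.println(Long.valueOf(1000) == Long.valueOf(1000)); // typically false: new objects
    }
}
```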

Answered by f.ald

I did a benchmark to compare the performance of LongAdder and AtomicLong.


LongAdder had better performance in my benchmark: for 500 iterations using a map of size 100 (10 concurrent threads), the average time for LongAdder was 1270 ms while that for AtomicLong was 1315 ms.

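
Those numbers will vary by machine and contention level, and a rigorous comparison should use a harness such as JMH. Still, a rough, self-contained sketch of the kind of measurement described (thread and iteration counts are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

// Rough comparison sketch: hammer one counter of each type from parallel
// threads and report wall-clock time. Not a rigorous benchmark (no warmup,
// no JMH); intended only to illustrate the setup.
class AdderVsAtomic {
    static long sumWithLongAdder(int n) {
        LongAdder adder = new LongAdder();
        IntStream.range(0, n).parallel().forEach(i -> adder.increment());
        return adder.sum();
    }

    static long sumWithAtomicLong(int n) {
        AtomicLong counter = new AtomicLong();
        IntStream.range(0, n).parallel().forEach(i -> counter.incrementAndGet());
        return counter.get();
    }

    public static void main(String[] args) {
        int n = 5_000_000;
        long t0 = System.nanoTime();
        sumWithLongAdder(n);
        long t1 = System.nanoTime();
        sumWithAtomicLong(n);
        long t2 = System.nanoTime();
        System.out.printf("LongAdder: %d ms, AtomicLong: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```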

Answered by Vitalii

Had a necessity to do the same. I'm using ConcurrentHashMap + AtomicInteger. Also, a ReentrantReadWriteLock was introduced for atomic flush (very similar behavior).


Tested with 10 keys and 10 threads per key. Nothing was lost. I just haven't tried several flushing threads yet, but hope it will work.


The massive single-user-mode flush is torturing me... I want to remove the RWLock and break flushing down into small pieces. Tomorrow.


private ConcurrentHashMap<String,AtomicInteger> counters = new ConcurrentHashMap<String, AtomicInteger>();
private ReadWriteLock rwLock = new ReentrantReadWriteLock();

public void count(String invoker) {

    rwLock.readLock().lock();

    try{
        AtomicInteger currentValue = counters.get(invoker);
        // if entry is absent - initialize it. If other thread has added value before - we will yield and not replace existing value
        if(currentValue == null){
            // value we want to init with
            AtomicInteger newValue = new AtomicInteger(0);
            // try to put and get old
            AtomicInteger oldValue = counters.putIfAbsent(invoker, newValue);
            // if old value not null - our insertion failed, lets use old value as it's in the map
            // if old value is null - our value was inserted - lets use it
            currentValue = oldValue != null ? oldValue : newValue;
        }

        // counter +1
        currentValue.incrementAndGet();
    }finally {
        rwLock.readLock().unlock();
    }

}

/**
 * @return Map with counting results
 */
public Map<String, Integer> getCount() {
    // stop all updates (readlocks)
    rwLock.writeLock().lock();
    try{
        HashMap<String, Integer> resultMap = new HashMap<String, Integer>();
        // read all Integers to a new map
        for(Map.Entry<String,AtomicInteger> entry: counters.entrySet()){
            resultMap.put(entry.getKey(), entry.getValue().intValue());
        }
        // reset ConcurrentMap
        counters.clear();
        return resultMap;

    }finally {
        rwLock.writeLock().unlock();
    }

}