Java AtomicBoolean vs synchronized block
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/3848070/
AtomicBoolean vs synchronized block
Asked by biasedbit
I was trying to cut thread contention in my code by replacing some synchronized blocks with AtomicBoolean.
Here's an example with synchronized:
public void toggleCondition() {
    synchronized (this.mutex) {
        if (this.toggled) {
            return;
        }
        this.toggled = true;
        // do other stuff
    }
}
And the alternative with AtomicBoolean:
public void toggleCondition() {
    if (!this.condition.getAndSet(true)) {
        // do other stuff
    }
}
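(The fields referenced by the two snippets are not shown in the question; the declarations below are one plausible assumption about what they look like.)

import java.util.concurrent.atomic.AtomicBoolean;

// Assumed declarations for the fields used above (not shown in the original question).
public class ConditionHolder {
    private final Object mutex = new Object();                        // lock for the synchronized version
    private boolean toggled = false;                                  // flag guarded by mutex
    private final AtomicBoolean condition = new AtomicBoolean(false); // flag for the AtomicBoolean version
}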
Taking advantage of AtomicBoolean's CAS property should be way faster than relying on synchronization, so I ran a little micro-benchmark.
For 10 concurrent threads and 1,000,000 iterations, AtomicBoolean comes in only slightly faster than the synchronized block.
Average time (per thread) spent on toggleCondition() with AtomicBoolean: 0.0338
Average time (per thread) spent on toggleCondition() with synchronized: 0.0357
I know micro-benchmarks are worth what they're worth but shouldn't the difference be higher?
Accepted answer by Stephen C
I know micro-benchmarks are worth what they're worth but shouldn't the difference be higher?
I think the problem is in your benchmark. It looks like each thread is going to toggle the condition just once. The benchmark will spend most of its time creating and destroying threads. The chance that any given thread will be toggling a condition at the same time as any other thread is toggling it will be close to zero.
An AtomicBoolean has a performance advantage over primitive locking when there is significant contention for the condition. For an uncontended condition, I'd expect to see little difference.
Change your benchmark so that each thread toggles the condition a few million times. That will guarantee lots of lock contention, and I expect you will see a performance difference.
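A minimal sketch of the kind of harness being suggested (the class name, iteration count, and timing code are my assumptions, not the original benchmark); swapping toggleCondition() for the synchronized version gives the comparison point:

import java.util.concurrent.atomic.AtomicBoolean;

public class ToggleBenchmark {
    private static final int THREADS = 10;
    private static final int ITERATIONS = 5_000_000; // "a few million" toggles per thread

    private final AtomicBoolean condition = new AtomicBoolean(false);

    public void toggleCondition() {
        if (!condition.getAndSet(true)) {
            // do other stuff
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final ToggleBenchmark bench = new ToggleBenchmark();
        Thread[] workers = new Thread[THREADS];
        long start = System.nanoTime();
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                // every thread hammers the same flag, which guarantees contention
                for (int j = 0; j < ITERATIONS; j++) {
                    bench.toggleCondition();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for all workers before stopping the clock
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Total time with AtomicBoolean: " + elapsedMs + " ms");
    }
}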
EDIT
If the scenario you intended to test only involved one toggle per thread (and 10 threads), then it is unlikely that your application would experience contention, and therefore it is unlikely that using AtomicBoolean will make any difference.
At this point, I should ask why you are focusing your attention on this particular aspect. Have you profiled your application and determined that you really have a lock contention problem? Or are you just guessing? Have you been given the standard lecture on the evils of premature optimization yet?
Answered by Stephen C
Looking at the actual implementation (and looking at the code is way better than some microbenchmark, which is less than useless in Java or any other GC runtime), I am not surprised it isn't "significantly faster". It is basically doing an implicit synchronized section.
/**
 * Atomically sets to the given value and returns the previous value.
 *
 * @param newValue the new value
 * @return the previous value
 */
public final boolean getAndSet(boolean newValue) {
    for (;;) {
        boolean current = get();
        if (compareAndSet(current, newValue))
            return current;
    }
}
/**
 * Atomically sets the value to the given updated value
 * if the current value {@code ==} the expected value.
 *
 * @param expect the expected value
 * @param update the new value
 * @return true if successful. False return indicates that
 * the actual value was not equal to the expected value.
 */
public final boolean compareAndSet(boolean expect, boolean update) {
    int e = expect ? 1 : 0;
    int u = update ? 1 : 0;
    return unsafe.compareAndSwapInt(this, valueOffset, e, u);
}
And then this from com.sun.Unsafe.java
/**
 * Atomically update Java variable to <tt>x</tt> if it is currently
 * holding <tt>expected</tt>.
 * @return <tt>true</tt> if successful
 */
public final native boolean compareAndSwapInt(Object o, long offset,
                                               int expected,
                                               int x);
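For illustration (my example, not part of the original answer), the observable behaviour of those two methods is just:

import java.util.concurrent.atomic.AtomicBoolean;

public class CasDemo {
    public static void main(String[] args) {
        AtomicBoolean flag = new AtomicBoolean(false);
        System.out.println(flag.compareAndSet(false, true)); // true:  value was false, now set to true
        System.out.println(flag.compareAndSet(false, true)); // false: value is already true, unchanged
        System.out.println(flag.getAndSet(false));           // true:  returns the previous value, then sets false
    }
}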
There is no magic in this; resource contention is a bitch and very complex. That is why using final variables and working with immutable data is so prevalent in real concurrent languages like Erlang. All this complexity that eats CPU time is bypassed, or at least shifted somewhere less complex.
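As a small aside of my own (not from the original answer), the immutable-data point looks like this in Java: an object whose fields are all final can be shared between threads with no locking at all, because its state can never change after construction.

// Immutable value object: every field is final and assigned exactly once,
// so instances are safe to share across threads without any synchronization.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }

    // "Mutation" produces a new instance instead of changing shared state.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}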