High Frequency Trading in the JVM with Scala/Akka
Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/9951501/
Asked by Hugo Sereno Ferreira
Let's imagine a hypothetical HFT system in Java, requiring (very) low latency, with lots of short-lived small objects somewhat due to immutability (Scala?), thousands of connections per second, and an obscene number of messages passing around in an event-driven architecture (Akka and AMQP?).
For the experts out there, what would (hypothetically) be the best tuning for JVM 7? What type of code would make it happy? Would Scala and Akka be ready for this kind of system?
Note: there have been some similar questions, like this one, but I've yet to find one covering Scala (which has its own idiosyncratic footprint in the JVM).
Answered by Martin Thompson
It is possible to achieve very good performance in Java. However, the question needs to be more specific to provide a credible answer. Your main sources of latency will come from the following non-exhaustive list:
How much garbage you create and the work of the GC to collect and promote it. Immutable designs in my experience do not fit well with low latency. GC tuning needs to be a big focus.
Warm up the JVM so that classes are loaded and the JIT has had time to do its work (a minimal warm-up sketch follows this list).
Design your algorithms to be O(1) or at least O(log2 n), and have performance tests that assert this.
Your design needs to be lock-free and follow the "Single Writer Principle".
A significant effort needs to be put into understanding the whole stack and showing mechanical sympathy in its use.
Design your algorithms and data structures to be cache friendly. Cache misses these days are the biggest cost. This is closely related to process affinity, which if not set up correctly can result in significant cache pollution. This will involve sympathy for the OS and even some JNI code in some cases.
Ensure you have sufficient cores so that any thread that needs to run has a core available without having to wait.
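To make the warm-up point above concrete, here is a minimal sketch (the names and the iteration count are illustrative assumptions, not something from the answer): drive the real hot path with representative inputs well past the JIT's compile threshold, and only then start taking live traffic.

```scala
object Warmup {

  // Stand-in for the real message-handling hot path.
  private def handle(price: Long, qty: Long): Long = price * qty

  def main(args: Array[String]): Unit = {
    var blackhole = 0L
    var i = 0
    // Iterate well past the JIT's compile threshold so the hot path is compiled
    // (and the classes it touches are loaded) before the first live message arrives.
    while (i < 20000) {
      blackhole += handle(i, i + 1)
      i += 1
    }
    // Keep the accumulated result observable so the loop cannot be dead-code-eliminated.
    if (blackhole == 42) println(blackhole)
    // ... only now connect upstream and start accepting live traffic
  }
}
```

In practice the warm-up should exercise the same code paths the live flow will hit (codecs, order-book updates, lookups) with data of realistic shape; otherwise the JIT may compile, and later deoptimize, the wrong specializations.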
I recently blogged about a case study of such an exercise.
Answered by Andriy Plokhotnyuk
On my laptop the average latency of ping messages between Akka 2.3.7 actors is ~300 ns, and it is much less than the latency expected due to GC pauses on JVMs.
Code (incl. JVM options) & test results for Akka and other actors on an Intel Core i7-2640M are here.
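The linked benchmark has its own harness; below is only a minimal sketch of the same kind of ping-pong round-trip measurement, assuming the classic actor API of the Akka 2.3 line (later versions replace system.shutdown() with system.terminate()); the actor and message names are made up for illustration.

```scala
import java.util.concurrent.CountDownLatch
import akka.actor.{Actor, ActorRef, ActorSystem, Props}

case object Ping
case object Pong

// Replies to every Ping with a Pong.
class Echo extends Actor {
  def receive = { case Ping => sender() ! Pong }
}

// Sends `total` pings one at a time and reports the average round-trip time.
class Pinger(echo: ActorRef, total: Int, done: CountDownLatch) extends Actor {
  private var remaining = total
  private val start = System.nanoTime()
  echo ! Ping

  def receive = {
    case Pong =>
      remaining -= 1
      if (remaining > 0) echo ! Ping
      else {
        println(s"average round trip: ${(System.nanoTime() - start) / total} ns")
        done.countDown()
      }
  }
}

object PingPongLatency extends App {
  val system = ActorSystem("bench")
  val done = new CountDownLatch(1)
  val echo = system.actorOf(Props[Echo], "echo")
  // Treat the first run as JIT warm-up and repeat the measurement for steadier numbers.
  system.actorOf(Props(new Pinger(echo, 1000000, done)), "pinger")
  done.await()
  system.shutdown()
}
```

A round trip like this includes mailbox enqueue/dequeue and scheduling on both sides, so the median over several post-warm-up runs is a more useful figure than a single average.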
P.S. You can find lots of principles and tips for low-latency computing on Dmitry Vyukov's site and in Martin Thompson's blog.
Answered by Michael Dillon
You may find that use of a ring buffer for message passing will surpass what can be done with Akka. The main ring buffer implementation that people use on the JVM for financial applications is one called the Disruptor, which is carefully tuned for efficiency (power-of-two size), for the JVM (no GC, no locks), and for modern CPUs (no false sharing of cache lines).
Here is an intro presentation from a Scala point of view: http://scala-phase.org/talks/jamie-allen-sdisruptor/index.html#1, and there are links on the last slide to the original LMAX material.
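The Disruptor itself is the battle-tested implementation; what follows is only a toy single-producer/single-consumer sketch (names and types are illustrative) of two of the ideas it relies on: a power-of-two capacity so the slot index is a bit-mask rather than a modulo, and a single writer per counter so no locks or CAS loops are needed. It deliberately omits the cache-line padding, preallocated events, and batching that the real library adds.

```scala
// Toy single-producer / single-consumer ring: one thread calls offer,
// another calls poll. Not the Disruptor, just the core idea.
final class SpscRing[T <: AnyRef](capacity: Int) {
  require((capacity & (capacity - 1)) == 0, "capacity must be a power of two")
  private val mask = capacity - 1
  private val slots = new Array[AnyRef](capacity)
  @volatile private var head = 0L // next slot to read; written only by the consumer
  @volatile private var tail = 0L // next slot to write; written only by the producer

  // Producer side: returns false if the ring is full.
  def offer(e: T): Boolean = {
    val t = tail
    if (t - head == capacity) false
    else {
      slots((t & mask).toInt) = e
      tail = t + 1 // volatile write publishes the slot to the consumer
      true
    }
  }

  // Consumer side: returns null if the ring is empty.
  def poll(): T = {
    val h = head
    if (h == tail) null.asInstanceOf[T]
    else {
      val idx = (h & mask).toInt
      val e = slots(idx).asInstanceOf[T]
      slots(idx) = null // drop the reference so consumed messages can be collected
      head = h + 1 // volatile write frees the slot for the producer
      e
    }
  }
}
```

In use, the producer thread spins (or backs off) on offer until it succeeds and the consumer spins on poll; in the real Disruptor that waiting is handled by configurable wait strategies, and events are preallocated in the slots so steady-state publishing allocates nothing.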

