scala 为什么 Lift Web 框架是可扩展的?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/648964/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
why is the lift web framework scalable?
提问by
I want to know the technical reasons why the Lift web framework has high performance and scalability. I know it uses Scala, which has an actor library, but according to the install instructions its default configuration is with Jetty. So does it use the actor library to scale?
我想知道 Lift web 框架具有高性能和可扩展性的技术原因是什么?我知道它使用 Scala,Scala 有一个 actor 库,但根据安装说明,它的默认配置是使用 Jetty。那么它是否使用 actor 库来实现扩展?
Now, is the scalability built right out of the box? Just add additional servers and nodes and it will automatically scale; is that how it works? Can it handle 500,000+ concurrent connections with supporting servers?
那么,可扩展性是开箱即用的吗?只需添加额外的服务器和节点,它就会自动扩展,是这样工作的吗?在有支持服务器的情况下,它能否处理 500000+ 个并发连接?
I am trying to create a web services framework for the enterprise level, that can beat what is out there and is easy to scale, configurable, and maintainable. My definition of scaling is just adding more servers and you should be able to accommodate the extra load.
我正在尝试为企业级创建一个 Web 服务框架,它可以击败现有的框架,并且易于扩展、可配置和可维护。我对扩展的定义只是添加更多服务器,您应该能够适应额外的负载。
Thanks
谢谢
回答by
Lift's approach to scalability is within a single machine. Scaling across machines is a larger, tougher topic. The short answer there is: Scala and Lift don't do anything to either help or hinder horizontal scaling.
Lift 的可扩展性方法是在一台机器内。跨机器扩展是一个更大、更棘手的话题。简短的回答是:Scala 和 Lift 不会帮助或阻碍水平扩展。
As far as actors within a single machine, Lift achieves better scalability because a single instance can handle more concurrent requests than most other servers. To explain, I first have to point out the flaws in the classic thread-per-request handling model. Bear with me, this is going to require some explanation.
就单台机器内的 actor 而言,Lift 实现了更好的可扩展性,因为单个实例可以处理比大多数其他服务器更多的并发请求。为了解释这一点,我首先必须指出经典的每请求一线程处理模型的缺陷。请耐心听我说,这需要一些解释。
A typical framework uses a thread to service a page request. When the client connects, the framework assigns a thread out of a pool. That thread then does three things: it reads the request from a socket; it does some computation (potentially involving I/O to the database); and it sends a response out on the socket. At pretty much every step, the thread will end up blocking for some time. When reading the request, it can block while waiting for the network. When doing the computation, it can block on disk or network I/O. It can also block while waiting for the database. Finally, while sending the response, it can block if the client receives data slowly and TCP windows get filled up. Overall, the thread might spend 30 - 90% of its time blocked. It spends 100% of its time, however, on that one request.
典型的框架使用线程来为页面请求提供服务。当客户端连接时,框架从池中分配一个线程。该线程然后做三件事:它从套接字读取请求;它进行一些计算(可能涉及到数据库的 I/O);并在套接字上发送响应。在几乎每一步,线程最终都会阻塞一段时间。在读取请求时,它可以在等待网络时阻塞。在进行计算时,它可以阻塞磁盘或网络 I/O。它还可以在等待数据库时阻塞。最后,在发送响应时,如果客户端接收数据缓慢并且 TCP 窗口被填满,它会阻塞。总的来说,线程可能会花费 30 - 90% 的时间被阻塞。然而,它 100% 的时间都花在那个请求上。
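The three blocking phases above can be sketched as follows. This is a hypothetical simulation, not any real framework's code: the `Thread.sleep` calls stand in for network, database, and slow-client blocking, and the fixed pool stands in for the request-thread pool.
上面的三个阻塞阶段可以用如下代码来示意。这是一个假设性的模拟,并非任何真实框架的代码:`Thread.sleep` 代表网络、数据库和慢客户端造成的阻塞,固定大小的线程池代表请求线程池。

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object ThreadPerRequest {
  // Hypothetical stand-ins for the three phases; each one blocks the thread.
  def readRequest(): String = { Thread.sleep(5); "GET /" }          // blocked on network read
  def compute(req: String): String = { Thread.sleep(10); "200 OK" } // blocked on DB/disk I/O
  def writeResponse(res: String): Unit = Thread.sleep(5)            // blocked on slow client

  val completed = new AtomicInteger(0)

  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(4) // pool size caps concurrency
    // 8 concurrent clients but only 4 pool threads: requests 5-8 must wait,
    // even though the busy threads are mostly just blocked, not computing.
    for (_ <- 1 to 8) pool.execute { () =>
      val req = readRequest()
      val res = compute(req)
      writeResponse(res)
      completed.incrementAndGet()
    }
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    println(completed.get) // prints 8
  }
}
```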
A JVM can only support so many threads before it really slows down. Thread scheduling, contention for shared-memory entities (like connection pools and monitors), and native OS limits all impose restrictions on how many threads a JVM can create.
JVM 在它真正变慢之前只能支持这么多线程。线程调度、对共享内存实体(如连接池和监视器)的争用以及本机操作系统限制都对 JVM 可以创建的线程数量施加了限制。
Well, if the JVM is limited in its maximum number of threads, and the number of threads determines how many concurrent requests a server can handle, then the thread limit effectively caps the number of concurrent requests.
那么,如果 JVM 的最大线程数有限,而线程数又决定了服务器可以处理的并发请求数,那么线程数上限实际上就限制了并发请求数。
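A quick back-of-envelope calculation makes the cap concrete. The numbers here (500 threads, 200 ms per request) are hypothetical, chosen only to illustrate Little's law (L = λ · W):
一个简单的估算可以让这个上限更直观。这里的数字(500 个线程、每个请求 200 毫秒)是假设的,只是为了演示利特尔法则(L = λ · W):

```scala
object ThreadCapacity {
  // In a thread-per-request server, the thread cap IS the concurrency cap,
  // no matter how much of each request's time is spent merely blocked.
  def maxConcurrent(maxThreads: Int): Int = maxThreads

  // Throughput cap (requests/sec) by Little's law: L = lambda * W  =>  lambda = L / W
  def maxThroughput(maxThreads: Int, latencyMs: Double): Double =
    maxThreads / (latencyMs / 1000.0)

  def main(args: Array[String]): Unit = {
    // Hypothetical numbers: 500 threads, 200 ms per request.
    println(maxConcurrent(500))        // prints 500: hard concurrency cap
    println(maxThroughput(500, 200.0)) // prints 2500.0 requests/sec at best
  }
}
```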
(There are other issues that can impose lower limits---GC thrashing, for example. Threads are a fundamental limiting factor, but not the only one!)
(还有其他问题可以施加较低的限制——例如 GC 颠簸。线程是一个基本的限制因素,但不是唯一的一个!)
Lift decouples threads from requests. In Lift, a request does not tie up a thread. Rather, a thread does an action (like reading the request), then sends a message to an actor. Actors are an important part of the story, because they are scheduled via "lightweight" threads. A pool of threads gets used to process messages within actors. It's important to avoid blocking operations inside of actors, so these threads get returned to the pool rapidly. (Note that this pool isn't visible to the application, it's part of Scala's support for actors.) A request that's currently blocked on database or disk I/O, for example, doesn't keep a request-handling thread occupied. The request handling thread is available, almost immediately, to receive more connections.
Lift 将线程与请求解耦。在 Lift 中,请求不会占用线程。相反,线程执行一个动作(例如读取请求),然后向某个 actor 发送消息。actor 是其中的关键,因为它们是通过“轻量级”线程调度的。一个线程池被用来处理 actor 内的消息。避免在 actor 内部执行阻塞操作很重要,这样这些线程才能迅速返回池中。(请注意,这个池对应用程序不可见,它是 Scala 对 actor 支持的一部分。)例如,当前阻塞在数据库或磁盘 I/O 上的请求不会占用请求处理线程。请求处理线程几乎立即可用,以接收更多连接。
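The decoupling idea can be sketched with standard Scala futures and promises in place of Lift's actors (this is an illustrative analogy, not Lift's actual machinery): a "request" waiting on I/O holds no pool thread, so two worker threads can carry a hundred in-flight requests.
这种解耦思想可以用标准的 Scala Future/Promise 来示意,用它们代替 Lift 的 actor(这只是类比,并非 Lift 的真实实现):等待 I/O 的“请求”不占用任何池线程,因此两个工作线程就能承载一百个在途请求。

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{Await, ExecutionContext, Future, Promise}
import scala.concurrent.duration._

object DecoupledRequests {
  // Small shared pool standing in for the actor scheduler's threads.
  implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))
  // Timer thread standing in for an async I/O completion (e.g. a DB driver).
  private val timer = Executors.newSingleThreadScheduledExecutor()

  // A hypothetical non-blocking "database call": instead of sleeping on a
  // pool thread, it registers a completion callback and returns at once.
  def asyncDbCall(id: Int): Future[String] = {
    val p = Promise[String]()
    val complete: Runnable = () => { p.success(s"row-$id"); () }
    timer.schedule(complete, 50, TimeUnit.MILLISECONDS)
    p.future
  }

  // n in-flight "requests" share just 2 worker threads, because no request
  // holds a thread while waiting for its I/O to complete.
  def run(n: Int): Int = {
    val results = Future.sequence((1 to n).toList.map { id =>
      asyncDbCall(id).map(row => s"response($row)") // short, non-blocking step
    })
    Await.result(results, 5.seconds).size
  }

  def main(args: Array[String]): Unit = {
    println(run(100)) // prints 100: far more in-flight requests than threads
    timer.shutdown()
  }
}
```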
This method for decoupling requests from threads allows a Lift server to have many more concurrent requests than a thread-per-request server. (I'd also like to point out that the Grizzly library supports a similar approach without actors.) More concurrent requests means that a single Lift server can support more users than a regular Java EE server.
这种将请求与线程解耦的方法使得一个 Lift 服务器可以处理比每请求一线程的服务器多得多的并发请求。(我还想指出,Grizzly 库不用 actor 也支持类似的方法。)更多的并发请求意味着单个 Lift 服务器可以支持比常规 Java EE 服务器更多的用户。
回答by Dre
@mtnyguard
"Scala and Lift don't do anything to either help or hinder horizontal scaling"
“Scala 和 Lift 不会帮助或阻碍水平扩展”
Ain't quite right. Lift is a highly stateful framework. For example, if a user requests a form, then he can only post the request to the same machine where the form came from, because the form-processing action is saved in the server state.
这不太对。Lift 是一个高度有状态的框架。例如,如果用户请求一个表单,那么他只能把该请求提交到表单来源的同一台机器,因为处理表单的操作保存在服务器状态中。
And this is actually a thing which hinders scalability in a way, because this behaviour is inconsistent with the shared-nothing architecture.
这实际上在某种程度上阻碍了可扩展性,因为这种行为与无共享(shared-nothing)架构不一致。
No doubt that Lift is highly performant, but performance and scalability are two different things. So if you want to scale horizontally with Lift, you have to define sticky sessions on the load balancer, which will redirect a user during a session to the same machine.
毫无疑问,Lift 性能很高,但性能和可扩展性是两回事。因此,如果您想用 Lift 进行水平扩展,就必须在负载均衡器上定义粘性会话,在会话期间把用户重定向到同一台机器。
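Why the session must stick can be shown with a toy model (this is not Lift's real API): the form's submit handler is a closure stored in one server's memory, keyed by an opaque token embedded in the rendered form, so a second instance cannot resolve the token.
为什么会话必须“粘”在一台机器上,可以用一个玩具模型来说明(这并非 Lift 的真实 API):表单的提交处理器是一个保存在某台服务器内存中的闭包,以渲染到表单里的一个不透明令牌为键,因此第二个实例无法解析该令牌。

```scala
import scala.collection.mutable

// Toy sketch of a Lift-style stateful form (hypothetical, not Lift's API).
class ToyServer {
  private val callbacks = mutable.Map.empty[String, String => String]
  private var counter = 0

  // Rendering a form registers a server-side closure and returns its token,
  // which would be embedded in the rendered HTML.
  def renderForm(): String = {
    counter += 1
    val token = s"F$counter"
    callbacks(token) = input => s"processed '$input' on this instance"
    token
  }

  // Submitting looks the token up in local memory; on another node it's gone.
  def submit(token: String, input: String): Option[String] =
    callbacks.remove(token).map(handler => handler(input))
}

object StickySessionDemo {
  def main(args: Array[String]): Unit = {
    val serverA = new ToyServer
    val serverB = new ToyServer // a second node behind the load balancer
    val token = serverA.renderForm()
    println(serverA.submit(token, "hello")) // Some(...): same machine works
    println(serverB.submit(token, "hello")) // None: the state lives only on A
  }
}
```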
回答by Saem
Jetty may be the point of entry, but an actor ends up servicing the request. I suggest having a look at the twitter-esque example, 'skitter', to see how you would be able to create a very scalable service. IIRC, this is one of the things that made the Twitter people take notice.
Jetty 可能是入口,但最终是由 actor 来处理请求。我建议看看类似 Twitter 的示例“skitter”,了解如何创建一个非常可扩展的服务。IIRC,这正是让 Twitter 的人注意到它的原因之一。
回答by Sai Venkat
I really like @dre's reply as he correctly states the statefulness of lift being a potential problem for horizontal scalability.
我真的很喜欢 @dre 的回复,因为他正确地指出了 Lift 的有状态性是水平可扩展性的一个潜在问题。
The problem - Instead of me describing the whole thing again, check out the discussion (Not the content) on this post. http://javasmith.blogspot.com/2010/02/automagically-cluster-web-sessions-in.html
问题 - 而不是我再次描述整个事情,请查看这篇文章的讨论(不是内容)。http://javasmith.blogspot.com/2010/02/automagically-cluster-web-sessions-in.html
The solution would be, as @dre said, sticky-session configuration on the front load balancer and adding more instances. But since request handling in Lift is done with a thread + actor combination, you can expect one instance to handle more requests than normal frameworks. This gives it an edge over having sticky sessions in other frameworks, i.e. each individual instance's capacity to process more may help you to scale.
解决方案就像 @dre 所说的那样:在前端负载均衡器上配置粘性会话,并添加更多实例。但由于 Lift 中的请求处理是由线程 + actor 组合完成的,您可以期望一个实例比普通框架处理更多请求。这相对于在其他框架中使用粘性会话是一种优势,即单个实例更强的处理能力可以帮助您扩展。
- Also, you have Akka-Lift integration, which would be another advantage here.
- 另外,还有 Akka 与 Lift 的集成,这将是另一个优势。

