How to make a distributed node.js application?

Note: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute the original authors (not me). Original: http://stackoverflow.com/questions/15425647/



Tags: node.js, http, scalability

Asked by MaiaVictor

Creating a node.js application is simple enough.


var app = require('express')();
app.get('/', function(req, res){
    res.send("Hello world!");
});
app.listen(3000); // listen on a port so the app actually serves requests

But suppose people became obsessed with your Hello World! application and exhausted your resources. How could this example be scaled up in practice? I don't understand it, because yes, you could open several node.js instances on different computers - but when someone accesses http://your_site.com/ it points directly at that specific machine, that specific port, that specific node process. So how?


Answered by Pascal Belloncle

There are many ways to deal with this, but it boils down to two things:


  1. being able to use more cores per server
  2. being able to scale beyond a single server.

node-cluster


For the first option, you can use node-cluster or the same solution as for the second option. node-cluster (http://nodejs.org/api/cluster.html) is essentially a built-in way to fork the node process into one master and multiple workers. Typically, you'd want 1 master and n-1 to n workers (n being your number of available cores).

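As a rough illustration (a sketch added here, not code from the answer), this is roughly what the Hello World! app looks like when wrapped in node-cluster; the port number and the restart-on-exit policy are assumptions:

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    // Master: fork one worker per available core.
    os.cpus().forEach(function () {
        cluster.fork();
    });
    // If a worker dies, replace it so capacity stays constant (assumed policy).
    cluster.on('exit', function (worker) {
        cluster.fork();
    });
} else {
    // Worker: each one runs its own copy of the app, sharing the same port.
    var app = require('express')();
    app.get('/', function (req, res) {
        res.send("Hello world from worker " + process.pid);
    });
    app.listen(3000);
}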

load balancers


The second option is to use a load balancer that distributes the requests amongst multiple workers (on the same server, or across servers).


Here you have multiple options as well; nginx and HAProxy are two common choices (both come up in the answers below). A rough sketch of what the balancer itself does follows below.

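The following is a minimal sketch of the round-robin idea, written with Node's built-in http module purely for illustration (in production you would use a dedicated balancer such as nginx or HAProxy; the backend hosts and ports here are made up):

var http = require('http');

// Two app instances assumed to be already running (e.g. node-cluster workers
// on this machine, or app servers elsewhere).
var backends = [
    { host: 'localhost', port: 3001 },
    { host: 'localhost', port: 3002 }
];
var next = 0;

// Clients only ever talk to this port; each request is forwarded round-robin.
http.createServer(function (req, res) {
    var target = backends[next++ % backends.length];
    var proxyReq = http.request({
        host: target.host,
        port: target.port,
        path: req.url,
        method: req.method,
        headers: req.headers
    }, function (proxyRes) {
        res.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(res);
    });
    req.pipe(proxyReq);
}).listen(8080);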

One more thing: once you start having multiple processes serving requests, you can no longer use in-process memory to store state; you need an additional service to store shared state. Redis (http://redis.io) is a popular choice, but by no means the only one.

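For example, here is a minimal sketch of keeping a shared hit counter in Redis instead of in process memory (it assumes the classic callback style of the redis npm module and a Redis server on localhost; it is not code from the answer):

var redis = require('redis');
var express = require('express');

var client = redis.createClient(); // connects to localhost:6379 by default
var app = express();

app.get('/', function (req, res) {
    // Every worker on every server increments the same counter in Redis,
    // so the value is consistent no matter which process handles the request.
    client.incr('hits', function (err, hits) {
        if (err) return res.status(500).send('could not reach redis');
        res.send('Hello world! You are visitor number ' + hits);
    });
});

app.listen(3000);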

If you use services such as Cloud Foundry, Heroku, and others, they set this up for you, so you only have to worry about your app's logic (and about using a service to deal with shared state).


Answered by I_Debug_Everything

I've been working with node for quite some time, but recently got the opportunity to try scaling my node apps. I've been researching this topic for some time now and have come across the following prerequisites for scaling:


  1. My app needs to be available on a distributed set of systems, each running multiple instances of node.

  2. Each system should have a load balancer that helps distribute traffic across the node instances.

  3. There should be a master load balancer that should distribute traffic across the node instances on distributed systems.

  4. The master balancer should always be running OR should have a dependable restart mechanism to keep the app stable.


For the above prerequisites, I've come across the following:


  1. Use modules like cluster to start multiple instances of node on a system.

  2. Always use nginx. It's one of the simplest mechanisms for creating a load balancer I've come across so far.

  3. Use HAProxy to act as a master load balancer. A few pointers on how to use it and keep it running forever.


Useful resources:


  1. Horizontal scaling node.js and websockets.
  2. Using cluster to take advantage of multiple cores.

I'll keep updating this answer as I progress.


Answered by Nick Mitchinson

The basic way to use multiple machines is to put them behind a load balancer and point all your traffic at the load balancer. That way, someone going to http://my_domain.com will hit the load balancer machine. The sole purpose of the load balancer (for this example anyway; in theory it could do more) is to delegate the traffic to a given machine running your application. This means that you can have x number of machines running your application, yet an external machine (in this case a browser) can go to the load balancer address and reach one of them. The client doesn't (and doesn't have to) know which machine is actually handling its request. If you are using AWS, it's pretty easy to set up and manage this. Note that Pascal's answer has more detail about your options here.


With Node specifically, you may want to look at the Node Cluster module. I don't really have a lot of experience with this module, however it should allow you to spawn multiple processes of your application on one machine, all sharing the same port. Also note that it's still experimental, and I'm not sure how reliable it will be.


Answered by Cartucho

I'd recommend taking a look at http://senecajs.org, a microservices toolkit for Node.js. It is a good starting point for beginners and for starting to think in terms of "services" instead of monolithic applications.

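To give a flavour of the style, here is a minimal sketch of Seneca's basic add/act pattern (the role/cmd names and the port are invented for illustration; this is only one way a first service might be wired up, not code from the answer):

var seneca = require('seneca')();

// Register an action: any message matching { role: 'math', cmd: 'sum' }.
seneca.add({ role: 'math', cmd: 'sum' }, function (msg, respond) {
    respond(null, { answer: msg.a + msg.b });
});

// Expose the action over the network (HTTP transport by default).
seneca.listen({ port: 10101 });

// Another process can then call it as a remote service:
// require('seneca')()
//     .client({ port: 10101 })
//     .act({ role: 'math', cmd: 'sum', a: 1, b: 2 }, console.log);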

Having said that, building distributed applications is hard, takes time to learn, takes a LOT of time to master, and usually you will face a lot of trade-offs between performance, reliability, maintenance, etc.
