Node.js + Express: app won't start listening on port 80

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow. Original: http://stackoverflow.com/questions/7929563/

Date: 2020-09-02 14:42:12  Source: igfitidea

Node.js + Express: app won't start listening on port 80

Tags: node.js, amazon-ec2, port, express

Asked by Vitaly

I create and launch an app like this:

express -s -t ejs
npm install express
npm install ejs
node app.js

and it works (on port 3000). But when I go and change the port to 80, then running node app.js outputs this:

node.js:198
throw e; // process.nextTick error, or 'error' event on first tick
          ^
TypeError: Cannot call method 'getsockname' of null
at HTTPServer.address (net.js:746:23)
at Object.<anonymous> (/var/www/thorous/app.js:35:67)
at Module._compile (module.js:432:26)
at Object..js (module.js:450:10)
at Module.load (module.js:351:31)
at Function._load (module.js:310:12)
at Array.<anonymous> (module.js:470:10)
at EventEmitter._tickCallback (node.js:190:26)

This works too on my laptop, but not on my Amazon EC2 instance, where port 80 is open. Can't figure out what's wrong. Any tips?

Answered by Michael Connor

If you really want to do this you can forward traffic on port 80 to 3000.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
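A few companion commands may help with this approach (a sketch; the iptables-persistent package and rules path are Debian/Ubuntu conventions and may differ on your distribution):

```shell
# The redirect rule from the answer above, held in a variable for reuse:
RULE='-t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000'
# Apply it (requires root):
#   sudo iptables $RULE
# List the NAT rules to confirm it took effect:
#   sudo iptables -t nat -L PREROUTING -n --line-numbers
# Note: the rule does not survive a reboot; on Debian/Ubuntu the
# iptables-persistent package can save it:
#   sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
echo "iptables $RULE"
```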

Answered by Thomas Fritz

Are you starting your app as root? Because lower port numbers require root privileges. Maybe a sudo node app.js works?

BUT, you should NOT run any node.js app on port 80 with root privileges!!! NEVER!

My suggestion is to run nginx in front as a reverse proxy to your node.js app running on e.g. port 3000

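A minimal nginx server block for that setup might look like the following sketch (the domain and file path are assumptions; see the nginx proxy module documentation for production-grade settings):

```nginx
# /etc/nginx/sites-available/myapp -- hypothetical path and names
server {
    listen 80;
    server_name example.com;              # assumed domain

    location / {
        proxy_pass http://127.0.0.1:3000; # the Node app on port 3000
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

nginx binds to port 80 itself (it starts as root and drops privileges), so the Node app stays unprivileged.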
Answered by CoolAJ86

Keep it Stupid Simple:

  • netcap
  • systemd
  • VPS

On a normal VPS (such as Digital Ocean, Linode, Vultr, or Scaleway), where the disk is persistent, use "netcap". This will allow a non-root user to bind to privileged ports.

sudo setcap 'cap_net_bind_service=+ep' $(which node)

TADA! Now you can run node ./server.js --port 80 as a normal user!

Aside:

You can also use systemd to stop and start your service. Since systemd is sometimes a p.i.t.a., I wrote a wrapper script in Go that makes it really easy to deploy node projects:

# Install
curl https://rootprojects.org/serviceman/dist/linux/amd64/serviceman -o serviceman
chmod +x ./serviceman
sudo serviceman /usr/local/bin
# Use
cd ./my/node/project
sudo serviceman --username $(whoami) add npm start

or, if your server isn't called 'server.js' (the de facto standard), or you need extra options:

cd ./my/node/project
sudo serviceman --username $(whoami) add node ./my-server-thing.js -- --my-options

All that does is create your systemd unit file for you with sane defaults. I'd recommend you check out the systemd documentation as well, but it is a bit hard to grok, and there are probably more confusing and otherwise bad tutorials than there are simple and good ones.

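For reference, a hand-written unit file of the kind such a tool generates might look like this sketch (the paths, service name, and user are assumptions):

```ini
# /etc/systemd/system/my-node-app.service -- hypothetical path and names
[Unit]
Description=My Node app
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/my-node-app
ExecStart=/usr/local/bin/node server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, run sudo systemctl daemon-reload and then sudo systemctl enable --now my-node-app.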
Ephemeral Instances (i.e. EC2) are not for long-running servers

Generally, when people use EC2, it's because they don't care about individual instance uptime reliability - they want a "scalable" architecture, not a persistent architecture.

In most of these cases it isn't actually intended that the virtualized server persist in any sort of way. In these types of "ephemeral" (temporary) environments a "reboot" is intended to be about the same as reinstalling from scratch.

You don't "setup a server" but rather "deploy an image". The only reason you'd log into such a server is to prototype or debug the image you're creating.

The "disks" are volatile, the IP addresses are floating, the images behave the same on each and every boot. You're also not typically utilizing a concept of user accounts in the traditional sense.

Therefore: although it is true that, in general, you shouldn't run a service as root, in the types of situations where you typically use volatile virtualization... it doesn't matter that much. You have a single service and a single user account, and as soon as the instance fails or is otherwise "rebooted" (or you spin up a new instance of your image), you have a fresh system all over again (which does mean that any vulnerabilities in the image persist).

Firewalls: Ephemeral vs VPS

Stuff like EC2 is generally intended to be private-only, not public-facing. These are "cloud service" systems. You're expected to use a dozen different services and auto-scale. As such, you'd use the load balancer service to forward ports to your EC2 group. Typically the default firewall for an instance will deny all public-network traffic. You have to go into the firewall management and make sure the ports you intend to use are actually open.

Sometimes VPS providers have "enterprise" firewall configurators, but more typically you just get raw access to the virtual machine. Since only the ports you actually listen on are exposed to the outside world in the first place (by default there typically aren't random services running), you may not get much additional benefit from a firewall. Certainly a good idea, but not a requirement for what you need to do.

Don't use EC2 as a VPS

The use case you have above may be a much better candidate for a traditional VPS service (as mentioned above: Digital Ocean, Linode, Vultr, Scaleway, etc) which are far easier to use and have much less management hassle to get started. All you need is a little bash CLI know-how.

And, as an extra bonus, you don't have to guess at what the cost will be. They tell you in simple $/month rather than ¢/cpu/hour/gb/internal-network/external-network/etc - so when something goes wrong you get a warning via email or in your admin console rather than an unexpected bill for $6,527.

Bottom line: If you choose to use EC2 and you're not a "DevOps" expert with an accountant on staff... you're gonna have a hard time.

Answered by Daniel Elliott

Perhaps something else was already running on port 80?

Perhaps do a port scan and confirm that it is not being used already?

nc -z <<your IP>> 80