How many socket connections are possible on Linux?

Disclaimer: this page is an English mirror of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/651665/

Date: 2020-08-03 17:06:50

How many socket connections possible?

Tags: linux, sockets, tcp, max, server-hardware

Question by TheHippo

Has anyone an idea how many tcp-socket connections are possible on a modern standard root server? (There is in general less traffic on each connection, but all the connections have to be up all the time.)

EDIT: We will use a Linux server.

Answer by Eddie

This depends not only on the operating system in question, but also on configuration, potentially real-time configuration.

For Linux:

cat /proc/sys/fs/file-max

will show the current maximum number of file descriptors total allowed to be opened simultaneously. Check out http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html

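
For reference, here is a quick way to check both the system-wide and the per-process descriptor limits on Linux (the raised values in the comments are illustrative, not recommendations):

```shell
# System-wide cap on open file descriptors (each socket consumes one):
cat /proc/sys/fs/file-max

# Per-process cap -- often the tighter constraint for a server:
ulimit -n

# Raising both for a busy server (illustrative values; run as root):
#   sysctl -w fs.file-max=1000000
#   ulimit -n 1000000
```

Note that the per-process limit is usually hit long before the system-wide one.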
Answer by sean riley

Realistically for an application, more than 4000-5000 open sockets on a single machine becomes impractical. Just checking for activity on all the sockets and managing them starts to become a performance issue - especially in real-time environments.

Answer by Len Holgate

Which operating system?

For Windows machines, if you're writing a server to scale well, and therefore using I/O Completion Ports and async I/O, then the main limitation is the amount of non-paged pool that you're using for each active connection. This translates directly into a limit based on the amount of memory that your machine has installed (non-paged pool is a finite, fixed-size amount that is based on the total memory installed).

For connections that don't see much traffic you can make them more efficient by posting 'zero byte reads', which don't use non-paged pool and don't affect the locked pages limit (another potentially limited resource that may prevent you having lots of socket connections open).

Apart from that, well, you will need to profile, but I've managed to get more than 70,000 concurrent connections on a modestly specified (760MB memory) server; see here http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html for more details.

Obviously if you're using a less efficient architecture such as 'thread per connection' or 'select' then you should expect to achieve less impressive figures; but, IMHO, there's simply no reason to select such architectures for Windows socket servers.

Edit: see here http://blogs.technet.com/markrussinovich/archive/2009/03/26/3211216.aspx; the way that the amount of non-paged pool is calculated has changed in Vista and Server 2008, and there's now much more available.

Answer by cmeerw

On Linux you should be looking at using epoll for async I/O. It might also be worth fine-tuning socket buffers to not waste too much kernel space per connection.

I would guess that you should be able to reach 100k connections on a reasonable machine.

Answer by gbjbaanb

10,000? 70,000? Is that all? :)

FreeBSD is probably the server you want. Here's a little blog post about tuning it to handle 100,000 connections; it has had some interesting features like zero-copy sockets for some time now, along with kqueue to act as a completion port mechanism.

Solaris could handle 100,000 connections back in the last century! They say Linux would be better.

The best description I've come across is this presentation/paper on writing a scalable webserver. He's not afraid to say it like it is :)

Same for software: the cretins on the application layer forced great innovations on the OS layer. Because Lotus Notes keeps one TCP connection per client open, IBM contributed major optimizations for the "one process, 100,000 open connections" case to Linux.

And the O(1) scheduler was originally created to score well on some irrelevant Java benchmark. The bottom line is that this bloat benefits all of us.

Answer by fatmck

It depends on the application. If there are only a few packets from each client, 100K is very easy for Linux. An engineer on my team did a test years ago; the result showed that when no packets arrive from clients after the connections are established, Linux epoll can watch 400k fds for readability at a CPU usage level under 50%.

Answer by shenedu

I achieved 1600k concurrent idle socket connections, and at the same time 57k req/s, on a Linux desktop (16GB RAM, i7 2600 CPU). It's a single-threaded HTTP server written in C with epoll. Source code is on GitHub, with a blog post here.

Edit:

I did 600k concurrent HTTP connections (client & server), both on the same computer, with Java/Clojure. Detailed info is in a post; HN discussion: http://news.ycombinator.com/item?id=5127251

The cost of a connection (with epoll):

  • the application needs some RAM per connection
  • TCP buffers: 2 * 4k ~ 10k, or more
  • epoll needs some memory per file descriptor; from epoll(7):

Each registered file descriptor costs roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel.

Answer by teknopaul

A limit on the number of open sockets is configurable in the /proc file system:

cat /proc/sys/fs/file-max

The max for incoming connections in the OS is defined by integer limits.

Linux itself allows billions of open sockets.

To use the sockets you need an application listening, e.g. a web server, and that will use a certain amount of RAM per socket.

RAM and CPU will introduce the real limits. (As of 2017, think millions, not billions.)

1 million is possible, but not easy. Expect to use X gigabytes of RAM to manage 1 million sockets.

Outgoing TCP connections are limited by port numbers, ~65000 per IP. You can have multiple IP addresses, but not unlimited IP addresses. This is a limit in TCP, not Linux.

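
The relevant knob on Linux is the ephemeral port range, which bounds how many outgoing connections one source IP can make to a given destination IP:port (the widened values in the comment are illustrative):

```shell
# Ephemeral ports available for outgoing connections:
cat /proc/sys/net/ipv4/ip_local_port_range

# Widening the range (illustrative; run as root):
#   sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```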