Linux: how to set a global nofile limit to avoid the "too many open files" error?

Note: this page is a mirror of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute the original authors (not me). Original: http://stackoverflow.com/questions/20901518/

Tags: linux, ubuntu, linux-kernel, upstart

Asked by leiyonglin

I have a websocket service. It is strange that it hits the error "too many open files", even though I have set the system configuration:

/etc/security/limits.conf
*               soft    nofile          65000
*               hard    nofile          65000

/etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000

ulimit -n
# output: 65000

So I think my system configuration is right.

My service is managed by supervisor. Is it possible that supervisor imposes its own limits?

Checking a process started by supervisor:

cat /proc/815/limits
Max open files            1024                 4096                 files 

Checking a process started manually:

cat /proc/900/limits
Max open files            65000                 65000                 files 

The cause is that the service is managed by supervisor. If I restart supervisor and its child processes, "Max open files" is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.

Maybe supervisor's start level is too early, so the system configuration has not taken effect yet when supervisor starts?

edit:

System: Ubuntu 12.04, 64-bit

It is not a supervisor problem. All processes started automatically after a system reboot do not use the system configuration (max open files = 1024), but after a manual restart they are fine.

update

Maybe the problem is:

Now the question is: how do I set a global nofile limit? I don't want to set the nofile limit in every upstart script that needs it.

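(For reference, the per-job stanza I want to avoid adding to every upstart job looks roughly like this; the job file name and exec line below are only placeholders:)

# /etc/init/mywebsocket.conf  -- hypothetical upstart job
# soft and hard nofile limits that apply only to this job
limit nofile 65000 65000
exec /usr/local/bin/mywebsocket-server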

Answered by Giovanni Silva

Try editing /etc/sysctl.conf and adjusting the limit globally. For example:

Forces the limit to 100000 files.

vi /etc/sysctl.conf

Append:

fs.file-max = 100000

Save and close the file. Users need to log out and log back in for the changes to take effect, or just type the following command:

sysctl -p

Answered by Dimitrios

You can find your limit with:

 cat /proc/sys/fs/file-max

or sysctl -a | grep file

Change it in the /proc/sys/fs/file-max file, or with:

sysctl -w fs.file-max=100000

Answered by Anand Bayarbileg

I think this has nothing to do with open files (it is just a misleading error message); a port that your application needs is already in use. 1. Try to find the process ID with the command:

ps aux

2. Kill the process (for example, PID 8572) with the command:

sudo kill -9 8572

3. Start your application again.

Answered by OkezieE

I fixed this issue by setting the limits for all users in this file:

$ cat /etc/security/limits.d/custom.conf
* hard nofile 550000
* soft nofile 550000

REBOOT THE SERVER after setting the limits.

VERY IMPORTANT: The /etc/security/limits.d/ folder contains user-specific limits; in my case, hadoop 2 (cloudera) related limits. These user-specific limits override the global limits, so if your limits are not being applied, be sure to check the user-specific limits in the folder /etc/security/limits.d/ and in the file /etc/security/limits.conf.

CAUTION: Setting user-specific limits is the way to go in all cases. Setting the global (*) limit should be avoided. In my case it was an isolated environment and I just needed to eliminate the file-limit issue from my experiment.

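Following that advice, a user-specific entry would look roughly like this ("appuser" below is only a hypothetical service account; use the user your service actually runs as):

$ cat /etc/security/limits.d/appuser.conf
appuser soft nofile 65535
appuser hard nofile 65535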

Hope this saves someone some hair - as I spent too much time pulling my hair out chunk by chunk!

Answered by Luqmaan

I had the same problem. Even though ulimit -Sn showed my new limit, running supervisorctl restart all and cat-ing the /proc files did not show the new limits.

The problem is that supervisord itself still has the original limits, and therefore any child processes it creates still have the original limits.

So, the solution is to kill and restart supervisord.

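As a rough sketch on Ubuntu, assuming supervisor was installed as a system package (the service name and paths may differ on your setup):

sudo service supervisor stop
sudo service supervisor start

Note that supervisorctl restart all only restarts the managed programs; it does not restart the supervisord daemon itself, which is why the raised limits were not picked up.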

Answered by Dan

To any weary googlers: you might be looking for the minfds setting in the supervisor config. This setting seems to take effect for both the supervisord process and its children. I tried a number of other strategies, including launching a shell script that set the limits before executing the actual program, but this was the only thing that worked.

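A sketch of what that looks like in the supervisor config (the file path varies by distribution, and the value below is only an example):

; /etc/supervisor/supervisord.conf
[supervisord]
minfds=65535

After editing the config, restart the supervisord daemon itself (not just its children) so the new descriptor limit is applied.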

Answered by lextoumbourou

Luqmaan's answer was the ticket for me, except for one small caveat: the * wildcard doesn't apply to root in Ubuntu (as described in limits.conf's comments).

You need to explicitly set the limit for root if supervisord is started as the root user:

vi /etc/security/limits.conf

root soft nofile 65535
root hard nofile 65535
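One way to check that the new values are picked up after logging in again (a sketch; whether limits.conf is applied here depends on the PAM configuration of your login path):

sudo -i
ulimit -Sn   # soft nofile limit for the root shell
ulimit -Hn   # hard nofile limit for the root shell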

Answered by Maicon Santana

You can set the limit for the service this way:

Add LimitNOFILE=65536 in /etc/systemd/system/{NameofService}.service

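As a sketch, the directive belongs in the [Service] section of the unit file ({NameofService} remains a placeholder for your unit name):

[Service]
LimitNOFILE=65536

Then reload systemd and restart the unit:

sudo systemctl daemon-reload
sudo systemctl restart {NameofService}.service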

Answered by Paran

Temporarily it can be solved with the following command:

ulimit -n 2048

Here 2048 (or whatever value you require) is the maximum number of open files. For a permanent solution you need to configure two files. For CentOS/RHEL 5 or 6:

/etc/security/limits.conf
/etc/security/limits.d/90-nproc.conf

For CentOS/RHEL 7

/etc/security/limits.conf
/etc/security/limits.d/20-nproc.conf

Add or modify the following lines in the two files above, where test is a specific user:

test soft nproc 2048
test hard nproc 16384

soft: can be changed by the user, but may not exceed the hard limit.
hard: a cap on the soft limit, set by the superuser and enforced by the kernel.

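As a quick shell illustration of the two kinds of limits (2048 and 4096 below are example values):

ulimit -Sn        # show the current soft nofile limit
ulimit -Hn        # show the current hard nofile limit
ulimit -Sn 2048   # raise the soft limit; it cannot exceed the hard limit
ulimit -Hn 4096   # lower the hard limit (only root can raise it)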

Answered by Sushmita Mallick

prlimit --nofile=softlimit:hardlimit did the trick for me.

A little background about soft and hard limits:

You can set both soft and hard limits. The system will not allow a user to exceed his or her hard limit. However, a system administrator may set a soft limit which can be temporarily exceeded by the user. The soft limit must be less than the hard limit.

Once the user exceeds the soft limit, a timer begins. While the timer is ticking, the user is allowed to operate above the soft limit but cannot exceed the hard limit. Once the user goes below the soft limit, the timer gets reset. However, if the user's usage remains above the soft limit when the timer expires, the soft limit is enforced as a hard limit.

Ref: https://docs.oracle.com/cd/E19455-01/805-7229/sysresquotas-1/index.html

In my case, increasing the soft limit did the trick. I would suggest talking to your system admin before increasing the hard limit.

See the prlimit command syntax reference here. Before you set the soft limit, be sure to check what the current hard limit is with prlimit -n; that is the maximum you can set it to.

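A sketch of typical prlimit invocations (the PID, the command name, and the 4096:8192 values below are only examples):

prlimit --nofile                        # show the current soft/hard nofile limits
prlimit --pid 1234 --nofile=4096:8192   # change the limits of a running process
prlimit --nofile=4096:8192 ./myserver   # run a command with the given limits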

If you want to keep your config permanent on a Linux server, you can edit /etc/security/limits.conf as others have suggested. If that does not work (it was not editable on my server), set it in .bashrc.

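A sketch of the .bashrc fallback (the value is only an example; a non-root shell can only raise its soft limit up to the existing hard limit):

# appended to ~/.bashrc
ulimit -n 4096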