Linux coredump is getting truncated
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/8768719/
Coredump is getting truncated
Asked by Vivek Goel
I am setting ulimit -c unlimited. And in the C++ program we are doing:
#include <sys/resource.h>

struct rlimit corelimit;
if (getrlimit(RLIMIT_CORE, &corelimit) != 0) {
    return -1;
}
// Request an unlimited core file size (both soft and hard limit).
corelimit.rlim_cur = RLIM_INFINITY;
corelimit.rlim_max = RLIM_INFINITY;
if (setrlimit(RLIMIT_CORE, &corelimit) != 0) {
    return -1;
}
But whenever the program crashes, the core dump it generates is truncated:
BFD: Warning: /mnt/coredump/core.6685.1325912972 is truncated: expected core file size >= 1136525312, found: 638976.
What could be the issue?
We are using Ubuntu 10.04.3 LTS
Linux ip-<ip> 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux
This is my /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
# - NOTE: group and wildcard limits are not applied to root.
# To apply a limit to the root user, <domain> must be
# the literal username root.
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#
#* soft core 0
#root hard core 100000
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
# ftp - chroot /ftp
#@student - maxlogins 4
#for all users
* hard nofile 16384
* soft nofile 9000
More Details
I am using the gcc optimization flag O3.
I am setting the thread stack size to 0.5 MB.
Answered by ElektroKraut
I remember there is a hard limit which can be set by the administrator, and a soft limit which is set by the user. If the soft limit exceeds the hard limit, the hard limit value is taken. I'm not sure this holds for every shell though; I only know it from bash.
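For reference, here is a minimal sketch (not part of the original answer, assuming a Linux/POSIX system) that prints the soft and hard core limits as seen by the process itself, which is a quick way to check which value is actually in effect:

#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    // Read this process's current core-file limits.
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    // rlim_cur is the soft limit, rlim_max the hard limit;
    // RLIM_INFINITY means "unlimited", and the soft limit cannot exceed the hard one.
    printf("soft core limit: %llu\nhard core limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}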
Answered by Adam Miller
Hard limits and soft limits have some specifics to them that can be a little hairy: see this about using sysctl to make the changes last.
There is a file you can edit that should make the limit sizes last, although there is probably a corresponding sysctl command that will do so...
Answered by JosephH
I had the same problem with core files getting truncated.
Further investigation showed that ulimit -f (aka file size, RLIMIT_FSIZE) also affects core files, so check that limit is also unlimited / suitably high. [I saw this on Linux kernel 3.2.0 / Debian wheezy.]
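To illustrate the point above, a rough sketch (not from the answer; raise_core_limits is a hypothetical helper) of a process lifting both limits that can truncate a core file:

#include <sys/resource.h>

// Hypothetical helper: lift the two limits that can truncate a core file.
// Raising a hard limit above its current value requires CAP_SYS_RESOURCE.
static bool raise_core_limits() {
    struct rlimit unlimited = { RLIM_INFINITY, RLIM_INFINITY };
    // RLIMIT_CORE caps the core file itself...
    if (setrlimit(RLIMIT_CORE, &unlimited) != 0)
        return false;
    // ...while RLIMIT_FSIZE caps any file the process writes, core dumps included.
    if (setrlimit(RLIMIT_FSIZE, &unlimited) != 0)
        return false;
    return true;
}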
Answered by Pavel Ryvintsev
A similar issue happened when I killed the program manually with kill -3. It happened simply because I did not wait long enough for the core file to finish generating.
Make sure that the file has stopped growing in size, and only then open it.
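A small sketch of that waiting step (the path and the one-second polling interval are placeholders, not part of the original answer): poll the core file's size until two consecutive checks agree before opening it.

#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

// Block until the file at 'path' stops growing (e.g. a core file still being written).
static void wait_until_stable(const char* path) {
    off_t last = -1;
    for (;;) {
        struct stat st;
        if (stat(path, &st) != 0) {   // give up if the file cannot be stat'ed
            perror("stat");
            return;
        }
        if (st.st_size == last)       // unchanged since the previous check
            break;
        last = st.st_size;
        sleep(1);                     // wait a second and look again
    }
}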
Answered by Roberto Leinardi
If you are using coredumpctl, a possible solution could be to edit /etc/systemd/coredump.conf and increase ProcessSizeMax and ExternalSizeMax:
[Coredump]
#Storage=external
#Compress=yes
ProcessSizeMax=20G
ExternalSizeMax=20G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
Answered by KernelPanic
This solution works when the automated bug reporting tool (abrt) is used.
After I tried everything that was already suggested (nothing helped), I found one more setting, which affects the dump size, in /etc/abrt/abrt.conf:
MaxCrashReportsSize = 5000
and increased its value.
Then I restarted the abrt daemon (sudo service abrtd restart), re-ran the crashing application and got a full core dump file.