Linux: core dump file is not generated

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/7732983/

Date: 2020-08-05 06:38:14  Source: igfitidea

Core dump file is not generated

Tags: linux, gdb, coredump

Asked by Cyclone

Every time my application crashes, a core dump file is not generated. I remember that a few days ago, on another server, it was generated. I'm running the app using screen in bash like this:

#!/bin/bash
ulimit -c unlimited
while true; do ./server; done

As you can see, I'm using ulimit -c unlimited, which is important if I want to generate a core dump, but it still isn't generated when I get a segmentation fault. How can I make it work?

Accepted answer by Employed Russian

Make sure your current directory (at the time of the crash -- server may change directories) is writable. If the server calls setuid, the directory has to be writable by that user.

Also check /proc/sys/kernel/core_pattern. That may redirect core dumps to another directory, and that directory must be writable. More info here.
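
Both conditions can be checked from a shell; a minimal sketch (assuming the default behavior of dumping into the current directory):

```shell
# Where will the kernel write the core? A leading '|' in the pattern means
# the core is piped to a helper program (e.g. apport or systemd-coredump).
cat /proc/sys/kernel/core_pattern

# Is the current directory writable by this user?
test -w . && echo "current directory is writable"
```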

Answered by chown

Also, check to make sure you have enough disk space on /var/core or wherever your core dumps get written. If the partition is almost full or at 100% disk usage, that would be the problem. My core dumps average a few gigabytes, so you should be sure to have at least 5-10 GB available on the partition.
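
A sketch of the space check with df (/var/core is just this answer's example; substitute wherever your core_pattern points, falling back to the current directory):

```shell
# Show free space on the filesystem holding the dump directory.
df -h /var/core 2>/dev/null || df -h .
```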

Answered by Philipp Claßen

This link contains a good checklist of reasons why a core dump is not generated:

  • The core would have been larger than the current limit.
  • You don't have the necessary permissions to dump core (directory and file). Notice that core dumps are placed in the dumping process's current directory, which may differ from that of the parent process.
  • Verify that the file system is writable and has sufficient free space.
  • If a subdirectory named core exists in the working directory, no core will be dumped.
  • If a file named core already exists but has multiple hard links, the kernel will not dump core.
  • Verify the permissions on the executable: if the executable has the suid or sgid bit enabled, core dumps will be disabled by default. The same is the case if you have execute permission but no read permission on the file.
  • Verify that the process has not changed its working directory, core size limit, or dumpable flag.
  • Some kernel versions cannot dump processes with shared address space (AKA threads). Newer kernel versions can dump such processes but will append the pid to the file name.
  • The executable could be in a non-standard format that does not support core dumps. Each executable format must implement a core dump routine.
  • The segmentation fault could actually be a kernel Oops; check the system logs for any Oops messages.
  • The application called exit() instead of using the core dump handler.
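
Several items on this checklist can be inspected directly from a shell; a sketch (./server is the binary from the question, the rest are standard kernel knobs):

```shell
# suid/sgid binaries don't dump by default; check the mode bits and the
# kernel override (/proc/sys/fs/suid_dumpable: 0 = no dumps from setuid processes).
ls -l ./server 2>/dev/null
cat /proc/sys/fs/suid_dumpable 2>/dev/null

# The core size limit for this shell and its children.
ulimit -c
```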

Answered by kenorb

Check:

$ sysctl kernel.core_pattern

to see how your dumps are created (%e will be the process name, and %t will be the system time).

If you're on Ubuntu, your dumps are created by apport in /var/crash, but in a different format (edit the file to see it).

You can test it by:

sleep 10 &
killall -SIGSEGV sleep

If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.

Read more:

How to generate core dump file in Ubuntu

Please read more at:

https://wiki.ubuntu.com/Apport

Answered by Brenda J. Butler

Although this isn't going to be a problem for the person who asked the question, because they ran the program in a script together with the ulimit command, I'd like to document that the ulimit command is specific to the shell in which you run it (like environment variables). I spent way too much time running ulimit and sysctl and so on in one shell, while the command I wanted to dump core ran in another shell, wondering why the core file was not produced.

I will be adding it to my bashrc. The sysctl setting works for all processes once it is issued, but ulimit only works for the shell in which it is issued (and its descendants) -- not for other shells that happen to be running.
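
The scoping is easy to demonstrate; a minimal sketch:

```shell
# A limit set inside a child shell does not leak back to the parent.
bash -c 'ulimit -c 0; ulimit -c'   # child: prints 0
ulimit -c                          # parent: still whatever it was before
```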

Answered by user18853

Note: If you have written a crash handler yourself, then the core might not get generated. So search your code for something along these lines:

signal(SIGSEGV, <handler> );

If so, SIGSEGV will be handled by the handler and you will not get a core dump.
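
One way to hunt for such a handler in a codebase is plain grep; a sketch (the src/ path is hypothetical -- point it at wherever your sources live):

```shell
# Look for handlers installed via signal() or sigaction() that might
# swallow SIGSEGV before the kernel can dump core.
grep -rn "SIGSEGV" src/
```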

Answered by user18853

Remember, if you are starting the server from a service, it will run in a different bash session, so the ulimit set in your shell won't be effective there. Try putting this in the script itself:

ulimit -c unlimited
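
If the service is managed by systemd rather than an init script, a shell ulimit never applies at all; the limit has to come from the unit file. A sketch (the unit name myserver.service is hypothetical):

```ini
# /etc/systemd/system/myserver.service
[Service]
LimitCORE=infinity
```

After editing, run systemctl daemon-reload and restart the service.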

Answered by Srki Rakic

The answers given here cover most of the scenarios in which a core dump is not created. However, in my case, none of them applied. I'm posting this answer as an addition to the others.

If your core file is not being created for whatever reason, I recommend looking at /var/log/messages. There might be a hint in there as to why the core file was not created. In my case there was a line stating the root cause:

Executable '/path/to/executable' doesn't belong to any package

To work around this issue edit /etc/abrt/abrt-action-save-package-data.conf and change ProcessUnpackaged from 'no' to 'yes'.

ProcessUnpackaged = yes

This setting specifies whether to create cores for binaries that were not installed with the package manager.
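
The edit can be scripted; a sketch (run as root, and note that abrt configuration paths can vary by distro):

```shell
# Flip ProcessUnpackaged from 'no' to 'yes' so abrt keeps cores for
# binaries that did not come from a package.
sed -i 's/^ProcessUnpackaged.*/ProcessUnpackaged = yes/' \
    /etc/abrt/abrt-action-save-package-data.conf
```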

Answered by zapstar

If you call daemon() to daemonize a process, the current working directory changes to / by default. So if your program is a daemon, you should look for a core in the / directory, not in the directory of the binary.
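
A quick check for this case (assuming the default core_pattern of a plain core file in the working directory):

```shell
# A daemon()ized process has / as its working directory, so any core
# lands there rather than next to the binary.
ls -l /core* 2>/dev/null || echo "no core in /"
```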

Answered by tejus

If one is on a Linux distro (e.g. CentOS, Debian) then perhaps the most accessible way to find out about core files and related conditions is in the man page. Just run the following command from a terminal:

man 5 core
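
The conditions the man page describes can also be exercised end to end with a throwaway process; a sketch (where the core actually ends up still depends on your core_pattern, which may pipe it to a handler instead of writing a file):

```shell
ulimit -c unlimited
cd /tmp
sleep 30 &
kill -SEGV $!
wait $!              # exit status 128 + 11 = 139 indicates death by SIGSEGV
ls -l core* 2>/dev/null
```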