Linux: MySQL Crashing on SQL

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must follow the same CC BY-SA terms and attribute it to the original authors (not me), linking the original: http://stackoverflow.com/questions/6408302/

MySQL Crashing on SQL

Tags: mysql, sql, linux, crash

Asked by ovidiu8

For the past 4 days MySQL has kept crashing while running scripts, roughly once a day.

This is the error log:

key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=39
max_threads=100
threads_connected=34
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 336508 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x92025f38
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x95dce36c thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d) [0x6b65ad]
/usr/sbin/mysqld(handle_segfault+0x494) [0x3823d4]
[0x110400]
/usr/sbin/mysqld(MYSQLparse(void*)+0x6aa) [0x3b42da]
/usr/sbin/mysqld(mysql_parse(THD*, char const*, unsigned int, char const**)+0x23e) [0x39ce6e]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xf35) [0x39df25]
/usr/sbin/mysqld(do_command(THD*)+0xf3) [0x39f0e3]
/usr/sbin/mysqld(handle_one_connection+0x2a0) [0x38dbd0]
/lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0x93d96e]
/lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xd78a4e]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x86982ef4 is an invalid pointer
thd->thread_id=2906
thd->killed=NOT_KILLED

The box runs on 2 GB of RAM; by my calculations it shouldn't be hitting any memory limit. I've specifically lowered the memory requirements to a minimum but am still getting the errors.

mysql> show variables like "sort_buffer%";
+------------------+---------+
| Variable_name    | Value   |
+------------------+---------+
| sort_buffer_size | 1048576 |
+------------------+---------+
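
For reference, the estimate from the error log can be re-computed on the running server. A rough sketch (it substitutes max_connections for the log's max_threads):

SELECT (@@global.key_buffer_size
        + (@@global.read_buffer_size + @@global.sort_buffer_size) * @@global.max_connections)
       / 1024 / 1024 AS approx_peak_mb;
-- with the values above this is 128M + (1M + 1M) * 100, i.e. roughly 330 MB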

It crashed today on this SQL query:

ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 collate utf8_bin;

Anyone got any similar experience?

EDIT:

The table in question actually doesn't contain much data; the database has much larger tables:

mysql> desc fielddefaultvalue;
+----------------------+---------------+------+-----+---------+----------------+
| Field                | Type          | Null | Key | Default | Extra          |
+----------------------+---------------+------+-----+---------+----------------+
| fielddefaultvalue_Id | bigint(20)    | NO   | PRI | NULL    | auto_increment |
| version              | bigint(20)    | NO   |     | NULL    |                |
| value_field          | varchar(2000) | YES  | MUL | NULL    |                |
| optimistic_version   | bigint(20)    | NO   |     | NULL    |                |
| property_fk          | bigint(20)    | YES  | MUL | NULL    |                |
| esg_fk               | bigint(20)    | YES  | MUL | NULL    |                |
+----------------------+---------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)

mysql> select count(*) from fielddefaultvalue;
+----------+
| count(*) |
+----------+
|      690 |
+----------+
1 row in set (0.00 sec)

It also fails on batches of 400-500 small inserts, but not all the time; the same script can run fine one time and crash the server the next.

EDIT 2: After crash recovery the error log also reports:

InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
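
That "log group capacity" figure comes from the redo log configuration. A quick way to inspect it (a sketch; these are standard InnoDB variables):

SHOW VARIABLES LIKE 'innodb_log_file%';
-- combined redo log capacity is roughly innodb_log_file_size * innodb_log_files_in_group
SELECT @@innodb_log_file_size * @@innodb_log_files_in_group AS combined_log_bytes;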

my.cnf

lower_case_table_names = 1
key_buffer              = 16M
key_buffer_size = 128M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
max_connections        = 100
table_cache            = 512
thread_concurrency     = 4
sort_buffer_size = 1M
read_buffer_size = 1M
table_open_cache = 512
read_rnd_buffer_size = 8M
innodb_file_per_table = 1
open_files_limit = 65536
default_character_set=utf8

query_cache_limit       = 1M
query_cache_size        = 64M

expire_logs_days        = 10
max_binlog_size         = 250M

innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M

EDIT: 5 hours later

It just crashed again on the same "regular" script; it's a 25,000-row update script on a date column.

Same error message:

InnoDB: Log scan progressed past the checkpoint lsn 186 4056481576
110620 17:30:52  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Read

Funny thing is, I ran this script earlier today and it didn't fail, but it did now.

Answered by inquam

Strange... I don't know how well optimized ALTER TABLE actually is on MySQL. Perhaps it consumes a lot of resources. If the table contains a lot of data, try to move all your data into a temporary table and empty the main one. Then do your ALTER TABLE and push the data back. If it has to do work on each row, then you can split the work up like this and do a few records at a time; see the sketch below.

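A rough sketch of that approach, using the table from the question (the scratch-table name is made up, and it assumes no foreign keys reference this table and nothing else writes to it while the script runs):

CREATE TABLE fielddefaultvalue_tmp LIKE fielddefaultvalue;
INSERT INTO fielddefaultvalue_tmp SELECT * FROM fielddefaultvalue;

TRUNCATE TABLE fielddefaultvalue;
ALTER TABLE fielddefaultvalue
  MODIFY value_field VARCHAR(2000) CHARACTER SET utf8 COLLATE utf8_bin;

-- push the rows back, optionally in small id ranges rather than one big statement
INSERT INTO fielddefaultvalue SELECT * FROM fielddefaultvalue_tmp;
DROP TABLE fielddefaultvalue_tmp;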

Answered by MarkR

The most likely explanation is running out of address space; please post your entire my.cnf.

Running a 32-bit OS in production is not a good idea.

However, what you should do is:

  1. Reproduce the fault on the same MySQL version on a non-production machine
  2. Check that you are using a properly supported, current build from Oracle. If you are not, then install one of those and reproduce the problem. If you are running Redhat (or similar), then you can use Oracle's RPMs. They also provide some other distributions' packages and binaries in a tar.gz file. Your package vendor may patch MySQL with some dodgy patches. I never run OEM MySQL builds in production.
  3. You seem to be running 32-bit. Ensure that you're not running out of address space; a quick check is sketched after this list.
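
A quick way to confirm from a client whether the running mysqld is a 32-bit build (these are standard MySQL system variables):

SELECT @@version, @@version_compile_os, @@version_compile_machine;
-- a 32-bit Linux build typically reports i686/i386 here; a 64-bit one reports x86_64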

If you can reproduce the bug using a standard Oracle build on a supported operating system, you are not running out of memory / address space, and there is no hardware fault, then you can submit the bug to Oracle.

The best idea is to reproduce the test-case with the minimum amount of data / table size.

Answered by ggiroux

Sounds like your innodb_log_file_size is not big enough - try with 256 MB in my.cnf: innodb_log_file_size=256M

You need to shut it down cleanly, remove the old log files, then restart - MySQL will recreate the log files at the new size. A rough outline of the sequence is below.

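A sketch of that sequence, assuming the default datadir of /var/lib/mysql and the usual two redo log files (adjust paths and the size to your setup):

-- 1. In my.cnf under [mysqld]:  innodb_log_file_size = 256M
-- 2. Shut the server down cleanly (e.g. mysqladmin -u root -p shutdown)
-- 3. Move the old redo logs out of the way:
--      mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
-- 4. Start mysqld again; it recreates the log files at the new size
-- 5. Then verify from a client:
SHOW VARIABLES LIKE 'innodb_log_file_size';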