当有几个索引时,为什么 MySQL InnoDB 对大表的插入/更新变得非常慢?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/2222861/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
Why do MySQL InnoDB inserts / updates on large tables get very slow when there are a few indexes?
提问by Greg
We have a series of tables that have grown organically to several million rows; in production, doing an insert or update can take up to two seconds. However, if I dump the table and recreate it from the dump, queries are lightning fast.
我们有一系列表,它们已经自然增长到几百万行,在生产环境中执行一次插入或更新最多可能需要两秒钟。但是,如果我把表转储出来再从转储重新创建它,查询就快如闪电。
We have rebuilt one of the tables by creating a copy, rebuilding the indexes, and then doing a rename switch and copying over any new rows. This worked because that table is only ever appended to. Doing this made the inserts and updates lightning quick.
我们重建过其中一个表:先创建一个副本并重建索引,然后做一次重命名切换,再把新增的行复制过去。这个办法之所以可行,是因为该表只会被追加数据。这样做之后,插入和更新都变得飞快。
My questions:
我的问题:
Why do inserts get slow over time? Why does recreating the table and doing an import fix this? Is there any way that I can rebuild indexes without locking a table for updates?
为什么插入会随着时间变慢?为什么重新创建表并执行导入可以解决此问题?有什么方法可以在不锁定表进行更新的情况下重建索引?
回答by Brent Baisley
InnoDB performance is heavily dependent on RAM. If the indexes don't fit in RAM, performance can drop considerably and quickly. Rebuilding the whole table improves performance because the data and indexes are now optimized.
InnoDB 的性能在很大程度上依赖于 RAM。如果索引放不进 RAM,性能会显著且迅速地下降。重建整个表之所以能提高性能,是因为数据和索引现在都经过了优化。
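A quick way to check whether the indexes fit in memory is to compare the buffer pool size with the table sizes; the statements below are standard MySQL, but the database and sizing are only illustrative assumptions. 检查索引是否放得进内存的一个快速方法,是把缓冲池大小与表的体积做对比;下面的语句是标准 MySQL 语法,但具体库名和大小仅为示意性假设。

```sql
-- 当前 InnoDB 缓冲池大小(字节)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- 当前库中各表的数据 + 索引体积(MB)
SELECT table_name,
       (data_length + index_length) / 1024 / 1024 AS total_mb
FROM information_schema.tables
WHERE table_schema = DATABASE();
```

If the totals are much larger than the buffer pool, index pages are constantly being read from disk. 如果总体积远大于缓冲池,索引页就会不断从磁盘读入。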
If you are only ever inserting into the table, MyISAM is better suited for that. You won't have locking issues if only appending, since the record is added to the end of the file. MyISAM will also allow you to use MERGE tables, which are really nice for taking parts of the data offline or archiving without having to do exports and/or deletes.
如果你只是插入到表中,MyISAM 更适合。如果只是追加,你不会有锁定问题,因为记录被添加到文件的末尾。MyISAM 还允许您使用 MERGE 表,这非常适合将部分数据脱机或存档,而无需进行导出和/或删除。
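A minimal sketch of the MERGE setup the answer mentions; the table and column names are hypothetical, but the ENGINE=MERGE / UNION syntax is standard MySQL. 下面是该回答提到的 MERGE 表的最小示意;表名和列名是假设的,但 ENGINE=MERGE / UNION 语法是标准 MySQL。

```sql
-- 两张结构完全相同的 MyISAM 底层表(按年份分片,名称为假设)
CREATE TABLE logs_2009 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE logs_2010 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;

-- MERGE 表把它们合并成一个逻辑表,新插入写到最后一张表
CREATE TABLE logs_all (id INT NOT NULL, msg VARCHAR(255))
    ENGINE=MERGE UNION=(logs_2009, logs_2010) INSERT_METHOD=LAST;

-- 归档时,只需把旧表从 UNION 列表中移除,无需导出或删除行:
ALTER TABLE logs_all UNION=(logs_2010);
```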
回答by ggiroux
It sounds like it's either
听起来好像是
- Index unbalancing over time
- Disk fragmentation
- Internal innodb datafile(s) fragmentation
- 索引随时间不平衡
- 磁盘碎片
- 内部 innodb 数据文件碎片
You could try ANALYZE TABLE foo,
which doesn't take locks, only does a few index dives, and finishes in a few seconds.
您可以尝试 ANALYZE TABLE foo,
它不会加锁,只做几次索引探测(index dive),几秒钟就能完成。
If this doesn't fix it, you can use
如果这不能解决它,您可以使用
mysql> SET PROFILING=1;
mysql> INSERT INTO foo ($testdata);
mysql> SHOW PROFILE FOR QUERY 1;
and you should see where most of the time is spent.
你应该看到大部分时间都花在了哪里。
Apparently InnoDB performs better when inserts are done in PK order; is this your case?
显然,按主键(PK)顺序插入时 InnoDB 的性能更好,您的情况是这样吗?
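The point about PK order can be sketched as follows; the table is hypothetical, but the reasoning applies to any InnoDB clustered index. 关于主键顺序的这一点可以用下面的示意说明;表是假设的,但道理适用于任何 InnoDB 聚簇索引。

```sql
-- 顺序主键(AUTO_INCREMENT):新行总是追加到聚簇索引末尾,
-- 页写入是顺序的,很少发生页分裂。
CREATE TABLE events (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=InnoDB;

-- 相比之下,用随机值(例如 UUID 字符串)作主键,
-- 插入会落在索引的随机位置,导致页分裂和随机 I/O。
```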
回答by hurikhan77
Updating a table requires its indexes to be maintained as well. If you are doing bulk inserts, try to do them in one transaction (as the dump and restore does). If the table is write-biased, I would think about dropping the indexes anyway, or letting a background job do read-processing of the table (e.g. by copying it to an indexed one).
更新表的同时也需要维护它的索引。如果要做批量插入,尽量放在一个事务里执行(就像转储和还原那样)。如果这个表以写为主,我会考虑干脆删掉索引,或者让后台作业对表做读取处理(例如把它复制到一张带索引的表中)。
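A sketch of doing bulk inserts in one transaction, mimicking what mysqldump output does (multi-row INSERTs inside one commit); table and values are hypothetical. 下面示意如何把批量插入放进一个事务,类似 mysqldump 输出的做法(一次提交内使用多行 INSERT);表和数据均为假设。

```sql
START TRANSACTION;

-- 多行 VALUES 比逐条 INSERT 减少往返和日志刷新
INSERT INTO foo (a, b) VALUES
    (1, 'x'),
    (2, 'y'),
    (3, 'z');

-- ... 更多批次 ...

COMMIT;  -- 只在最后刷新一次,而不是每行一次
```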
回答by ppostma1
Track down the my.ini that is in use and increase the key_buffer_size.
I had a 1.5GB table with a large key where the queries per second (all writes) were down to 17. I found it strange that, in the administration panel (while the table was locked for writing to speed up the process), it was doing 200 InnoDB reads per second against 24 writes per second.
找到正在使用的 my.ini 并调大 key_buffer_size。
我有一个 1.5GB、键很大的表,每秒查询数(全部是写入)降到了 17。我觉得奇怪的是,在管理面板里(当时为了加快处理,该表已被锁定写入),它每秒执行 200 次 InnoDB 读取,而写入只有每秒 24 次。
It was forced to read the index table off disk. I changed the key_buffer_size from 8M to 128M and the performance jumped to 150 queries per second completed and only had to perform 61 reads to get 240 writes. (after restart)
它被迫从磁盘读取索引表。我将 key_buffer_size 从 8M 更改为 128M,性能跃升至每秒完成 150 次查询,只需执行 61 次读取即可获得 240 次写入。(重启后)
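The change described above can be made without editing my.ini by setting the variable at runtime; the variable and status counters below are standard MySQL, and the 128M value is the one from this answer. 上面描述的修改也可以不编辑 my.ini,而是在运行时设置变量;下面的变量和状态计数器都是标准 MySQL,128M 这个值来自本回答。

```sql
-- 运行时调大键缓冲(重启后失效;持久化仍需写入 my.ini)
SET GLOBAL key_buffer_size = 128 * 1024 * 1024;  -- 128M

-- 衡量效果:Key_reads 是被迫从磁盘读索引块的次数,
-- 它与 Key_read_requests 的比值越低越好。
SHOW GLOBAL STATUS LIKE 'Key_reads';
SHOW GLOBAL STATUS LIKE 'Key_read_requests';
```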
回答by Rafa
Could it be due to fragmentation of XFS?
可能是由于 XFS 的碎片化吗?
Copy/pasted from http://stevesubuntutweaks.blogspot.com/2010/07/should-you-use-xfs-file-system.html:
从http://stevesubuntutweaks.blogspot.com/2010/07/should-you-use-xfs-file-system.html复制/粘贴:
To check the fragmentation level of a drive, for example located at /dev/sda6:
sudo xfs_db -c frag -r /dev/sda6
The result will look something like so:
actual 51270, ideal 174, fragmentation factor 99.66%
That is an actual result I got from the first time I installed these utilities, previously having no knowledge of XFS maintenance. Pretty nasty. Basically, the 174 files on the partition were spread over 51270 separate pieces. To defragment, run the following command:
sudo xfs_fsr -v /dev/sda6
Let it run for a while. the -v option lets it show the progress. After it finishes, try checking the fragmentation level again:
sudo xfs_db -c frag -r /dev/sda6
actual 176, ideal 174, fragmentation factor 1.14%
Much better!
要检查某个驱动器(例如位于 /dev/sda6)的碎片程度:
sudo xfs_db -c frag -r /dev/sda6
结果看起来类似这样:
actual 51270, ideal 174, fragmentation factor 99.66%
这是我第一次安装这些工具时得到的真实结果,在那之前我对 XFS 维护一无所知。相当糟糕:分区上的 174 个文件被分散成了 51270 个独立的碎片。要进行碎片整理,运行以下命令:
sudo xfs_fsr -v /dev/sda6
让它运行一段时间,-v 选项会显示进度。完成后,再次检查碎片程度:
sudo xfs_db -c frag -r /dev/sda6
actual 176, ideal 174, fragmentation factor 1.14%
好多了!