Why is MySQL InnoDB insert so slow?

Note: this content comes from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must keep the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/9819271/

Tags: mysql, performance, innodb

Asked by Will

I am using large random numbers as keys (coming in from another system). Inserts and updates on fairly-small (as in a few million rows) tables are taking much longer than I think is reasonable.

I have distilled a very simple test to illustrate. In the test table I've tried to make it as simple as possible; my real code does not have such a simple layout and has relations and additional indices and such. However, a simpler setup shows equivalent performance.

Here are the results:

creating the MyISAM table took 0.000 seconds
creating 1024000 rows of test data took 1.243 seconds
inserting the test data took 6.335 seconds
selecting 1023742 rows of test data took 1.435 seconds
fetching 1023742 batches of test data took 0.037 seconds
dropping the table took 0.089 seconds
creating the InnoDB table took 0.276 seconds
creating 1024000 rows of test data took 1.165 seconds
inserting the test data took 3433.268 seconds
selecting 1023748 rows of test data took 4.220 seconds
fetching 1023748 batches of test data took 0.037 seconds
dropping the table took 0.288 seconds

Inserting 1M rows into MyISAM takes 6 seconds; into InnoDB takes 3433 seconds!

What am I doing wrong? What is misconfigured? (MySQL is a normal Ubuntu installation with defaults)

Here's the test code:

import sys, time, random
import MySQLdb as db

# usage: python script db_username db_password database_name

db = db.connect(host="127.0.0.1",port=3306,user=sys.argv[1],passwd=sys.argv[2],db=sys.argv[3]).cursor()

def test(engine):

    start = time.time() # fine for this purpose
    db.execute("""
CREATE TEMPORARY TABLE Testing123 (
k INTEGER PRIMARY KEY NOT NULL,
v VARCHAR(255) NOT NULL
) ENGINE=%s;"""%engine)
    duration = time.time()-start
    print "creating the %s table took %0.3f seconds"%(engine,duration)

    start = time.time()
    # 1 million rows in 100 chunks of 10K
    data = [[(str(random.getrandbits(48)) if a&1 else int(random.getrandbits(31))) for a in xrange(10*1024*2)] for b in xrange(100)]
    duration = time.time()-start
    print "creating %d rows of test data took %0.3f seconds"%(sum(len(rows)/2 for rows in data),duration)

    sql = "REPLACE INTO Testing123 (k,v) VALUES %s;"%("(%s,%s),"*(10*1024))[:-1]
    start = time.time()
    for rows in data:
        db.execute(sql,rows)
    duration = time.time()-start
    print "inserting the test data took %0.3f seconds"%duration

    # execute the query
    start = time.time()
    query = db.execute("SELECT k,v FROM Testing123;")
    duration = time.time()-start
    print "selecting %d rows of test data took %0.3f seconds"%(query,duration)

    # get the rows in chunks of 10K
    rows = 0
    start = time.time()
    while query:
        batch = min(query,10*1024)
        query -= batch
        rows += len(db.fetchmany(batch))
    duration = time.time()-start
    print "fetching %d batches of test data took %0.3f seconds"%(rows,duration)

    # drop the table
    start = time.time()
    db.execute("DROP TABLE Testing123;")
    duration = time.time()-start
    print "dropping the table took %0.3f seconds"%duration


test("MyISAM")
test("InnoDB")

Accepted answer by Paul Dixon

InnoDB doesn't cope well with 'random' primary keys. Try a sequential key or auto-increment, and I believe you'll see better performance. Your 'real' key field could still be indexed, but for a bulk insert you might be better off dropping and recreating that index in one hit after the insert is complete. Would be interested to see your benchmarks for that!

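One quick way to test that claim (a hypothetical tweak, not part of the original benchmark) would be to make the keys in the test script above sequential instead of random, replacing the data line with:

# sequential keys instead of random ones; chunk b holds keys b*10240 .. b*10240+10239
data = [[(str(random.getrandbits(48)) if a&1 else (b*10*1024 + a//2))
         for a in xrange(10*1024*2)] for b in xrange(100)]
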
Answer by flo

InnoDB has transaction support. You're not using explicit transactions, so InnoDB has to do a commit after each statement (it "performs a log flush to disk for every insert").

Execute this command before your loop:

START TRANSACTION

and this after your loop:

COMMIT
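
Applied to the test script above, the insert loop could be wrapped like this (a minimal sketch; it assumes each statement was previously being committed individually, which is what the per-insert log flush suggests):

db.execute("START TRANSACTION")
for rows in data:
    db.execute(sql, rows)
db.execute("COMMIT")  # a single log flush for the whole run instead of one per statement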

Answer by Philip Koshy

I needed to test an insert-heavy application in both MyISAM and InnoDB simultaneously. There was a single setting that resolved the speed issues I was having. Try setting the following:

innodb_flush_log_at_trx_commit = 2

Make sure you understand the risks by reading about the setting here.
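
It can also be changed at runtime without a restart (this assumes a user with the SUPER privilege; add it to my.cnf to make it permanent):

SET GLOBAL innodb_flush_log_at_trx_commit = 2;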

Also see https://dba.stackexchange.com/questions/12611/is-it-safe-to-use-innodb-flush-log-at-trx-commit-2/12612 and https://dba.stackexchange.com/a/29974/9405

Answer by Andrew

I get very different results on my system, but this is not using the defaults. You are likely bottlenecked on innodb-log-file-size, which is 5M by default. At innodb-log-file-size=100M I get results like this (all numbers are in seconds):

                             MyISAM     InnoDB
create table                  0.001      0.276
create 1024000 rows           2.441      2.228
insert test data             13.717     21.577
select 1023751 rows           2.958      2.394
fetch 1023751 batches         0.043      0.038
drop table                    0.132      0.305

Increasing innodb-log-file-size will speed this up by a few seconds. Dropping the durability guarantees by setting innodb-flush-log-at-trx-commit to 2 or 0 will improve the insert numbers somewhat as well.
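
A my.cnf sketch of the non-default setting behind the numbers above (note that on MySQL versions before 5.6, resizing the redo log requires a clean shutdown and removal of the old ib_logfile* files before restarting):

[mysqld]
innodb_log_file_size = 100M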

Answer by Kien Truong

The defaults for InnoDB are actually pretty bad. InnoDB is very RAM dependent; you might find better results if you tweak the settings. Here's a guide that I used: InnoDB optimization basics.

Answer by Allen King

This is an old topic but frequently searched. So long as you are aware of the risk (as stated by @Philip Koshy above) of losing committed transactions from the last second or so, you may set these global parameters before massive updates:

innodb_flush_log_at_trx_commit=0
sync_binlog=0

then turn them back on (if so desired) after the update is complete:

innodb_flush_log_at_trx_commit=1
sync_binlog=1

for full ACID compliance.
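
As runtime statements, the whole sequence might look like this (both variables are dynamic, so no restart is needed):

SET GLOBAL innodb_flush_log_at_trx_commit = 0;
SET GLOBAL sync_binlog = 0;
-- ... run the massive inserts/updates here ...
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
SET GLOBAL sync_binlog = 1;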

There is a huge difference in write/update performance between having these two settings off and on. In my experience, the other things discussed above make some difference, but only a marginal one.

One other thing that greatly impacts update/insert performance is a full-text index. In one case, inserting 2 million rows into a table with a full-text index on two text fields took 6 hours; the same insert took only 10 minutes after the full-text index was removed. More indexes, more time. So search indexes other than unique and primary keys may be removed prior to massive inserts/updates.

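Concretely, that pattern might look like this (the table and index names here are hypothetical):

ALTER TABLE articles DROP INDEX ft_body;
-- ... bulk insert the rows ...
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (title, body);
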
Answer by Ajay

What's your InnoDB buffer pool size? Make sure you've set it to 75% of your RAM. Inserts are usually faster when done in primary-key order for InnoDB, but with a big pool size you should see good speeds.

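A my.cnf sketch of that advice (the 6G value is only an illustration, assuming a dedicated server with 8 GB of RAM):

[mysqld]
innodb_buffer_pool_size = 6G
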
Answer by Shimon Doodkin

Things that sped up my inserts:

加速插入的事情:

  • I removed all keys from the table before a large insert into an empty table.
  • Then I found that the index did not fit in memory.
  • I also found I had sync_binlog=0 (it should be 1) even though the binlog is not used.
  • I also found I had not set innodb_buffer_pool_instances (see the my.cnf sketch after this list).
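
A my.cnf sketch covering the server settings from this list (the values are illustrative, not recommendations):

[mysqld]
sync_binlog = 1
innodb_buffer_pool_instances = 8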

Answer by William Entriken

Solution

  1. Create a new UNIQUE key that is identical to your current PRIMARY key.
  2. Add a new column id as an unsigned integer with auto_increment.
  3. Create the primary key on the new id column (see the SQL sketch after this list).
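
In SQL, that recipe might read as follows (the table name mytable and column external_id are hypothetical stand-ins for the existing random key):

ALTER TABLE mytable ADD UNIQUE KEY uk_external_id (external_id);
ALTER TABLE mytable DROP PRIMARY KEY;
ALTER TABLE mytable ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;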

Bam, immediate 10x+ insert improvement.
