Why is import of SQL so slow on Linux?

Warning: this page is based on a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same CC BY-SA terms and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/10227078/

Why is import of SQL so slow?

Tags: mysql, windows, linux

Asked by Lars Steen

I have an SQL file containing two tables with around 600,000 rows altogether. Yesterday, I tried to import the file into my MySQL database on Fedora 16, and it took over 2 hours to import the file. On my Windows PC it took 7 minutes. My Linux and Windows machines have exactly the same hardware. A couple of my friends tried it too, and they had a similar experience.

The command we were using was: mysql -u root database_name < sql_file.sql.

Why is there such a difference in speed?

Accepted answer by David Schwartz

My bet is that Fedora 16 is honoring the transaction/sync semantics and Windows is not. If you do the math, 600,000 updates in two hours is 5,000 per minute. That's the same order of magnitude as a disk's rotation rate.

You can try adding SET autocommit=0; to the beginning of your import file and COMMIT; to the end. See this page for more information.

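For example, without editing the dump itself, the whole import can be wrapped in a single transaction from the shell (a minimal sketch, reusing the file and database names from the question and assuming the tables use InnoDB):

    { echo "SET autocommit=0;"; cat sql_file.sql; echo "COMMIT;"; } | mysql -u root database_name

With autocommit disabled, the server syncs the 600,000 inserts to disk once at the final COMMIT instead of once per statement.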

Answered by Mahesh Patil

Why don't you export the .sql file with bulk-insert options and import that instead? Try these options while taking the backup with mysqldump (see the sketch after the note below):

--extended-insert: use multiple-row insert statements

--quick: do not buffer row data, which helps when tables are large

Note: make sure to increase max_allowed_packet to 32M or more in the my.cnf file before generating the .sql file.

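To make this concrete, the dump and re-import could look like the following (a sketch; --extended-insert and --quick are genuine mysqldump flags, and recent mysqldump versions enable both by default):

    mysqldump --extended-insert --quick -u root database_name > sql_file.sql
    mysql -u root database_name < sql_file.sql

and the packet-size setting from the note would go into my.cnf, for example:

    [mysqld]
    max_allowed_packet=32M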