Importing large sql file to MySql via command line

Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/19483087/

Date: 2020-08-31 19:10:45  Source: igfitidea

Tags: mysql, ubuntu

Question by user2028856

I'm trying to import an sql file of around 300MB to MySql via command line in Ubuntu. I used

source /var/www/myfile.sql;

Right now it's displaying seemingly infinite rows of:

Query OK, 1 row affected (0.03 sec)

However it's been running a little while now. I've not imported a file this large before so I just want to know whether this is normal, if the process stalls or has some errors, will this show up in command line or will this process go on indefinitely?

Thanks

Answer by Martin Nuc

You can import the .sql file using standard input like this:

mysql -u <user> -p<password> <dbname> < file.sql

Note: there shouldn't be a space between -p and <password>.

Reference: http://dev.mysql.com/doc/refman/5.0/en/mysql-batch-commands.html

Note for suggested edits: this answer was slightly changed by suggested edits to use the inline password parameter. I can recommend it for scripts, but you should be aware that when you write the password directly in the parameter (-p<password>) it may be cached by the shell history, revealing your password to anyone who can read the history file. Whereas -p on its own asks you to input the password via standard input.
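
As a sketch of one safer alternative (the file name, user, password, and database below are hypothetical examples): credentials can live in an option file that only the owner can read, and mysql can be pointed at it with --defaults-extra-file, so the password never appears on the command line or in shell history.

```shell
#!/bin/sh
# Sketch: keep MySQL credentials in an option file instead of on the
# command line. File path, user, password, and database name are
# hypothetical examples.
cat > import.cnf <<'EOF'
[client]
user=myuser
password=mysecret
EOF
chmod 600 import.cnf   # readable by the owner only

# The import would then run as follows (shown, not executed, since it
# needs a live MySQL server):
echo "mysql --defaults-extra-file=import.cnf mydb < file.sql"
```

Note that --defaults-extra-file has to be the first option on the mysql command line.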

Answer by Paresh Behede

Regarding the time taken to import huge files: the most important reason it takes so long is that MySQL's default setting is "autocommit = true". You must turn that off before importing your file, and then the import works like a gem...

First open MySQL:

mysql -u root -p

Then you just need to do the following:

mysql> use your_db

mysql> SET autocommit=0 ; source the_sql_file.sql ; COMMIT ;
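
If you'd rather not type those commands interactively, the same trick can be scripted; a minimal sketch (the wrapper and dump file names are hypothetical):

```shell
#!/bin/sh
# Sketch: generate a small wrapper that turns autocommit off, sources
# the dump, and commits once at the end. File names are hypothetical.
{
  echo "SET autocommit=0;"
  echo "source the_sql_file.sql"
  echo "COMMIT;"
} > import_wrapper.sql

cat import_wrapper.sql
# It would then be run against a live server with:
#   mysql -u root -p your_db < import_wrapper.sql
```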

Answer by Bill Karwin

+1 to @MartinNuc; you can run the mysql client in batch mode and then you won't see the long stream of "OK" lines.

The amount of time it takes to import a given SQL file depends on a lot of things: not only the size of the file, but the type of statements in it, how powerful your server is, and how many other things are running at the same time.

@MartinNuc says he can load 4GB of SQL in 4-5 minutes, but I have run 0.5 GB SQL files and had it take 45 minutes on a smaller server.

We can't really guess how long it will take to run your SQL script on your server.

Re your comment,

@MartinNuc is correct: you can choose to make the mysql client print every statement. Or you could open a second session and run SHOW PROCESSLIST to see what's running. But you are probably more interested in a "percentage done" figure or an estimate of how long it will take to complete the remaining statements.

Sorry, there is no such feature. The mysql client doesn't know how long it will take to run later statements, or even how many there are. So it can't give a meaningful estimate for how much time it will take to complete.
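
One common workaround, assuming the pv utility is installed, is to pipe the dump file through pv, which prints bytes transferred, throughput, and an ETA derived from the file size. A sketch (database name and file path are hypothetical):

```shell
#!/bin/sh
# Sketch: build an import command with a progress bar via pv, falling
# back to a plain redirect when pv isn't available. Names are
# hypothetical; the command is printed rather than executed, since it
# needs a live MySQL server.
SQLFILE=/var/www/myfile.sql
DB=mydb
if command -v pv >/dev/null 2>&1; then
  CMD="pv $SQLFILE | mysql -u root -p $DB"
else
  CMD="mysql -u root -p $DB < $SQLFILE"
fi
echo "$CMD" | tee import_command.txt   # saved so a wrapper could run it
```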

Answer by Chris Richardson

The solution I use for large sql restores is the mysqldumpsplitter script. I split my sql.gz into individual tables, then load up something like MySQL Workbench and process it as a restore to the desired schema.

Here is the script https://github.com/kedarvj/mysqldumpsplitter

And this works for larger sql restores; my average on one site I work with is a 2.5 GB sql.gz file, 20 GB uncompressed, and ~100 GB once fully restored.
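
The same splitting idea can be sketched with plain awk, cutting a dump on the "-- Table structure for table" comment markers that mysqldump emits (the sample dump and directory names below are hypothetical stand-ins):

```shell
#!/bin/sh
# Sketch: split a mysqldump file into one .sql file per table by cutting
# on mysqldump's "-- Table structure for table" markers. The sample dump
# and directory names are hypothetical.
mkdir -p split_tables
cat > dump.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE users (id INT);
-- Table structure for table `orders`
CREATE TABLE orders (id INT);
EOF

awk '/^-- Table structure for table/ {
       split($0, parts, "`")                 # table name sits between backticks
       out = "split_tables/" parts[2] ".sql"
     }
     out { print > out }' dump.sql

ls split_tables    # one file per table
```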

Answer by Shailesh Sharma

Importing large sql file to MySql via command line

  1. First, download the file.
  2. Place the file in your home directory.
  3. Run the following command in your terminal (CMD).
  4. Syntax: mysql -u username -p databasename < file.sql

Example: mysql -u root -p aanew < aanew.sql
