Linux 将多个 .sql 转储文件从 shell 导入 mysql 数据库
声明：本页面是 StackOverFlow 热门问题的中英对照翻译，遵循 CC BY-SA 4.0 协议。如果您需要使用它，必须同样遵循 CC BY-SA 许可：注明原文地址和作者信息，并将其归于原作者（不是我）：StackOverFlow
原文地址: http://stackoverflow.com/questions/4708013/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverFlow
Import Multiple .sql dump files into mysql database from shell
提问by Derek Organ
I have a directory with a bunch of .sql files that are mysql dumps of each database on my server.
我有一个目录，里面有一堆 .sql 文件，它们是我服务器上每个数据库的 mysql 转储。
e.g.
例如
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them actually.
实际上有很多。
I need to create a shell script, or probably just a one-liner, that will import each database.
我需要创建一个 shell 脚本（或者一条单行命令）来导入每个数据库。
I'm running on a Linux Debian machine.
我在 Linux Debian 机器上运行。
I'm thinking there is some way to pipe the results of an ls into some find command or something..
我想应该有办法把 ls 的结果通过管道传给某个 find 命令之类的。
any help and education is much appreciated.
非常感谢任何帮助和教育。
EDIT
编辑
So ultimately I want to automatically import one file at a time into the database.
所以最终我想一次自动将一个文件导入数据库。
E.g. if I did it manually on one it would be:
例如，如果我手动导入其中一个文件，命令会是：
mysql -u root -ppassword < database1-2011-01-15.sql
采纳答案by D.Shawley
cat *.sql | mysql
Do you need them in any specific order?
cat *.sql | mysql
您需要按特定顺序导入它们吗？
If you have too many to handle this way, then try something like:
如果文件多到没法这样处理，可以试试下面这样的做法：
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file instead of having it read from stdin.
这也避免了通过管道传递脚本输入时可能遇到的一些问题，不过在 Linux 下用管道处理一般不会有什么问题。这种方法的好处是 mysql 工具会自己读取每个文件，而不是从 stdin 读取。
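If the dumps do need to be applied in a specific order, one simple variation (my own addition, not part of the original answer) is to sort the file list before handing it to awk:
如果这些转储确实需要按特定顺序导入，一个简单的变体（这是我的补充，并非原回答内容）是先对文件列表排序，再交给 awk：
# sketch: apply the dumps in sorted filename order
find . -name '*.sql' | sort | awk '{ print "source",$0 }' | mysql --batch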
回答by Navi
I don't remember the syntax of mysqldump but it will be something like this
我不记得 mysqldump 的语法,但它会是这样的
find . -name '*.sql' | xargs mysql ...
回答by Ronnie
One-liner to read in all .sql files and import them:
读取所有 .sql 文件并导入它们的单行命令：
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
唯一的技巧是用 bash 的子字符串替换去掉 .sql 后缀，从而得到数据库名。
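Since the filenames in the question also carry a date (database1-2011-01-15.sql), a hedged variant of the same idea (my own sketch, with placeholder credentials) strips the date suffix as well, so the database name comes out as database1:
由于问题中的文件名还带有日期（database1-2011-01-15.sql），下面是同一思路的一个变体（我自己的示意写法，账号密码仅为占位符），把日期后缀一并去掉，得到的数据库名就是 database1：
for SQL in *.sql; do
  DB=${SQL%-????-??-??.sql}   # strip a trailing "-YYYY-MM-DD.sql"
  echo "importing $DB"
  mysql -u root -ppassword "$DB" < "$SQL"
done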
回答by tylerl
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
前段时间我写了一个脚本专门做这件事，我给它起了个（毫无创意的）名字“myload”。它的作用就是把 SQL 文件加载进 MySQL。
It's simple and straight-forward; allows you to specify mysql connection parameters, and will decompress gzip'ed sql files on-the-fly. It assumes you have a file per database, and the base of the filename is the desired database name.
它简单直接；允许你指定 mysql 连接参数，并且能即时解压 gzip 压缩的 sql 文件。它假设每个数据库对应一个文件，而文件名的主体部分就是目标数据库名。
So:
所以:
myload foo.sql bar.sql.gz
Will create (if not exist) databases called "foo" and "bar", and import the sql file into each.
将创建(如果不存在)名为“foo”和“bar”的数据库,并将 sql 文件导入每个数据库。
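The script itself is not reproduced in the answer; purely as an illustration (my own sketch, not tylerl's actual code), a minimal loader with that behaviour might look roughly like this, where the connection parameters are placeholders:
原回答并没有贴出脚本本身；下面是一个纯属示意的最小实现草图（是我自己写的，并非 tylerl 的原脚本），其中的连接参数只是占位符：
#!/bin/bash
# minimal sketch: the database name comes from the file name, and .sql.gz files are decompressed on the fly
MYSQL_ARGS="-u root -ppassword"
for f in "$@"; do
  db=$(basename "$f"); db=${db%.gz}; db=${db%.sql}
  mysql $MYSQL_ARGS -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  case "$f" in
    *.gz) gunzip -c "$f" | mysql $MYSQL_ARGS "$db" ;;
    *)    mysql $MYSQL_ARGS "$db" < "$f" ;;
  esac
done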
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
对于过程的另一端,我编写了这个脚本 (mydumpall),它为每个数据库(或由名称或正则表达式指定的某个子集)创建相应的 sql(或 sql.gz)文件。
回答by jshirey-ncusa
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script 上有一个很棒的小脚本，它可以把一个巨大的 mysqldump 文件按表拆分成单独的文件。然后你可以运行下面这个非常简单的脚本，从这些文件加载数据库：
for i in *.sql
do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < $i
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
mydumpsplitter 也可以直接处理 .gz 文件，但这比先用 gunzip 解压、再对解压后的文件运行它要慢得多。
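For example (the file name here is just an illustration), decompressing first is as simple as:
例如（这里的文件名只是示意），先解压非常简单：
gunzip -c huge-dump.sql.gz > huge-dump.sql   # write an uncompressed copy, keeping the .gz
# then run mydumpsplitter on huge-dump.sql instead of the .gz file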
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
我说巨大，但我想一切都是相对的。拆分一个包含 2000 个表、200MB 的转储文件，我花了大约 6-8 分钟。