MySQL 导入数据库但忽略特定表

Disclaimer: this page is a translated copy of a popular StackOverflow Q&A, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow. Original: http://stackoverflow.com/questions/16735344/

Date: 2020-08-31 17:44:43 - Source: igfitidea

MySQL import database but ignore specific table

Tags: mysql, mysqldump, mysqlimport

Asked by DanielAttard

I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport to import that database; however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables but ignore some of them? Thank you.


Accepted answer by RandomSeed

mysqlimport is not the right tool for importing SQL statements. This tool is meant to import formatted text files such as CSV. What you want to do is feed your SQL dump directly to the mysql client with a command like this one:


mysql -D your_database < your_sql_dump.sql

Neither mysql nor mysqlimport provides the feature you need. Your best chance would be importing the whole dump, then dropping the tables you do not want.


If you have access to the server where the dump comes from, then you could create a new dump with:

mysqldump --ignore-table=database.table_you_dont_want1 --ignore-table=database.table_you_dont_want2 ...




Check out this answer for a workaround to skip importing some tables.


Answer by Don

The accepted answer by RandomSeed could take a long time! Importing the table (just to drop it later) could be very wasteful depending on size.


For a file created using


mysqldump -u user -ppasswd --opt --routines DBname > DBdump.sql

I currently get a file about 7GB, 6GB of which is data for a log table that I don't 'need' to be there; reloading this file takes a couple of hours. If I need to reload (for development purposes, or if ever required for a live recovery) I skim the file thus:


sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql

And reload with:


mysql -u user -ppasswd DBname < reduced.sql

This gives me a complete database, with the "unwanted" table created but empty. If you really don't want the tables at all, simply drop the empty tables after the load finishes.


For multiple tables you could do something like this:


sed '/INSERT INTO `TABLE1_TO_SKIP`/d' DBdump.sql | \
sed '/INSERT INTO `TABLE2_TO_SKIP`/d' | \
sed '/INSERT INTO `TABLE3_TO_SKIP`/d' > reduced.sql
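The chained seds can also be collapsed into a single pass with an alternation group. A minimal sketch with fabricated table names (-E enables extended regexes on both GNU and BSD sed):

```shell
# Build a tiny stand-in dump so the filter can be demonstrated end to end.
printf 'INSERT INTO `keep_me` VALUES (1);\nINSERT INTO `TABLE1_TO_SKIP` VALUES (2);\nINSERT INTO `TABLE2_TO_SKIP` VALUES (3);\n' > DBdump.sql

# One sed pass deletes the INSERTs of every table in the alternation group.
sed -E '/INSERT INTO `(TABLE1_TO_SKIP|TABLE2_TO_SKIP|TABLE3_TO_SKIP)`/d' DBdump.sql > reduced.sql
```

Only the keep_me row survives in reduced.sql; any CREATE TABLE statements (not present in this toy dump) would pass through untouched.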

There IS a 'gotcha' - watch out for procedures in your dump that might contain "INSERT INTO TABLE_TO_SKIP".


Answer by feskr

For anyone working with .sql.gz files: I found the following solution to be very useful. Our database was 25GB+ and I had to remove the log tables.


gzip -cd "./mydb.sql.gz" | sed -r '/INSERT INTO `(log_table_1|log_table_2|log_table_3|log_table_4)`/d' | gzip > "./mydb2.sql.gz"
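The same pipeline can be tried end to end on a tiny fabricated dump (hypothetical log-table names; -E is the portable spelling of GNU sed's -r):

```shell
# Fabricate a small gzipped "dump" with one wanted and one unwanted table.
printf 'INSERT INTO `orders` VALUES (1);\nINSERT INTO `log_table_1` VALUES (2);\n' | gzip > mydb.sql.gz

# Decompress, drop the log-table INSERTs, recompress - all in one stream.
gzip -cd mydb.sql.gz | sed -E '/INSERT INTO `(log_table_1|log_table_2)`/d' | gzip > mydb2.sql.gz
```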

Thanks to Don's answer, Xosofox's comment, and this related post: Use zcat and sed or awk to edit compressed .gz text file


Answer by dGo

A little old, but I figure it might still come in handy...


I liked @Don's answer (https://stackoverflow.com/a/26379517/1446005) but found it very annoying that you'd have to write to another file first... In my particular case that would take too much time and disk space.


So I wrote a little bash script:


#!/bin/bash

# Tables whose data should be skipped (names are placeholders).
tables=(table1_to_skip table2_to_skip ... tableN_to_skip)

# Join the names into one alternation: "table1_to_skip|table2_to_skip|..."
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}
grepExp="INSERT INTO \`($trimmed)\`"

# Decompress the dump passed as the first argument, drop the unwanted
# INSERTs, and pipe the rest straight into mysql.
zcat "$1" | grep -vE "$grepExp" | mysql -uroot -p

This does not generate a new SQL script but pipes the dump directly into the database. It still creates the tables; it just doesn't import the data (which was the problem I had with huge log tables).

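The pattern-building part of the script can be dry-run without a database. A sketch with made-up table names, writing to a file instead of piping into mysql (requires bash for arrays):

```shell
# Same pattern construction as in the script above.
tables=(log_a log_b)
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}               # "log_a|log_b"
grepExp="INSERT INTO \`($trimmed)\`"

# Feed a fake dump through the filter; mysql is replaced by a file here.
printf 'INSERT INTO `users` VALUES (1);\nINSERT INTO `log_a` VALUES (2);\n' \
  | grep -vE "$grepExp" > kept.sql
```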

Answer by DUzun

Unless you have ignored the tables during the dump with mysqldump --ignore-table=database.unwanted_table, you have to use some script or tool to filter out the data you don't want to import from the dump file before passing it to the mysql client.


Here is a bash/sh function that excludes the unwanted tables from an SQL dump on the fly (through a pipe):


# Accepts one argument, the list of tables to exclude (case-insensitive).
# Eg. filt_exclude '%session% action_log %_cache'
filt_exclude() {
    local excl_tns;
    if [ -n "$1" ]; then
        # trim & replace /[,;\s]+/ with '|' & replace '%' with '[^`]*'
        excl_tns=$(echo "$1" | sed -r 's/^[[:space:]]*//g; s/[[:space:]]*$//g; s/[[:space:]]+/|/g; s/[,;]+/|/g; s/%/[^\`]\*/g');

        grep -viE "(^INSERT INTO \`($excl_tns)\`)|(^DROP TABLE (IF EXISTS )?\`($excl_tns)\`)|^LOCK TABLES \`($excl_tns)\` WRITE" | \
        sed 's/^CREATE TABLE `/CREATE TABLE IF NOT EXISTS `/g'
    else
        cat
    fi
}

Suppose you have a dump created like so:


MYSQL_PWD="my-pass" mysqldump -u user --hex-blob db_name | \
pigz -9 > dump.sql.gz

And want to exclude some unwanted tables before importing:


pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
MYSQL_PWD="my-pass" mysql -u user db_name

Or you could pipe into a file or any other tool before importing to DB.


Answer by Domenic D.

If desired, you can do this one table at a time:


mysqldump -p sourceDatabase tableName > tableName.sql
mysql -p -D targetDatabase < tableName.sql

Answer by Alexey Muravyov

Here is my script to exclude some tables from a mysql dump. I use it to restore a DB when I need to keep the existing orders and payments data:


exclude_tables_from_dump.sh


#!/bin/bash

if [ ! -f "$1" ];
then
    echo "Usage: $0 mysql_dump.sql"
    exit
fi

# Tables to keep untouched in the target DB: their DROP/CREATE/INSERT
# statements are neutralized below.
declare -a TABLES=(
user
order
order_product
order_status
payments
)

CMD="cat $1"
for TBL in "${TABLES[@]}";do
    CMD+="|sed 's/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXISTS \`${TBL}\`/g'"
    CMD+="|sed 's/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g'"
    CMD+="|sed -r '/INSERT INTO \`${TBL}\`/d'"
    CMD+="|sed '/DELIMITER\ \;\;/,/DELIMITER\ \;/d'"
done

eval $CMD

It avoids dropping and re-creating those tables, and avoids inserting data into them. It also strips all FUNCTIONS and PROCEDURES stored between DELIMITER ;; and DELIMITER ;

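The three sed substitutions can be checked in isolation on a toy dump (table name hypothetical):

```shell
TBL=user

# One protected table (user) and one ordinary table (orders).
printf 'DROP TABLE IF EXISTS `user`;\nCREATE TABLE `user` (id INT);\nINSERT INTO `user` VALUES (1);\nINSERT INTO `orders` VALUES (2);\n' > sample.sql

# Comment out the DROP, soften the CREATE, delete the INSERTs - for $TBL only.
sed "s/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXISTS \`${TBL}\`/g" sample.sql \
  | sed "s/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g" \
  | sed "/INSERT INTO \`${TBL}\`/d" > filtered.sql
```

The orders statements pass through unchanged; only the protected table is neutralized.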

Answer by Zoran

I would not use it in production, but if I had to quickly import a backup that contains many smaller tables and one big monster table that might take hours to import, I would most probably run:

grep -v unwanted_table_name original.sql > reduced.sql


and then mysql -f < reduced.sql

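A quick sketch of that approach with hypothetical names. Note that a plain grep -v drops every line mentioning the name, including its CREATE TABLE statement, so the monster table won't even exist afterwards (hence the -f to force mysql past any resulting errors):

```shell
# Toy dump: one huge table (monster) and one small one.
printf 'CREATE TABLE `monster` (id INT);\nINSERT INTO `monster` VALUES (1);\nINSERT INTO `small` VALUES (2);\n' > original.sql

# Strip every line that mentions the monster table, schema included.
grep -v monster original.sql > reduced.sql
```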