MySQL: mysqldump a table without dumping the primary key

Note: The content below is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/1018669/


mysqldump table without dumping the primary key

Tags: mysql, mysqldump, load-data-infile

Asked by Zee Spencer

I have one table spread across two servers running MySql 4. I need to merge these into one server for our test environment.

These tables literally have millions of records each, and they are on two servers because of how huge they are. Any altering or paging of the tables would be too big a performance hit.

Because they are on a production environment, it is impossible for me to alter them in any way on their existing servers.

The issue is the primary key is a unique auto incrementing field, so there are intersections.

I've been trying to figure out how to use the mysqldump command to ignore certain fields, but the --disable-keys merely alters the table, instead of getting rid of the keys completely.

At this point it's looking like I'm going to need to modify the database structure to utilize a checksum or hash for the primary key as a combination of the two unique fields that actually should be unique... I really don't want to do this.

Help!

Accepted answer by longneck

If you don't care what the value of the auto_increment column will be, then just load the first file, rename the table, then recreate the table and load the second file. Finally, use:

INSERT newly_created_table_name (all, columns, except, the, auto_increment, column)
       SELECT all, columns, except, the, auto_increment, column
         FROM renamed_table_name
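
A minimal sketch of that whole sequence (all table and column names here are hypothetical, assuming the merged table is my_table with columns id, name, email and id as the auto_increment key):

-- shell: load the first server's dump, then connect with the mysql client
--   mysql test_db < dump_from_server_a.sql
RENAME TABLE my_table TO my_table_server_a;
-- shell: load the second server's dump, which recreates my_table with its own data
--   mysql test_db < dump_from_server_b.sql
INSERT INTO my_table (name, email)       -- every column except the auto_increment one
       SELECT name, email
         FROM my_table_server_a;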

Answer by Eric Seastrand

To solve this problem, I looked up this question, found @pumpkinthehead's answer, and realized that all we need to do is find+replace the primary key in each row with NULL so that MySQL will use the default auto_increment value instead.

(your complete mysqldump command) | sed -e "s/([0-9]*,/(NULL,/gi" > my_dump_with_no_primary_keys.sql

Original output:

INSERT INTO `core_config_data` VALUES
    (2735,'default',0,'productupdates/configuration/sender_email_identity','general'),
    (2736,'default',0,'productupdates/configuration/unsubscribe','1'),

Transformed Output:

INSERT INTO `core_config_data` VALUES
    (NULL,'default',0,'productupdates/configuration/sender_email_identity','general'),
    (NULL,'default',0,'productupdates/configuration/unsubscribe','1'),

Note: This is still a hack; for example, it will fail if your auto-increment column is not the first column, but it solves my problem 99% of the time.

Answer by jimyi

You can create a view of the table without the primary key column, then run mysqldump on that view.

So if your table "users" has the columns: id, name, email

CREATE VIEW myView AS
SELECT name, email FROM users;

Edit: ah I see, I'm not sure if there's any other way then.

Answer by Fedir RYKHTIK

  1. Clone your table
  2. Drop the primary key column in the clone table
  3. Dump the clone table without the structure (but with the -c option to get complete inserts)
  4. Import it where you want (a SQL sketch of these steps follows below)
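
A SQL sketch of those four steps, assuming a hypothetical table users whose auto-increment primary key is id (database and file names are placeholders):

CREATE TABLE users_clone LIKE users;             -- 1. clone the structure
INSERT INTO users_clone SELECT * FROM users;     --    and copy the data
ALTER TABLE users_clone DROP COLUMN id;          -- 2. drop the auto-increment column
-- 3. shell: dump data only, with complete (column-listed) inserts
--    mysqldump -c --no-create-info your_db users_clone > users_clone.sql
-- 4. import users_clone.sql where you want it (adjust the table name in the file if needed)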

Answer by pumpkinthehead

This is a total pain. I get around this issue by running something like

sed -e "s/([0-9]*,/(/gi" export.sql > expor2.sql 

on the dump to get rid of the primary keys and then

sed -e "s/VALUES/(col1,col2,...etc.) VALUES/gi" LinxImport2.sql > LinxImport3.sql

for all of the columns except for the primary key. Of course, you'll have to be careful that ([0-9]*, doesn't replace anything that you actually want.

Hope that helps someone.

Answer by Jeff Ferland

SELECT null as fake_pk, `col_2`, `col_3`, `col_4` INTO OUTFILE 'your_file'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM your_table;

LOAD DATA INFILE 'your_file' INTO TABLE your_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';

For added fanciness, you can set a BEFORE INSERT trigger on your receiving table that sets the new primary key for each row before the insertion occurs, thereby using regular dumps and still clearing your pk. Not tested, but feeling pretty confident about it.

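A sketch of such a trigger (untested, echoing the answer's caveat; it assumes a server version with trigger support and an auto-increment column named id):

DELIMITER //
CREATE TRIGGER your_table_reset_pk
BEFORE INSERT ON your_table
FOR EACH ROW
BEGIN
  SET NEW.id = NULL;   -- let AUTO_INCREMENT assign a fresh value when the row is written
END //
DELIMITER ;

-- and remove it once the import is finished
DROP TRIGGER your_table_reset_pk;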

Answer by LinuxMatt

Use a dummy temporary primary key:

Use mysqldump normally with --opts -c. For example, your primary key is 'id'. Edit the output files and add a column 'dummy_id' to the structure of your table with the same type as 'id' (but not a primary key, of course). Then modify the INSERT statements and replace 'id' with 'dummy_id'. Once imported, drop the column 'dummy_id'.

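A hypothetical excerpt of the edited dump, assuming the original table is users(id, name, email):

CREATE TABLE `users` (
  `id` int NOT NULL AUTO_INCREMENT,
  `dummy_id` int NOT NULL,             -- added column: same type as `id`, but not a key
  `name` varchar(255) NOT NULL,
  `email` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
);

-- in the -c (complete) INSERTs, `id` is swapped for `dummy_id` in the column list
INSERT INTO `users` (`dummy_id`, `name`, `email`) VALUES (42, 'alice', 'alice@example.com');

-- once everything is imported, drop the helper column
ALTER TABLE `users` DROP COLUMN `dummy_id`;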

Answer by symcbean

jimyi was on the right track.

This is one of the reasons why autoincrement keys are a PITA. One solution is not to delete data but add to it.

CREATE VIEW myView AS
SELECT id*10+$x, name, email FROM users

(where $x is a single digit uniquely identifying the original database), either creating the view on the source database (which you hint may not be possible), using an extract routine like the one Autocracy describes, or loading the data into staging tables on the test box.

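For the staging-table variant, a sketch of the merge on the test box (the staging table names are hypothetical):

-- after loading each server's dump into its own staging table, remap the keys on insert
INSERT INTO users (id, name, email)
  SELECT id*10+1, name, email FROM users_staging_a;

INSERT INTO users (id, name, email)
  SELECT id*10+2, name, email FROM users_staging_b;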

Alternatively, don't create the table on the test system - instead put in separate tables for the src data then create a view which fetches from them both:

CREATE VIEW users AS
(SELECT * FROM users_on_a) UNION (SELECT * FROM users_on_b)

C.

Answer by Hubro

The solution I've been using is to just do a regular SQL export of the data I'm exporting, then removing the primary key from the insert statements using a RegEx find&replace editor. Personally I use Sublime Text, but I'm sure TextMate, Notepad++ etc. can do the same.

Then I just run the query in whichever database the data should be inserted into, by copy-pasting the query into HeidiSQL's query window or PHPMyAdmin. If there's a LOT of data, I save the insert query to an SQL file and use file import instead. Copying and pasting huge amounts of text often makes Chrome freeze.

This might sound like a lot of work, but I rarely spend more than a couple of minutes between the export and the import, probably a lot less than I would spend on the accepted solution. I've used this method on several hundred thousand rows without issue, but I think it would get problematic when you reach the millions.

Answer by Aidan Miles

I like the temporary table route.

create temporary table my_table_copy
select * from my_table;

alter table my_table_copy drop id;

-- Use your favorite dumping method for the temporary table
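
One caveat: a temporary table is visible only to the connection that created it, so mysqldump (which opens its own connection) won't see it. A sketch of an in-session export instead, or simply drop the TEMPORARY keyword and dump the clone normally:

SELECT *
  FROM my_table_copy
  INTO OUTFILE '/tmp/my_table_copy.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';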

Like the others, this isn't a one-size-fits-all solution (especially given the OP's millions of rows), but even at 10^6 rows it takes only a few seconds to run and works.
