Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/5417386/

Date: 2020-08-31 19:16:24  Source: igfitidea

Import MySQL dump to PostgreSQL database

Tags: mysql, postgresql

Asked by Palani Kannan

How can I import an "xxxx.sql" dump from MySQL to a PostgreSQL database?

Accepted answer by Mike Sherrill 'Cat Recall'

Don't expect that to work without editing. Maybe a lot of editing.

mysqldump has a compatibility argument, --compatible=name, where "name" can be "oracle" or "postgresql", but that doesn't guarantee compatibility. I think server settings like ANSI_QUOTES have some effect, too.

You'll get more useful help here if you include the complete command you used to create the dump, along with any error messages you got, instead of just saying "Nothing worked for me."

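For reference, a dump invocation along these lines gets you the closest mysqldump comes to PostgreSQL-friendly output (user and database names here are placeholders, and per the above, the result will still need hand-editing):

```shell
# Plain-SQL dump with MySQL's PostgreSQL compatibility mode enabled.
# "dbuser" and "dbname" are placeholders; this does NOT guarantee a dump
# that PostgreSQL will accept as-is.
mysqldump --compatible=postgresql --default-character-set=utf8 \
  --skip-set-charset --skip-comments \
  -u dbuser -p dbname > dump.sql
```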

Answered by Rob Contreras

This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.

This is by far the easiest way of doing it, you need to install it, and then run a simple lisp script (script.lisp) with the following 3 lines:

/* content of the script.lisp */
LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname;


/*run this in the terminal*/
pgloader script.lisp

And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.

On a side note, make sure you compile pgloader since, at the time of this post, the installer has a bug (version 3.2.0).

Answered by cyber8200

Mac OS X

brew update && brew install pgloader

pgloader mysql://user@host/db_name postgresql://user@host/db_name

Answered by Wolph

The fastest (and most complete) way I found was to use Kettle. This will also generate the needed tables, convert the indexes and everything else. The mysqldump compatibility argument does not work.

The steps:

  1. Download Pentaho ETL from http://kettle.pentaho.org/ (community version)

  2. Unzip and run Pentaho (spoon.sh/spoon.bat depending on unix/windows)

  3. Create a new job

  4. Create a database connection for the MySQL source (Tools -> Wizard -> Create database connection)

  5. Create a database connection for the PostgreSQL source (as above)

  6. Run the Copy Tables wizard (Tools -> Wizard -> Copy Tables)

  7. Run the job


Answered by jibiel

For those Googlers who are in 2015+.
I've wasted all day on this and would like to sum things up.


I've tried all the solutions described in this article by Alexandru Cotioras (which is full of despair). Of all the solutions mentioned there, only one worked for me.

lanyrd/mysql-postgresql-converter @ github.com (Python)

But this alone won't do. When you import your newly converted dump file:

# \i ~/Downloads/mysql-postgresql-converter-master/dump.psql 

PostgreSQL will tell you about the messed-up types from MySQL:

psql:/Users/jibiel/Downloads/mysql-postgresql-converter-master/dump.psql:381: ERROR:  type "mediumint" does not exist
LINE 2:     "group_id" mediumint(8)  NOT NULL DEFAULT '0',

So you'll have to fix those types manually as per this table.

In short it is:

tinyint(2) -> smallint  
mediumint(7) -> integer
# etc.

You can use regex and any cool editor to get it done.

MacVim + Substitute:

:%s!tinyint(\w\+)!smallint!g
:%s!mediumint(\w\+)!integer!g
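If you'd rather script it than run substitutions in an editor, the same type fixes can be done with sed. This is a sketch only: the sample DDL fragment is mine, and you'd extend the `-e` expressions per the table above and run them over the real dump file:

```shell
# Rewrite MySQL-only integer types to their PostgreSQL equivalents.
# A sample fragment is inlined here just to show the effect.
printf '"group_id" mediumint(8) NOT NULL,\n"flag" tinyint(1) DEFAULT 0\n' |
sed -E \
  -e 's/tinyint\([0-9]+\)/smallint/g' \
  -e 's/mediumint\([0-9]+\)/integer/g'
```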

Answered by WhiteFang34

Answered by Nicolay77

I have this bash script to migrate the data. It doesn't create the tables because they are created by migration scripts, so I only need to convert the data. I use a list of the tables so as not to import data from the migrations and sessions tables. Here it is, just tested:

#!/bin/sh

MUSER="root"
MPASS="mysqlpassword"
MDB="origdb"
MTABLES="car dog cat"
PUSER="postgres"
PDB="destdb"

# Dump data only (no CREATE TABLE), in the most PostgreSQL-compatible form mysqldump offers
mysqldump -h 127.0.0.1 -P 6033 -u $MUSER -p$MPASS --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables $MDB $MTABLES > outputfile.sql

# Rewrite MySQL's "LOCK TABLES x WRITE; ... UNLOCK TABLES;" into "TRUNCATE x RESTART IDENTITY CASCADE;"
sed -i 's/UNLOCK TABLES;//g' outputfile.sql
sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
# MySQL's zero dates are not valid timestamps in PostgreSQL
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
# Let PostgreSQL accept MySQL-style backslash escapes in string literals
sed -i "1i SET standard_conforming_strings = 'off';\n" outputfile.sql
sed -i "1i SET backslash_quote = 'on';\n" outputfile.sql
# Temporarily allow implicit casts to boolean during the import, restore the default at the end
sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;\n" outputfile.sql
echo "\nupdate pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;\n" >> outputfile.sql

psql -h localhost -d $PDB -U $PUSER -f outputfile.sql

You will get a lot of warnings like this, which you can safely ignore:

psql:outputfile.sql:82: WARNING:  nonstandard use of escape in a string literal
LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
                                                         ^
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.

Answered by Frederik Witte

You can use pgloader.

sudo apt-get install pgloader

Using:

pgloader mysql://user:pass@host/database postgresql://user:pass@host/database

Answered by a_horse_with_no_name

It is not possible to import an Oracle (binary) dump to PostgreSQL.

If the MySQL dump is in plain SQL format, you will need to edit the file to make the syntax correct for PostgreSQL (e.g. remove the non-standard backtick quoting, remove the engine definition from the CREATE TABLE statements, adjust the data types, and a lot of other things).

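As a rough illustration of that editing (assumed sed expressions, not a complete converter), stripping the backticks and the ENGINE clause can look like this; the sample statement is inlined just to show the effect, and you'd run the same sed over the real dump file:

```shell
# Remove backtick quoting and the trailing ENGINE=... clause from a CREATE TABLE.
printf 'CREATE TABLE `users` (`id` int(11)) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n' |
sed -E -e 's/`//g' -e 's/\) ENGINE=[^;]*;/);/'
```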

Answered by anon

Here is a simple program to create and load all the tables of a MySQL database (honey) into PostgreSQL. The type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually:

import MySQLdb
import psycopg2
from magic import Connect  # private MySQL connection helper

dbx = Connect()
DB = psycopg2.connect("dbname='honey'")
DC = DB.cursor()

# Collect the table names from the MySQL database
dbx.execute('''show tables from honey''')
tables = [t[0] for t in dbx.fetchall()]

for table in tables:
    dbx.execute('''describe honey.%s''' % (table,))
    columns = dbx.fetchall()

    DC.execute('drop table if exists %s' % (table,))
    DB.commit()

    # Build the CREATE TABLE with a coarse-grained type mapping
    psql = 'create table %s (' % (table,)
    for column in columns:
        name, coltype = column[0], column[1]
        if 'int' in coltype: coltype = 'int8'
        if 'blob' in coltype: coltype = 'bytea'
        if 'datetime' in coltype: coltype = 'timestamptz'
        psql += '%s %s,' % (name, coltype)
    psql = psql.rstrip(',') + ')'
    print(psql)
    DC.execute(psql)
    DB.commit()

    # Copy the data row by row, committing every 1000 rows
    dbx.execute('''select * from honey.%s''' % (table,))
    rows = dbx.fetchall()
    n = len(rows); t = n
    print(n)
    if n == 0: continue  # skip if no data

    placeholders = ', '.join(['%s'] * len(rows[0]))
    insert = 'insert into %s values(%s)' % (table, placeholders)
    for row in rows:
        DC.execute(insert, row)
        n = n - 1
        if n % 1000 == 1:
            DB.commit()
            print(n, t, t - n)
    DB.commit()