Python basic pyodbc bulk insert
Note: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you reuse it, you must do so under the same CC BY-SA license, link to the original source, and attribute it to the original authors (not me): StackOverflow
原文地址: http://stackoverflow.com/questions/37008848/
basic pyodbc bulk insert
Asked by Zip184
In a python script, I need to run a query on one datasource and insert each row from that query into a table on a different datasource. I'd normally do this with a single insert/select statement with a tsql linked server join but I don't have a linked server connection to this particular datasource.
I'm having trouble finding a simple pyodbc example of this. Here's how I'd do it but I'm guessing executing an insert statement inside a loop is pretty slow.
result = ds1Cursor.execute(selectSql)
for row in result:
    insertSql = "insert into TableName (Col1, Col2, Col3) values (?, ?, ?)"
    ds2Cursor.execute(insertSql, row[0], row[1], row[2])
    ds2Cursor.commit()
Is there a better bulk way to insert records with pyodbc? Or is this a relatively efficient way to do it anyway? I'm using SQL Server 2012, and the latest pyodbc and Python versions.
Accepted answer by LegendaryDude
The best way to handle this is to use the pyodbc function executemany.
ds1Cursor.execute(selectSql)
result = ds1Cursor.fetchall()
ds2Cursor.executemany('INSERT INTO [TableName] (Col1, Col2, Col3) VALUES (?, ?, ?)', result)
ds2Cursor.commit()
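Note that fetchall() pulls the entire result set into memory before the insert begins. For larger copies, a variant along these lines (the batched_copy helper and its batch size are my own sketch, not part of the answer) streams rows in fixed-size chunks with fetchmany:

```python
def batched_copy(src_cur, dst_cur, select_sql, insert_sql, batch_size=1000):
    """Stream rows from one pyodbc cursor to another in executemany batches."""
    src_cur.execute(select_sql)
    while True:
        # fetchmany keeps memory flat no matter how large the result set is
        rows = src_cur.fetchmany(batch_size)
        if not rows:
            break
        dst_cur.executemany(insert_sql, rows)
```

With pyodbc 4.0.19 and later, setting dst_cur.fast_executemany = True before the copy lets executemany send each batch to SQL Server in a single round trip; commit on the destination connection afterwards, as in the answer above.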
Answered by Naufal
Here's a function that can do a bulk insert into a SQL Server database.
import pyodbc
import contextlib

def bulk_insert(table_name, file_path):
    # The options go inside the parentheses: WITH (FORMAT = 'CSV').
    # FORMAT = 'CSV' requires SQL Server 2017 or later.
    string = "BULK INSERT {} FROM '{}' WITH (FORMAT = 'CSV');"
    with contextlib.closing(pyodbc.connect("MYCONN")) as conn:
        with contextlib.closing(conn.cursor()) as cursor:
            cursor.execute(string.format(table_name, file_path))
            conn.commit()
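BULK INSERT reads file_path from the database server's own filesystem, so the rows from the first data source must first be staged as a CSV file the server can reach. A minimal sketch using the standard-library csv module (the helper name is hypothetical):

```python
import csv

def write_rows_to_csv(rows, file_path):
    """Stage query results (e.g. from cursor.fetchall()) as a CSV file."""
    # newline="" prevents the csv module's \r\n terminators being doubled
    with open(file_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerows(rows)
```

Rows fetched with fetchall() can be passed straight in, e.g. write_rows_to_csv(ds1Cursor.fetchall(), file_path), before calling bulk_insert.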
This definitely works.
UPDATE: I've noticed in the comments, as well as from coding regularly, that pyodbc is better supported than pypyodbc.
NEW UPDATE: removed conn.close(), since the with statement handles that automatically.