psycopg2 insert python dictionary as json
Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/31796332/
Asked by Rorschach
I want to insert a python dictionary as a json into my postgresql database (via python and psycopg2).
I have:
thedictionary = {'price money': '', 'name': 'Google', 'color': '', 'imgurl': 'http://www.google.com/images/nav_logo225.png', 'charateristics': 'No Description', 'store': 'google'}
cur.execute("INSERT INTO product(store_id, url, price, charecteristics, color, dimensions) VALUES (%d, %s, %s, %d, %s, %s)", (1, 'http://www.google.com', '', thedictionary, 'red', '8.5x11'))
And it gives the error message:
cur.execute("INSERT INTO product(store_id, url, price, charecteristics, color, dimensions) VALUES (%d, %s, %s, %d, %s, %s)", (1, 'http://www.google.com', '$20', thedictionary, 'red', '8.5x11')) psycopg2.ProgrammingError: can't adapt type 'dict'
I am not sure how to proceed from here. I cannot find anything on the internet about how to do this exact kind of thing and I am very new to psycopg2.
Accepted answer by hd1
import json

cur.execute("INSERT INTO product(store_id, url, price, charecteristics, color, dimensions) VALUES (%s, %s, %s, %s, %s, %s)", (1, 'http://www.google.com', '', json.dumps(thedictionary), 'red', '8.5x11'))
That will solve your problem. However, you really should be storing keys and values in their own separate columns. To retrieve the dictionary, do:
cur.execute('select charecteristics from product where store_id = 1')
dictionary = json.loads(cur.fetchone()[0])
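A minimal end-to-end sketch of this approach, assuming the product table already exists, with placeholder connection parameters and thedictionary as defined in the question:

import json
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder connection parameters
cur = conn.cursor()

cur.execute(
    "INSERT INTO product (store_id, url, price, charecteristics, color, dimensions) "
    "VALUES (%s, %s, %s, %s, %s, %s)",
    (1, 'http://www.google.com', '', json.dumps(thedictionary), 'red', '8.5x11'),
)
conn.commit()

# read the stored JSON text back and decode it into a dict
cur.execute("SELECT charecteristics FROM product WHERE store_id = 1")
dictionary = json.loads(cur.fetchone()[0])
# if charecteristics is a json/jsonb column, psycopg2 already returns a dict
# and the json.loads step is not needed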
Answered by l mingzhi
You can use psycopg2.extras.Json to convert a dict to the json type that PostgreSQL accepts.
from psycopg2.extras import Json

thedictionary = {'price money': '', 'name': 'Google', 'color': '',
                 'imgurl': 'http://www.google.com/images/nav_logo225.png',
                 'charateristics': 'No Description', 'store': 'google'}

item = {
    "store_id": 1,
    "url": 'http://www.google.com',
    "price": '',
    "charecteristics": Json(thedictionary),
    "color": 'red',
    "dimensions": '8.5x11'
}

def sql_insert(tableName, data_dict):
    '''
    INSERT INTO product (store_id, url, price, charecteristics, color, dimensions)
    VALUES (%(store_id)s, %(url)s, %(price)s, %(charecteristics)s, %(color)s, %(dimensions)s );
    '''
    sql = '''
    INSERT INTO %s (%s)
    VALUES (%%(%s)s );
    ''' % (tableName, ', '.join(data_dict), ')s, %('.join(data_dict))
    return sql

tableName = 'product'
sql = sql_insert(tableName, item)
cur.execute(sql, item)
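For reference, the statement built from the item above matches the one shown in the docstring (up to whitespace), assuming Python 3.7+ where dicts preserve insertion order:

print(sql_insert('product', item))
# INSERT INTO product (store_id, url, price, charecteristics, color, dimensions)
# VALUES (%(store_id)s, %(url)s, %(price)s, %(charecteristics)s, %(color)s, %(dimensions)s );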
For more information, you can see the official documentation.
class psycopg2.extras.Json(adapted, dumps=None)
An ISQLQuote wrapper to adapt a Python object to json data type.
Json can be used to wrap any object supported by the provided dumps function. If none is provided, the standard json.dumps() is used (simplejson for Python < 2.6; getquoted() will raise ImportError if the module is not available).
dumps(obj)
Serialize obj in JSON format.
The default is to call json.dumps() or the dumps function provided in the constructor. You can override this method to create a customized JSON wrapper.
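As a sketch of that last point, a subclass can swap in its own serializer; the class name and the sort_keys option here are only illustrative:

import json
from psycopg2.extras import Json

class SortedJson(Json):
    def dumps(self, obj):
        # serialize with sorted keys instead of the default json.dumps(obj)
        return json.dumps(obj, sort_keys=True)

# usable anywhere Json is, e.g. cur.execute(..., (SortedJson(thedictionary),))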
Answered by Felipe Augusto
From the psycopg docs:
Note You can use register_adapter() to adapt any Python dictionary to JSON, either registering Json or any subclass or factory creating a compatible adapter:
psycopg2.extensions.register_adapter(dict, psycopg2.extras.Json)
This setting is global though, so it is not compatible with similar adapters such as the one registered by register_hstore(). Any other object supported by JSON can be registered the same way, but this will clobber the default adaptation rule, so be careful to unwanted side effects.
So, in my case what I did was:
from psycopg2.extensions import register_adapter
from psycopg2.extras import Json

register_adapter(dict, Json)
It worked like a charm.
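With that adapter registered, the insert from the question should work with the dict passed through as-is; a sketch reusing the cursor and thedictionary defined above:

cur.execute(
    "INSERT INTO product (store_id, url, price, charecteristics, color, dimensions) "
    "VALUES (%s, %s, %s, %s, %s, %s)",
    (1, 'http://www.google.com', '', thedictionary, 'red', '8.5x11'),
)
# psycopg2 now adapts the plain dict through the registered Json adapter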
Answered by Aaron Melgar
Is there a particular reason you want to have each key as its own column? Postgres lets you perform direct query operations within a single column containing valid JSON or JSONB.
This means you can simply create a 2 column DB with ID (primary key) and metadata and then perform queries such as:
SELECT * FROM users WHERE metadata @> '{"key": "value"}';
Here is a good resource for you.
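A minimal sketch of that two-column layout through psycopg2; the users table and its columns are illustrative, and cur is a cursor as in the earlier answers:

import json

cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id serial PRIMARY KEY,
        metadata jsonb
    )
""")
cur.execute("INSERT INTO users (metadata) VALUES (%s)",
            (json.dumps({'key': 'value'}),))
# containment query: rows whose metadata includes the given key/value pair
cur.execute("SELECT * FROM users WHERE metadata @> %s::jsonb",
            (json.dumps({'key': 'value'}),))
rows = cur.fetchall()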
Answered by l mingzhi
Just convert the dict type to a JSON string, using json.dumps(adict).
import pandas as pd
import json
import psycopg2
from sqlalchemy import create_engine

# connection string is a placeholder; substitute your own credentials and host
engine_nf = create_engine('postgresql+psycopg2://user:password@host:5432/database')

sql_read = lambda sql: pd.read_sql(sql, engine_nf)
sql_execute = lambda sql: pd.io.sql.execute(sql, engine_nf)

sql = '''
CREATE TABLE if not exists product (
    store_id int
    , url text
    , price text
    , charecteristics json
    , color text
    , dimensions text
)
'''
_ = sql_execute(sql)

thedictionary = {'price money': '', 'name': 'Google',
                 'color': '', 'imgurl': 'http://www.google.com/images/nav_logo225.png',
                 'charateristics': 'No Description',
                 'store': 'google'}

sql = '''
INSERT INTO product(store_id, url, price, charecteristics, color, dimensions)
VALUES (%d, '%s', '%s', '%s', '%s', '%s')
''' % (1, 'http://www.google.com', '',
       json.dumps(thedictionary), 'red', '8.5x11')
sql_execute(sql)

sql = '''
select *
from product
'''
df = sql_read(sql)
df
#    store_id                    url  price                                   charecteristics  color dimensions
# 0         1  http://www.google.com         {'price money': '', 'name': 'Google', 'color...    red     8.5x11

charecteristics = df['charecteristics'].iloc[0]
type(charecteristics)
# dict
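Note that %d and '%s' here are plain Python string formatting applied before the query ever reaches the database, so the JSON is embedded as a quoted literal rather than passed as a query parameter; that is why the dict does not trigger the "can't adapt type 'dict'" error in this variant, at the cost of building the SQL string by hand.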
In fact, I like another way to dump data to postgres.
import io
import csv

def df2db(df_a, table_name, engine):
    output = io.StringIO()
    # ignore the index
    df_a.to_csv(output, sep='\t', index=False, header=False, quoting=csv.QUOTE_NONE)
    output.getvalue()
    # jump to start of stream
    output.seek(0)
    # engine ---- from sqlalchemy import create_engine
    connection = engine.raw_connection()
    cursor = connection.cursor()
    # null value become ''
    cursor.copy_from(output, table_name, null='')
    connection.commit()
    cursor.close()
df = sql_read('select * from product')
type(df.charecteristics.iloc[0])
df.charecteristics = df.charecteristics.map(json.dumps)
# dump pandas DataFrame to postgres
df2db(df, 'product', engine_nf)
df_end = sql_read('select * from product')
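copy_from streams plain tab-separated text, which is why charecteristics is passed through json.dumps first; the json column parses the serialized strings again, so df_end.charecteristics should come back as dicts, just as before the copy.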