Python Pandas df.to_csv("file.csv" encode="utf-8") still gives trash characters for minus sign

Note: The question and answer below are from StackOverflow and are provided under the CC BY-SA 4.0 license. If you use or share this content, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/25788037/


Pandas df.to_csv("file.csv" encode="utf-8") still gives trash characters for minus sign

Tags: python, csv, utf-8, pandas

Asked by Maggie

I've read something about a Python 2 limitation with respect to Pandas' to_csv( ... etc ...). Have I hit it? I'm on Python 2.7.3


This produces trash characters for ≥ and - when they appear in strings. Aside from that, the export is perfect.


df.to_csv("file.csv", encoding="utf-8") 

Is there any workaround?


df.head() is this:


demography  Adults ≥49 yrs  Adults 18?49 yrs at high risk||  \
state                                                           
Alabama                 32.7                             38.6   
Alaska                  31.2                             33.2   
Arizona                 22.9                             38.8   
Arkansas                31.2                             34.0   
California              29.8                             38.8  

The csv output is this:


state,  Adults a‰¥49 yrs,   Adults 18a?'49 yrs at high risk||
0,  Alabama,    32.7,   38.6
1,  Alaska, 31.2,   33.2
2,  Arizona,    22.9,   38.8
3,  Arkansas,31.2,  34
4,  California,29.8, 38.8

The whole code is this:


import pandas
import xlrd
import csv
import json

df = pandas.DataFrame()
dy = pandas.DataFrame()
# first merge all this xls together


workbook = xlrd.open_workbook('csv_merger/vaccoverage.xls')
worksheets = workbook.sheet_names()


for i in range(3,len(worksheets)):
    dy = pandas.io.excel.read_excel(workbook, i, engine='xlrd', index=None)
    i = i+1
    df = df.append(dy)

df.index.name = "index"

df.columns = ['demography', 'area','state', 'month', 'rate', 'moe']

#Then just grab month = 'May'

may_mask = df['month'] == "May"
may_df = (df[may_mask])

#then delete some columns we dont need

may_df = may_df.drop('area', 1)
may_df = may_df.drop('month', 1)
may_df = may_df.drop('moe', 1)


print may_df.dtypes #uh oh, it sees 'rate' as type 'object', not 'float'.  Better change that.

may_df = may_df.convert_objects('rate', convert_numeric=True)

print may_df.dtypes #that's better

res = may_df.pivot_table('rate', 'state', 'demography')
print res.head()


#and this is going to spit out an array of Objects, each Object a state containing its demographics
res.reset_index().to_json("thejson.json", orient='records')
#and a .csv for good measure
res.reset_index().to_csv("thecsv.csv", orient='records', encoding="utf-8")
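
A side note for anyone re-running this code on a current pandas release: DataFrame.append and convert_objects have since been removed. A minimal sketch of the same pipeline using their modern replacements (assuming the same sheet layout and column order as above) would be:

import pandas as pd

# read every sheet after the first three and stack them, as the original loop does
xls = pd.ExcelFile('csv_merger/vaccoverage.xls')
frames = [xls.parse(name) for name in xls.sheet_names[3:]]
df = pd.concat(frames, ignore_index=True)
df.columns = ['demography', 'area', 'state', 'month', 'rate', 'moe']

# keep only the May rows and drop the columns we don't need
may_df = df[df['month'] == 'May'].drop(columns=['area', 'month', 'moe'])

# 'rate' comes in as object; coerce it to numeric (replaces convert_objects)
may_df['rate'] = pd.to_numeric(may_df['rate'], errors='coerce')

res = may_df.pivot_table('rate', 'state', 'demography')
res.reset_index().to_json('thejson.json', orient='records')
res.reset_index().to_csv('thecsv.csv', index=False, encoding='utf-8')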

Answered by Mark Tolonen

Your "bad" output is UTF-8 displayed as CP1252.

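As an illustration (a minimal sketch, Python 3 shown): encoding one of the affected headers as UTF-8 and then decoding those bytes as cp1252 reproduces exactly this kind of garbage.

# UTF-8 bytes misread as cp1252 turn '≥' into 'â‰¥'
header = "Adults ≥49 yrs"
print(header.encode("utf-8").decode("cp1252"))  # Adults â‰¥49 yrs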

On Windows, many editors assume the default ANSI encoding (CP1252 on US Windows) instead of UTF-8 if there is no byte order mark (BOM) character at the start of the file. While a BOM is meaningless to the UTF-8 encoding, its UTF-8-encoded presence serves as a signature for some programs. For example, Microsoft Office's Excel requires it even on non-Windows OSes. Try:


df.to_csv('file.csv', encoding='utf-8-sig')

That encoder will add the BOM.

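To confirm that the signature is actually written, here is a minimal sketch with a made-up DataFrame:

import pandas as pd

# 'utf-8-sig' prepends the UTF-8 BOM, which Excel uses to detect UTF-8
df = pd.DataFrame({"demography": ["Adults ≥49 yrs"], "rate": [32.7]})
df.to_csv("file.csv", index=False, encoding="utf-8-sig")

with open("file.csv", "rb") as f:
    print(f.read(3))  # b'\xef\xbb\xbf' -- the BOM signature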