Python: Assign pandas DataFrame column dtypes
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must follow the same CC BY-SA license, cite the original address, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/21197774/
Assign pandas dataframe column dtypes
Asked by hatmatrix
I want to set the dtypes of multiple columns in pd.DataFrame (I have a file that I've had to manually parse into a list of lists, as the file was not amenable to pd.read_csv):
import pandas as pd
print pd.DataFrame([['a','1'],['b','2']],
                   dtype={'x':'object','y':'int'},
                   columns=['x','y'])
I get
ValueError: entry not a 2- or 3- tuple
The only way I can set them is by looping through each column variable and recasting with astype.
dtypes = {'x':'object','y':'int'}
mydata = pd.DataFrame([['a','1'],['b','2']],
                      columns=['x','y'])
for c in mydata.columns:
    mydata[c] = mydata[c].astype(dtypes[c])
print mydata['y'].dtype #=> int64
Is there a better way?
Accepted answer by Andy Hayden
Since 0.17, you have to use the explicit conversions:
pd.to_datetime, pd.to_timedelta and pd.to_numeric
(As mentioned below, no more "magic"; convert_objects has been deprecated in 0.17.)
df = pd.DataFrame({'x': {0: 'a', 1: 'b'}, 'y': {0: '1', 1: '2'}, 'z': {0: '2018-05-01', 1: '2018-05-02'}})
df.dtypes
x object
y object
z object
dtype: object
df
x y z
0 a 1 2018-05-01
1 b 2 2018-05-02
You can apply these to each column you want to convert:
df["y"] = pd.to_numeric(df["y"])
df["z"] = pd.to_datetime(df["z"])
df
x y z
0 a 1 2018-05-01
1 b 2 2018-05-02
df.dtypes
x object
y int64
z datetime64[ns]
dtype: object
and confirm the dtype is updated.
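If there are many columns to convert, one possible approach (a sketch, not part of the original answer) is to keep a small mapping from column name to converter and apply it in a loop:

# assumed continuation of the example above: 'y' should be numeric, 'z' a datetime
converters = {"y": pd.to_numeric, "z": pd.to_datetime}
for col, func in converters.items():
    df[col] = func(df[col])
df.dtypes  # y -> int64, z -> datetime64[ns]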
OLD/DEPRECATED ANSWER for pandas 0.12 - 0.16: You can use convert_objects to infer better dtypes:
In [21]: df
Out[21]:
x y
0 a 1
1 b 2
In [22]: df.dtypes
Out[22]:
x object
y object
dtype: object
In [23]: df.convert_objects(convert_numeric=True)
Out[23]:
x y
0 a 1
1 b 2
In [24]: df.convert_objects(convert_numeric=True).dtypes
Out[24]:
x object
y int64
dtype: object
Magic! (Sad to see it deprecated.)
Answer by Hyman Yates
For those coming from Google (etc.) such as myself:
convert_objects has been deprecated since 0.17 - if you use it, you get a warning like this one:
FutureWarning: convert_objects is deprecated. Use the data-type specific converters
pd.to_datetime, pd.to_timedelta and pd.to_numeric.
You should do something like the following:
df = df.astype(np.float)
df["A"] = pd.to_numeric(df["A"])
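A further note (not in the original answer): these converters also take an errors argument, so something like errors='coerce' should turn unparseable values into NaN/NaT instead of raising:

import pandas as pd
pd.to_numeric(pd.Series(['1', '2', 'oops']), errors='coerce')      # 1.0, 2.0, NaN
pd.to_datetime(pd.Series(['2018-05-01', 'bad']), errors='coerce')  # 2018-05-01, NaT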
Answer by Kaushik Ghose
Another way to set the column types is to first construct a numpy record array with your desired types, fill it out, and then pass it to the DataFrame constructor.
import pandas as pd
import numpy as np
x = np.empty((10,), dtype=[('x', np.uint8), ('y', np.float64)])
df = pd.DataFrame(x)
df.dtypes ->
x uint8
y float64
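A minimal sketch of the "fill it out" step mentioned above (the values are made up for illustration):

x = np.empty((3,), dtype=[('x', np.uint8), ('y', np.float64)])
x['x'] = [1, 2, 3]        # fill the uint8 field
x['y'] = [0.1, 0.2, 0.3]  # fill the float64 field
df = pd.DataFrame(x)
df.dtypes  # x -> uint8, y -> float64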
Answer by Julian C
Facing a similar problem to you. In my case I have 1000s of files from Cisco logs that I need to parse manually.
In order to be flexible with fields and types, I have successfully tested using StringIO + read_csv, which indeed does accept a dict for the dtype specification.
I usually get each of the files (5k-20k lines) into a buffer and create the dtype dictionaries dynamically.
Eventually I concatenate (with categoricals... thanks to 0.19) these dataframes into a large data frame that I dump into hdf5.
Something along these lines:
import pandas as pd
import io
output = io.StringIO()
output.write('A,1,20,31\n')
output.write('B,2,21,32\n')
output.write('C,3,22,33\n')
output.write('D,4,23,34\n')
output.write('E,5,24,35\n')
output.seek(0)
df = pd.read_csv(output, header=None,
                 names=["A","B","C","D"],
                 dtype={"A":"category","B":"float32","C":"int32","D":"float64"},
                 sep=",")
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 4 columns):
A 5 non-null category
B 5 non-null float32
C 5 non-null int32
D 5 non-null float64
dtypes: category(1), float32(1), float64(1), int32(1)
memory usage: 205.0 bytes
None
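A hedged sketch of the concatenation and HDF5 dump mentioned above, reusing the df from this snippet (the output file name and key are made up, and to_hdf needs the 'tables' package installed):

big_df = pd.concat([df, df], ignore_index=True)  # stand-in for concatenating many parsed files
big_df.to_hdf('cisco_logs.h5', key='logs', mode='w')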
Not very Pythonic... but it does the job.
Hope it helps.
JC
Answer by Lauren
You can set the types explicitly with pandas DataFrame.astype(dtype, copy=True, raise_on_error=True, **kwargs) and pass in a dictionary with the dtypes you want.
Here's an example:
import pandas as pd
wheel_number = 5
car_name = 'jeep'
minutes_spent = 4.5
# set the columns
data_columns = ['wheel_number', 'car_name', 'minutes_spent']
# create an empty dataframe
data_df = pd.DataFrame(columns = data_columns)
df_temp = pd.DataFrame([[wheel_number, car_name, minutes_spent]],columns = data_columns)
data_df = data_df.append(df_temp, ignore_index=True)
In [11]: data_df.dtypes
Out[11]:
wheel_number float64
car_name object
minutes_spent float64
dtype: object
data_df = data_df.astype(dtype={"wheel_number": "int64",
                                "car_name": "object",
                                "minutes_spent": "float64"})
Now you can see that it's changed:
In [18]: data_df.dtypes
Out[18]:
wheel_number int64
car_name object
minutes_spent float64
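Note (not from the original answer): in newer pandas versions the raise_on_error keyword was replaced by errors, so the equivalent call would presumably look like this:

data_df = data_df.astype({"wheel_number": "int64",
                          "car_name": "object",
                          "minutes_spent": "float64"},
                         errors="raise")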
Answer by Clem Wang
You're better off using typed np.arrays, and then passing the data and column names as a dictionary.
import numpy as np
import pandas as pd
# Feature: np arrays are 1: efficient, 2: can be pre-sized
x = np.array(['a', 'b'], dtype=object)
y = np.array([ 1 , 2 ], dtype=np.int32)
df = pd.DataFrame({
'x' : x, # Feature: column name is near data array
'y' : y,
}
)
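As a quick check (not part of the original answer), the resulting column dtypes come straight from the arrays:

df.dtypes
# x    object
# y     int32
# dtype: object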

