Original question: http://stackoverflow.com/questions/53947196/
Warning: this translation is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverFlow
Groupby class and count missing values in features
Asked by FlixRo
I have a problem and I cannot find any solution on the web or in the documentation, even though I think it is very trivial.
What do I want to do?
I have a dataframe like this:
CLASS FEATURE1 FEATURE2 FEATURE3
X A NaN NaN
X NaN A NaN
B A A A
I want to group by the label (CLASS) and display the number of NaN values counted in every feature, so that it looks like this. The purpose is to get a general idea of how missing values are distributed over the different classes.
CLASS FEATURE1 FEATURE2 FEATURE3
X 1 1 2
B 0 0 0
I know how to get the number of non-null values: df.groupby('CLASS').count()
Is there something similar for NaN values?
I tried to subtract the count() from the size(), but it returned an unformatted output filled with NaN values.
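For reference, the example frame above can be rebuilt as follows (a minimal sketch; `np.nan` stands in for the missing cells shown in the table):

```python
import numpy as np
import pandas as pd

# Rebuild the example frame from the question; NaN marks missing values.
df = pd.DataFrame({
    'CLASS':    ['X', 'X', 'B'],
    'FEATURE1': ['A', np.nan, 'A'],
    'FEATURE2': [np.nan, 'A', 'A'],
    'FEATURE3': [np.nan, np.nan, 'A'],
})

# count() tallies only the non-null cells per feature within each class.
nonnull = df.groupby('CLASS').count()
```

On this frame, `nonnull` reproduces the non-null counts the question mentions; the answers below invert that into missing-value counts.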
Accepted answer by cs95
Compute a mask with isna, then group and find the sum:
df.drop('CLASS', axis=1).isna().groupby(df.CLASS, sort=False).sum().reset_index()
CLASS FEATURE1 FEATURE2 FEATURE3
0 X 1.0 1.0 2.0
1 B 0.0 0.0 0.0
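With the setup included, the mask-then-sum approach is runnable end to end (a sketch using the question's example frame; `reset_index` is left out here so the class labels stay in the index):

```python
import numpy as np
import pandas as pd

# The question's example frame (NaN marks missing values).
df = pd.DataFrame({
    'CLASS':    ['X', 'X', 'B'],
    'FEATURE1': ['A', np.nan, 'A'],
    'FEATURE2': [np.nan, 'A', 'A'],
    'FEATURE3': [np.nan, np.nan, 'A'],
})

# Boolean mask of missing cells, grouped by the CLASS column of the
# original frame; summing booleans counts the True entries per class.
out = df.drop(columns='CLASS').isna().groupby(df.CLASS, sort=False).sum()
```

Grouping the mask by `df.CLASS` works even though the column was dropped, because the Series is aligned with the mask by index.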
Another option is to subtract the size from the count using rsub along the 0th axis for index-aligned subtraction:
df.groupby('CLASS').count().rsub(df.groupby('CLASS').size(), axis=0)
Or,
g = df.groupby('CLASS')
g.count().rsub(g.size(), axis=0)
FEATURE1 FEATURE2 FEATURE3
CLASS
B 0 0 0
X 1 1 2
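Put together with the setup, the count/size subtraction also keeps an integer dtype, unlike the float output shown for the mask approach above (a sketch on the question's example frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'CLASS':    ['X', 'X', 'B'],
    'FEATURE1': ['A', np.nan, 'A'],
    'FEATURE2': [np.nan, 'A', 'A'],
    'FEATURE3': [np.nan, np.nan, 'A'],
})

g = df.groupby('CLASS')
# size() counts all rows per class; count() counts only non-null cells.
# rsub computes size - count column by column, aligned on the group index.
out = g.count().rsub(g.size(), axis=0)
```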
There are quite a few good answers, so here are some timeits for your perusal:
df_ = df
df = pd.concat([df_] * 10000)
%timeit df.drop('CLASS', axis=1).isna().groupby(df.CLASS, sort=False).sum()
%timeit df.set_index('CLASS').isna().sum(level=0)
%%timeit
g = df.groupby('CLASS')
g.count().rsub(g.size(), axis=0)
11.8 ms ± 108 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
9.47 ms ± 379 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
6.54 ms ± 81.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Actual performance depends on your data and setup, so your mileage may vary.
Answered by Scott Boston
You can use set_index and sum:
df.set_index('CLASS').isna().sum(level=0)
Output:
FEATURE1 FEATURE2 FEATURE3
CLASS
X 1.0 1.0 2.0
B 0.0 0.0 0.0
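One caveat for newer pandas: the `level=` argument to `sum` was deprecated and later removed (in the 2.x releases), so on current versions the same idea is spelled with an explicit groupby on the index level (a sketch, assuming the question's example frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'CLASS':    ['X', 'X', 'B'],
    'FEATURE1': ['A', np.nan, 'A'],
    'FEATURE2': [np.nan, 'A', 'A'],
    'FEATURE3': [np.nan, np.nan, 'A'],
})

# Modern equivalent of df.set_index('CLASS').isna().sum(level=0):
# group the boolean mask on the index level, then sum per class.
out = df.set_index('CLASS').isna().groupby(level=0).sum()
```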
Answered by YOBEN_S
Using the diff between count and size:
g=df.groupby('CLASS')
-g.count().sub(g.size(), axis=0)
FEATURE1 FEATURE2 FEATURE3
CLASS
B 0 0 0
X 1 1 2
And we can transform this into the more generic question of how to count NaN values in a dataframe using a for loop:
pd.DataFrame({x: y.isna().sum() for x, y in g}).T.drop('CLASS', axis=1)
Out[468]:
FEATURE1 FEATURE2 FEATURE3
B 0 0 0
X 1 1 2
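The loop variant above can also be written self-contained: each group contributes one isna().sum() Series, the dict keys become columns, and a transpose puts the classes back on the rows (a sketch on the question's example frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'CLASS':    ['X', 'X', 'B'],
    'FEATURE1': ['A', np.nan, 'A'],
    'FEATURE2': [np.nan, 'A', 'A'],
    'FEATURE3': [np.nan, np.nan, 'A'],
})

g = df.groupby('CLASS')
# One NaN-count Series per group; dict keys become columns, so transpose
# to get classes as rows, then drop the (all-zero) CLASS column.
out = pd.DataFrame({name: grp.isna().sum() for name, grp in g}).T
out = out.drop(columns='CLASS')
```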