Disclaimer: this page is a translated copy of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/39658574/
How to drop columns which have same values in all rows via pandas or spark dataframe?
Asked by CYAN CEVI
Suppose I've got data similar to the following:
index   id  name   value  value2  value3  data1  val5
    0  345  name1      1      99      23      3    66
    1   12  name2      1      99      23      2    66
    5    2  name6      1      99      23      7    66
How can we drop all those columns (value, value2, value3) where all rows have the same values, in one command or a couple of commands, using python?
Consider that we have many columns similar to value, value2, value3 ... value200.
Output:
index   id  name   data1
    0  345  name1      3
    1   12  name2      2
    5    2  name6      7
Answered by EdChum
What we can do is apply nunique to count the number of unique values in the df, and then drop the columns which only have a single unique value:
In [285]:
nunique = df.apply(pd.Series.nunique)
cols_to_drop = nunique[nunique == 1].index
df.drop(cols_to_drop, axis=1)
Out[285]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
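For reference, the snippet above can be run end to end; the DataFrame below reconstructs the sample data from the question (a self-contained sketch, not part of the original answer):

```python
import pandas as pd

# Sample data reconstructed from the question.
df = pd.DataFrame({
    'index': [0, 1, 5],
    'id': [345, 12, 2],
    'name': ['name1', 'name2', 'name6'],
    'value': [1, 1, 1],
    'value2': [99, 99, 99],
    'value3': [23, 23, 23],
    'data1': [3, 2, 7],
    'val5': [66, 66, 66],
})

# Count distinct values per column; a constant column has exactly one.
nunique = df.apply(pd.Series.nunique)
cols_to_drop = nunique[nunique == 1].index
result = df.drop(cols_to_drop, axis=1)

# result keeps only the non-constant columns: index, id, name, data1.
print(result.columns.tolist())
```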
Another way is to just diff the numeric columns, take the abs values, and sum them:
In [298]:
cols = df.select_dtypes([np.number]).columns
diff = df[cols].diff().abs().sum()
df.drop(diff[diff== 0].index, axis=1)
Out[298]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
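Note that this variant only inspects numeric columns, so a constant non-numeric column would survive it. A small sketch illustrating the limitation (sample data reconstructed from the question, plus a hypothetical constant string column `tag` added only for illustration):

```python
import numpy as np
import pandas as pd

# Sample data from the question, plus a constant *string* column.
# 'tag' is hypothetical, added only to show the numeric-only limitation.
df = pd.DataFrame({
    'index': [0, 1, 5],
    'id': [345, 12, 2],
    'name': ['name1', 'name2', 'name6'],
    'value': [1, 1, 1],
    'data1': [3, 2, 7],
    'tag': ['x', 'x', 'x'],
})

# diff/abs/sum only sees numeric columns.
cols = df.select_dtypes([np.number]).columns
diff = df[cols].diff().abs().sum()
result = df.drop(diff[diff == 0].index, axis=1)

# 'value' is dropped, but the constant 'tag' column survives
# because it is not numeric.
print(result.columns.tolist())  # ['index', 'id', 'name', 'data1', 'tag']
```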
Another approach is to use the property that the standard deviation will be zero for a column with the same value:
In [300]:
cols = df.select_dtypes([np.number]).columns
std = df[cols].std()
cols_to_drop = std[std==0].index
df.drop(cols_to_drop, axis=1)
Out[300]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
Actually the above can be done in a one-liner:
In [306]:
df.drop(df.std()[(df.std() == 0)].index, axis=1)
Out[306]:
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
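Two caveats on the one-liner, based on later pandas behaviour rather than on the original 2016 answer: on recent pandas versions `df.std()` raises on non-numeric columns unless `numeric_only=True` is passed, and an equivalent `nunique`-based one-liner also covers non-numeric constant columns. A hedged sketch of both:

```python
import pandas as pd

# Sample data reconstructed from the question.
df = pd.DataFrame({
    'index': [0, 1, 5],
    'id': [345, 12, 2],
    'name': ['name1', 'name2', 'name6'],
    'value': [1, 1, 1],
    'val5': [66, 66, 66],
    'data1': [3, 2, 7],
})

# std-based one-liner, restricted to numeric columns explicitly.
std = df.std(numeric_only=True)
result_std = df.drop(std[std == 0].index, axis=1)

# nunique-based one-liner: keeps any column with more than one
# distinct value, regardless of dtype.
result_nunique = df.loc[:, df.nunique() > 1]

print(result_std.columns.tolist())      # ['index', 'id', 'name', 'data1']
print(result_nunique.columns.tolist())  # ['index', 'id', 'name', 'data1']
```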
Answered by jezrael
Another solution is to set_index from the columns which are not being compared, then compare the first row (selected by iloc) against the whole DataFrame with eq, and finally use boolean indexing:
df1 = df.set_index(['index','id','name',])
print (~df1.eq(df1.iloc[0]).all())
value False
value2 False
value3 False
data1 True
val5 False
dtype: bool
print (df1.loc[:, (~df1.eq(df1.iloc[0]).all())].reset_index())
   index   id   name  data1
0      0  345  name1      3
1      1   12  name2      2
2      5    2  name6      7
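A self-contained version of this answer (sample data reconstructed from the question; `.loc` is used because `.ix` has since been removed from pandas):

```python
import pandas as pd

# Sample data reconstructed from the question.
df = pd.DataFrame({
    'index': [0, 1, 5],
    'id': [345, 12, 2],
    'name': ['name1', 'name2', 'name6'],
    'value': [1, 1, 1],
    'value2': [99, 99, 99],
    'data1': [3, 2, 7],
})

# Move the columns that should never be dropped into the index, then
# keep only the columns where some row differs from the first row.
df1 = df.set_index(['index', 'id', 'name'])
mask = ~df1.eq(df1.iloc[0]).all()
result = df1.loc[:, mask].reset_index()

print(result.columns.tolist())  # ['index', 'id', 'name', 'data1']
```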