Python: count the frequency that a value occurs in a dataframe column

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute the original authors (not me). Original: http://stackoverflow.com/questions/22391433/

Time: 2020-08-19 00:51:06  Source: igfitidea

count the frequency that a value occurs in a dataframe column

python, pandas

Asked by yoshiserry

I have a dataset

|category|
cat a
cat b
cat a

I'd like to be able to return something like (showing unique values and frequency)

category | freq |
cat a       2
cat b       1

Accepted answer by EdChum

Use groupby and count:

In [37]:
df = pd.DataFrame({'a':list('abssbab')})
df.groupby('a').count()

Out[37]:

   a
a   
a  2
b  3
s  2

[3 rows x 1 columns]

See the online docs: http://pandas.pydata.org/pandas-docs/stable/groupby.html

Also value_counts(), as @DSM has commented; many ways to skin a cat here:

In [38]:
df['a'].value_counts()

Out[38]:

b    3
a    2
s    2
dtype: int64

If you wanted to add the frequency back to the original dataframe, use transform to return an aligned index:

In [41]:
df['freq'] = df.groupby('a')['a'].transform('count')
df

Out[41]:

   a freq
0  a    2
1  b    3
2  s    2
3  s    2
4  b    3
5  a    2
6  b    3

[7 rows x 2 columns]
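
To see why transform is convenient here, the same aligned column can also be built by hand with value_counts() and map(); this is just a sketch, and the freq_via_map name is made up for illustration:

```python
import pandas as pd

# same toy data as above
df = pd.DataFrame({'a': list('abssbab')})

# transform('count') returns a Series aligned with df's index,
# so it can be assigned straight back as a new column
df['freq'] = df.groupby('a')['a'].transform('count')

# the same aligned column, built manually with value_counts() + map()
df['freq_via_map'] = df['a'].map(df['a'].value_counts())
```

Both columns agree row for row; transform simply does the index alignment in one step.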

Answer by Shankar ARUL - jupyterdata.com

Using list comprehension and value_counts for multiple columns in a df

[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]

https://stackoverflow.com/a/28192263/786326

Answer by Arran Cudbard-Bell

If you want to apply to all columns you can use:

df.apply(pd.value_counts)

This will apply a column based aggregation function (in this case value_counts) to each of the columns.

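
A small sketch of that behaviour on a made-up two-column frame (the column names x and y are just for illustration; on recent pandas versions the top-level pd.value_counts is deprecated in favour of per-column .value_counts(), but the pattern is the same):

```python
import pandas as pd

df = pd.DataFrame({'x': list('aab'), 'y': list('bbc')})

# value_counts runs once per column; the result's index is the union
# of all values, with NaN where a value never occurs in a column
counts = df.apply(pd.value_counts)
```
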
Answer by Vidhya G

In 0.18.1, groupby together with count does not give the frequency of unique values:

>>> df
   a
0  a
1  b
2  s
3  s
4  b
5  a
6  b

>>> df.groupby('a').count()
Empty DataFrame
Columns: []
Index: [a, b, s]

However, the unique values and their frequencies are easily determined using size:

>>> df.groupby('a').size()
a
a    2
b    3
s    2
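
A sketch contrasting the two calls on the same toy frame (variable names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'a': list('abssbab')})

# size() counts rows per group, grouping column included
sizes = df.groupby('a').size()

# count() only tallies the remaining (non-grouping) columns, so a
# single-column frame leaves nothing to count
counted = df.groupby('a').count()
```
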

With df.a.value_counts(), sorted values (in descending order, i.e. largest value first) are returned by default.

Answer by Timz95

Without any libraries, you could do this instead:

def to_frequency_table(data):
    frequencytable = {}
    for key in data:
        if key in frequencytable:
            frequencytable[key] += 1
        else:
            frequencytable[key] = 1
    return frequencytable

Example:

to_frequency_table([1,1,1,1,2,3,4,4])
>>> {1: 4, 2: 1, 3: 1, 4: 2}
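
If plain Python is acceptable, the standard library's collections.Counter does the same bookkeeping in one call (a minimal sketch):

```python
from collections import Counter

freq = Counter([1, 1, 1, 1, 2, 3, 4, 4])

# Counter is a dict subclass, so dict() conversion and lookups work,
# and missing keys count as 0 instead of raising KeyError
assert dict(freq) == {1: 4, 2: 1, 3: 1, 4: 2}
```
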

Answer by Roman Kazakov

df.apply(pd.value_counts).fillna(0)

value_counts - returns an object containing counts of unique values

apply - counts the frequency in every column. If you set axis=1, you get the frequency in every row

fillna(0) - tidies the output by replacing NaN with 0

Answer by user666

If your DataFrame has values with the same type, you can also set return_counts=True in numpy.unique().

index, counts = np.unique(df.values,return_counts=True)

np.bincount() could be faster if your values are integers.

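
A minimal sketch of both NumPy routes on a small integer array (the variable names are made up):

```python
import numpy as np

values = np.array([0, 1, 1, 3, 3, 3])

# unique() returns the distinct values and, with return_counts=True,
# how often each one occurs
uniq, counts = np.unique(values, return_counts=True)

# bincount() returns an array where position i holds the count of the
# integer i; it only accepts non-negative integers
bins = np.bincount(values)
```
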
Answer by Satyajit Dhawale

df.category.value_counts()

This short little line of code will give you the output you want.

If your column name has spaces, you can use

df['category'].value_counts()
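
On the question's own data, either spelling produces the asked-for table; a quick sketch:

```python
import pandas as pd

df = pd.DataFrame({'category': ['cat a', 'cat b', 'cat a']})

# bracket access works for any column name, spaces included
freq = df['category'].value_counts()
```
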

Answer by tsando

You can also do this with pandas by first casting your columns to categories, e.g. with dtype="category":

cats = ['client', 'hotel', 'currency', 'ota', 'user_country']

df[cats] = df[cats].astype('category')

and then calling describe:

df[cats].describe()

This will give you a nice table of value counts and a bit more :):

        client  hotel   currency  ota     user_country
count   852845  852845  852845    852845  852845
unique  2554    17477   132       14      219
top     2198    13202   USD       Hades   US
freq    102562  8847    516500    242734  340992
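
The table above comes from the answerer's own dataset; a small reproducible sketch with made-up column names shows the same four summary rows:

```python
import pandas as pd

df = pd.DataFrame({'currency': ['USD', 'USD', 'EUR'],
                   'user_country': ['US', 'FR', 'US']})

cats = ['currency', 'user_country']
df[cats] = df[cats].astype('category')

# describe() on categorical columns reports count, unique, top and freq
summary = df[cats].describe()
```
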

Answer by RAHUL KUMAR

n_values = data.income.value_counts()

First unique value count

n_at_most_50k = n_values[0]

Second unique value count

n_greater_50k = n_values[1]

n_values

Output:

<=50K    34014
>50K     11208

Name: income, dtype: int64

Output:

n_greater_50k, n_at_most_50k:
(11208, 34014)