Stratified train/test split in Python scikit-learn

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/29438265/

Date: 2020-08-19 04:33:22  Source: igfitidea

Stratified Train/Test-split in scikit-learn

python scikit-learn

Asked by pir

I need to split my data into a training set (75%) and test set (25%). I currently do that with the code below:

X, Xt, userInfo, userInfo_train = sklearn.cross_validation.train_test_split(X, userInfo)   

However, I'd like to stratify my training dataset. How do I do that? I've been looking into the StratifiedKFold method, but it doesn't let me specify the 75%/25% split, and it only stratifies the training dataset.

Accepted answer by Andreas Mueller

[update for 0.17]

See the docs of sklearn.model_selection.train_test_split:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    stratify=y, 
                                                    test_size=0.25)

[/update for 0.17]
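As a quick, self-contained sanity check (with made-up data), the stratify argument keeps the class ratio identical in both splits:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Imbalanced toy labels: 80 zeros, 20 ones
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

# Both splits preserve the original 80/20 class ratio
print(y_train.mean(), y_test.mean())  # 0.2 0.2
```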

There is a pull request here. But you can simply do train, test = next(iter(StratifiedKFold(...))) and use the train and test indices if you want.
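With the modern model_selection API, the same trick looks roughly like this (a sketch on toy data; note that the index pairs now come from split(X, y)):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(12).reshape(-1, 1)
y = np.array([0] * 8 + [1] * 4)

# Take only the first of the n_splits folds as a single stratified split
# (test fraction = 1 / n_splits, i.e. 25% here)
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
train_idx, test_idx = next(iter(skf.split(X, y)))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
```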

Answered by tangy

TL;DR: Use StratifiedShuffleSplit with test_size=0.25

Scikit-learn provides two modules for Stratified Splitting:

  1. StratifiedKFold: This module is useful as a direct k-fold cross-validation operator: it will set up n_folds training/testing sets such that classes are equally balanced in both.

Here's some code (directly from the documentation above):

>>> # Note: the cross_validation module was renamed to model_selection in scikit-learn 0.18
>>> skf = cross_validation.StratifiedKFold(y, n_folds=2)  # 2-fold cross-validation
>>> len(skf)
2
>>> for train_index, test_index in skf:
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
...    # fit and predict with X_train/test; use accuracy metrics to check validation performance
  2. StratifiedShuffleSplit: This module creates a single training/testing set having equally balanced (stratified) classes. Essentially, this is what you want with n_iter=1. You can specify the test size here, just as in train_test_split.
Code:

>>> sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0)
>>> len(sss)
1
>>> for train_index, test_index in sss:
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
>>> # fit and predict with your classifier using the above X/y train/test
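In scikit-learn 0.18+ the parameters moved into the constructor, and the indices come from .split(X, y). A sketch of the equivalent, using made-up data:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

# n_splits=1 yields a single stratified train/test split
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(sss.split(X, y))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]

print(len(y_test), y_test.mean())  # 25 0.2
```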

Answered by Jordan

Here's an example for continuous/regression data (until this issue on GitHub is resolved).

import numpy as np
from sklearn.model_selection import train_test_split

# Your bins need to be appropriate for your output values,
# e.g. 25 evenly spaced bin edges from 0 to 50
bins     = np.linspace(0, 50, 25)
y_binned = np.digitize(y, bins)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y_binned)
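A runnable version of the idea with synthetic data (using fewer bins, to make sure every bin contains at least the two samples that stratification requires):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.uniform(0, 50, size=200)  # continuous target

# Discretize the continuous target and stratify on the bin labels
bins = np.linspace(0, 50, 10)
y_binned = np.digitize(y, bins)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y_binned, test_size=0.25, random_state=0)
```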

Answered by Max

In addition to the accepted answer by @Andreas Mueller, I just want to add that, as @tangy mentioned above:

StratifiedShuffleSplit most closely resembles train_test_split(stratify=y), with the added features of:

  1. stratifying by default
  2. repeatedly splitting the data when you specify n_splits

Answered by José Carlos Castro

import pandas as pd
from sklearn.model_selection import train_test_split

# train size is roughly 1 - tst_size - vld_size
# (the second split takes tst_size of what remains after the first)
tst_size = 0.15
vld_size = 0.15

# 'y' is assumed to be the name of the target column in df
X_train_test, X_valid, y_train_test, y_valid = train_test_split(
    df.drop('y', axis=1), df['y'], test_size=vld_size, random_state=13903)

X_train, X_test, y_train, y_test = train_test_split(
    X_train_test, y_train_test, test_size=tst_size, random_state=13903)
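One caveat about the snippet above: the second call takes tst_size of the data remaining after the first split, so the final train fraction is (1 - vld_size) * (1 - tst_size) ≈ 0.7225 rather than exactly 0.70, and neither split is stratified. To keep all three sets stratified, stratify can be passed at both steps; a sketch with made-up data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

tst_size, vld_size = 0.15, 0.15

# First carve off the validation set, stratified on y
X_rest, X_valid, y_rest, y_valid = train_test_split(
    X, y, test_size=vld_size, stratify=y, random_state=13903)

# Then split the remainder into train/test, stratified on what is left of y
X_train, X_test, y_train, y_test = train_test_split(
    X_rest, y_rest, test_size=tst_size, stratify=y_rest, random_state=13903)

print(len(X_train), len(X_valid), len(X_test))
```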

Answered by Shayan Amani

You can simply do it with the train_test_split() method available in scikit-learn:

from sklearn.model_selection import train_test_split 
train, test = train_test_split(X, test_size=0.25, stratify=X['YOUR_COLUMN_LABEL']) 
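For instance, with a hypothetical DataFrame whose 'label' column is imbalanced (the column and values are made up for illustration):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "feature": range(100),
    "label": [0] * 80 + [1] * 20,  # imbalanced target column
})

train, test = train_test_split(
    df, test_size=0.25, stratify=df["label"], random_state=0)

# Both pieces keep the 80/20 ratio of the 'label' column
print(train["label"].mean(), test["label"].mean())  # 0.2 0.2
```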

I have also prepared a short GitHub Gist which shows how the stratify option works:

https://gist.github.com/SHi-ON/63839f3a3647051a180cb03af0f7d0d9
