Confused about random_state in decision tree of scikit-learn

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/39158003/
Asked by Lin Ma
Confused about the random_state parameter; I'm not sure why decision tree training needs some randomness. My thoughts: (1) is it related to random forest? (2) is it related to splitting the training/testing data set? If so, why not use the train/test split method directly (http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html)?
http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older releases
>>> from sklearn.tree import DecisionTreeClassifier
>>> clf = DecisionTreeClassifier(random_state=0)
>>> iris = load_iris()
>>> cross_val_score(clf, iris.data, iris.target, cv=10)
array([ 1.  , 0.93..., 0.86..., 0.93..., 0.93...,
        0.93..., 0.93..., 1.  , 0.93..., 1.  ])
Regards, Lin
Answered by Ami Tavory
This is explained in the documentation:
The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.
So, basically, a sub-optimal greedy algorithm is repeated a number of times using random selections of features and samples (a technique similar to the one used in random forests). The random_state parameter allows controlling these random choices.
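To make this concrete, here is a minimal sketch (an illustration, not from the original answer; it assumes a recent scikit-learn, where export_text was added in 0.21). Restricting max_features makes the tree consider a random subset of features at each split, so the effect of the seed becomes visible:

# Minimal sketch: random_state controls the randomized split choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# max_features=2 forces a random feature subset at each split.
tree_a = DecisionTreeClassifier(max_features=2, random_state=0).fit(iris.data, iris.target)
tree_b = DecisionTreeClassifier(max_features=2, random_state=0).fit(iris.data, iris.target)
tree_c = DecisionTreeClassifier(max_features=2, random_state=1).fit(iris.data, iris.target)

print(export_text(tree_a) == export_text(tree_b))  # True: same seed, identical tree
print(export_text(tree_a) == export_text(tree_c))  # may be False: a different seed can pick different splits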
The interface documentation specifically states:
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
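In code, the three accepted forms look like this (a small sketch assuming scikit-learn's standard estimator API; these lines are not part of the original answer):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

clf_seeded = DecisionTreeClassifier(random_state=0)    # int: used as the RNG seed
clf_rng = DecisionTreeClassifier(random_state=np.random.RandomState(0))  # RandomState: used directly
clf_global = DecisionTreeClassifier(random_state=None)  # None: falls back to np.random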
So, the random algorithm will be used in any case. Passing any value (whether a specific int, e.g., 0, or a RandomState instance) will not change that. The only rationale for passing an int value (0 or otherwise) is to make the outcome consistent across calls: if you call this with random_state=0 (or any other value), then each and every time you'll get the same result.
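A quick way to check that consistency (again a hedged sketch, assuming a recent scikit-learn; this is not code from the answer itself):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

def predictions(seed):
    # Refit from scratch so each call exercises the same random choices.
    clf = DecisionTreeClassifier(max_features=2, random_state=seed)
    return clf.fit(iris.data, iris.target).predict(iris.data)

# Same seed -> identical output on every call.
assert np.array_equal(predictions(0), predictions(0))
# predictions(0) vs predictions(1) may differ: a different seed can lead
# to different, equally greedy split choices.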