Regression in Python (pandas)

Note: this page is a translation of a popular StackOverflow question and answer, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute the original authors (not the translator). Original question: http://stackoverflow.com/questions/18540738/



Tags: python, pandas, regression, statsmodels

Asked by appleLover

I'm trying to do logistic regression with pandas and statsmodels. I don't know why I'm getting an error or how to fix it.


import pandas as pd
import statsmodels.api as sm
x = [1, 3, 5, 6, 8]
y = [0, 1, 0, 1, 1]
d = { "x": pd.Series(x), "y": pd.Series(y)}
df = pd.DataFrame(d)

model = "y ~ x"
glm = sm.Logit(model, df=df).fit()

ERROR:


Traceback (most recent call last):
  File "regress.py", line 45, in <module>
    glm = sm.Logit(model, df=df).fit()
TypeError: __init__() takes exactly 3 arguments (2 given)

Answered by Phillip Cloud

You can't pass a formula string directly to Logit. Build the design matrices first:


In [82]: import patsy

In [83]: f = 'y ~ x'

In [84]: y, X = patsy.dmatrices(f, df, return_type='dataframe')

In [85]: sm.Logit(y, X).fit().summary()
Optimization terminated successfully.
         Current function value: 0.511631
         Iterations 6
Out[85]:
<class 'statsmodels.iolib.summary.Summary'>
"""
                           Logit Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                    5
Model:                          Logit   Df Residuals:                        3
Method:                           MLE   Df Model:                            1
Date:                Fri, 30 Aug 2013   Pseudo R-squ.:                  0.2398
Time:                        16:56:38   Log-Likelihood:                -2.5582
converged:                       True   LL-Null:                       -3.3651
                                        LLR p-value:                    0.2040
==============================================================================
                 coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept     -2.0544      2.452     -0.838      0.402        -6.861     2.752
x              0.5672      0.528      1.073      0.283        -0.468     1.603
==============================================================================
"""

This is pretty much straight from the docs on how to do exactly what you're asking.


EDIT:You can also use the formula API, as suggested by @user333700:


In [22]: print(sm.formula.logit(model, data=df).fit().summary())
Optimization terminated successfully.
         Current function value: 0.511631
         Iterations 6
                           Logit Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                    5
Model:                          Logit   Df Residuals:                        3
Method:                           MLE   Df Model:                            1
Date:                Fri, 30 Aug 2013   Pseudo R-squ.:                  0.2398
Time:                        18:14:26   Log-Likelihood:                -2.5582
converged:                       True   LL-Null:                       -3.3651
                                        LLR p-value:                    0.2040
==============================================================================
                 coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept     -2.0544      2.452     -0.838      0.402        -6.861     2.752
x              0.5672      0.528      1.073      0.283        -0.468     1.603
==============================================================================
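The formula API builds the design matrix for you, and the fitted result exposes the same attributes (`params`, `predict`, and so on). A minimal sketch, assuming the conventional `smf` alias for `statsmodels.formula.api` (the `new` DataFrame here is illustrative data, not from the original question):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({"x": [1, 3, 5, 6, 8], "y": [0, 1, 0, 1, 1]})
result = smf.logit("y ~ x", data=df).fit(disp=0)

print(result.params)  # same Intercept and x coefficients as above

# predict() accepts new data as a DataFrame containing the formula's
# columns; the design matrix is rebuilt from the stored formula.
new = pd.DataFrame({"x": [2, 7]})
print(result.predict(new))
```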