Python ValueError: Number of features of the model must match the input

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/44026832/

ValueError: Number of features of the model must match the input

Tags: python, csv, scikit-learn

Asked by Hyman_f

I'm getting this error when trying to predict using a model I built in scikit learn. I know that there are a bunch of questions about this but mine seems different from them because I am wildly off between my input and model features. Here is my code for training my model (FYI the .csv file has 45 columns with one being the known value):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import ensemble
from sklearn.metrics import mean_absolute_error
from sklearn.externals import joblib


df = pd.read_csv("Cinderella.csv")


features_df = pd.get_dummies(df, columns=['Overall_Sentiment', 'Word_1','Word_2','Word_3','Word_4','Word_5','Word_6','Word_7','Word_8','Word_9','Word_10','Word_11','Word_1','Word_12','Word_13','Word_14','Word_15','Word_16','Word_17','Word_18','Word_19','Word_20','Word_21','Word_22','Word_23','Word_24','Word_25','Word_26','Word_27','Word_28','Word_29','Word_30','Word_31','Word_32','Word_33','Word_34','Word_35','Word_36','Word_37','Word_38','Word_39','Word_40','Word_41', 'Word_42', 'Word_43'], dummy_na=True)

del features_df['Slope']

X = features_df.as_matrix()
y = df['Slope'].as_matrix()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

model = ensemble.GradientBoostingRegressor(
    n_estimators=500,
    learning_rate=0.01,
    max_depth=5,
    min_samples_leaf=3,
    max_features=0.1,
    loss='lad'
)

model.fit(X_train, y_train)

joblib.dump(model, 'slope_from_sentiment_model.pkl')

mse = mean_absolute_error(y_train, model.predict(X_train))

print("Training Set Mean Absolute Error: %.4f" % mse)

mse = mean_absolute_error(y_test, model.predict(X_test))
print("Test Set Mean Absolute Error: %.4f" % mse)

Here is my code for the actual prediction using a different .csv file (this one has 44 columns because it doesn't have the known value column):

from sklearn.externals import joblib
import pandas


model = joblib.load('slope_from_sentiment_model.pkl')

df = pandas.read_csv("Slaughterhouse_copy.csv")


features_df = pandas.get_dummies(df, columns=['Overall_Sentiment','Word_1', 'Word_2', 'Word_3', 'Word_4', 'Word_5', 'Word_6', 'Word_7', 'Word_8', 'Word_9', 'Word_10', 'Word_11', 'Word_12', 'Word_13', 'Word_14', 'Word_15', 'Word_16', 'Word_17','Word_18','Word_19','Word_20','Word_21','Word_22','Word_23','Word_24','Word_25','Word_26','Word_27','Word_28','Word_29','Word_30','Word_31','Word_32','Word_33','Word_34','Word_35','Word_36','Word_37','Word_38','Word_39','Word_40','Word_41','Word_42','Word_43'], dummy_na=True)

predicted_slopes = model.predict(features_df)

When I run the prediction file I get:

ValueError: Number of features of the model must match the input. Model n_features is 146 and input n_features is 226.

If anyone could help me it would be greatly appreciated! Thanks in advance!

Answered by Scratch'N'Purr

The reason you're getting the error is that the columns you're generating dummy values for with get_dummies contain different sets of distinct values in your training and scoring data.

Let's suppose the Word_1 column in your training set has the following distinct words: the, dog, jumps, roof, off. That's 5 distinct words, so pandas will generate 5 features for Word_1. Now, if your scoring dataset has a different number of distinct words in the Word_1 column, then you're going to get a different number of features.
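
To see this in isolation, here is a minimal sketch (the words and frame names are made up for illustration, they are not from the question's data) of how get_dummies produces a different number of columns for two inputs:

import pandas as pd

# Hypothetical training and scoring values for a single column
train = pd.DataFrame({'Word_1': ['the', 'dog', 'jumps', 'roof', 'off']})
score = pd.DataFrame({'Word_1': ['the', 'cat', 'sat']})

# 5 distinct training words -> 5 dummy columns
print(pd.get_dummies(train, columns=['Word_1']).shape[1])  # 5
# 3 distinct scoring words -> 3 dummy columns, hence the mismatch
print(pd.get_dummies(score, columns=['Word_1']).shape[1])  # 3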

How to fix:

You'll want to concatenate your training and scoring datasets using concat, apply get_dummies, and then split your datasets. That'll ensure you have captured all the distinct values in your columns. Given that you're using two different csv's, you probably want to generate a column that specifies your training vs scoring dataset.

Example solution:

import pandas as pd

train_df = pd.read_csv("Cinderella.csv")
train_df['label'] = 'train'

score_df = pd.read_csv("Slaughterhouse_copy.csv")
score_df['label'] = 'score'

# Concat
concat_df = pd.concat([train_df, score_df])

# Create your dummies
features_df = pd.get_dummies(concat_df, columns=['Overall_Sentiment', 'Word_1','Word_2','Word_3','Word_4','Word_5','Word_6','Word_7','Word_8','Word_9','Word_10','Word_11','Word_12','Word_13','Word_14','Word_15','Word_16','Word_17','Word_18','Word_19','Word_20','Word_21','Word_22','Word_23','Word_24','Word_25','Word_26','Word_27','Word_28','Word_29','Word_30','Word_31','Word_32','Word_33','Word_34','Word_35','Word_36','Word_37','Word_38','Word_39','Word_40','Word_41','Word_42','Word_43'], dummy_na=True)

# Split your data
train_df = features_df[features_df['label'] == 'train']
score_df = features_df[features_df['label'] == 'score']

# Drop your labels
train_df = train_df.drop('label', axis=1)
score_df = score_df.drop('label', axis=1)

# Now delete your 'slope' feature, create your features matrix, and create your model as you have already shown in your example
...
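
For completeness, a minimal sketch of those remaining steps, reusing the model settings from the question (train_df and score_df here are the frames produced just above, and 'Slope' is the target column from the question):

from sklearn import ensemble
from sklearn.model_selection import train_test_split

# Pull the target out of the training half, then drop it from both halves
y = train_df['Slope'].values
train_X = train_df.drop('Slope', axis=1).values
score_X = score_df.drop('Slope', axis=1).values  # this column is all NaN on the scoring side

# Same model settings as in the question
X_train, X_test, y_train, y_test = train_test_split(train_X, y, test_size=0.3)
model = ensemble.GradientBoostingRegressor(
    n_estimators=500, learning_rate=0.01, max_depth=5,
    min_samples_leaf=3, max_features=0.1, loss='lad'
)
model.fit(X_train, y_train)

predicted_slopes = model.predict(score_X)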

Answered by Akson

I tried the method suggested here and ended up one-hot encoding the label column as well; in the dataframe it shows up as 'label_test' and 'label_train'. So, just a heads up if you try that post's get_dummies approach:

train_df = feature_df[feature_df['label_train'] == 1]
test_df = feature_df[feature_df['label_test'] == 1]
train_df = train_df.drop(['label_train', 'label_test'], axis=1)
test_df = test_df.drop(['label_train', 'label_test'], axis=1)

Answered by code-on-treehouse

The correction below to the original answer from Scratch'N'Purr helps avoid issues one might face when using a string as the value for the newly inserted 'label' column:

train_df = pd.read_csv("Cinderella.csv")
train_df['label'] = 1

score_df = pd.read_csv("Slaughterhouse_copy.csv")
score_df['label'] = 2

# Concat
concat_df = pd.concat([train_df, score_df])

# Create your dummies
features_df = pd.get_dummies(concat_df)

# Split your data
train_df = features_df[features_df['label'] == 1]
score_df = features_df[features_df['label'] == 2]
...

Answered by Michael Gardner

You can utilize the Categorical dtype so that values unseen in the training data are mapped to null, which keeps the dummy columns of new data consistent with the training columns.

Input:

import pandas as pd
import numpy as np
from pandas.api.types import CategoricalDtype

# Create Example Data
train = pd.DataFrame({"text":["A", "B", "C", "D", 'F', np.nan]})
test = pd.DataFrame({"text":["D", "D", np.nan,"B", "E", "T"]})

# Convert columns to category dtype and specify categories for test set
train['text'] = train['text'].astype('category')
test['text'] = test['text'].astype(CategoricalDtype(categories=train['text'].cat.categories))

# Create Dummies
pd.get_dummies(test['text'], dummy_na=True)

Output:

| A | B | C | D | F | nan |
|---|---|---|---|---|-----|
| 0 | 0 | 0 | 1 | 0 | 0   |
| 0 | 0 | 0 | 1 | 0 | 0   |
| 0 | 0 | 0 | 0 | 0 | 1   |
| 0 | 1 | 0 | 0 | 0 | 0   |
| 0 | 0 | 0 | 0 | 0 | 1   |
| 0 | 0 | 0 | 0 | 0 | 1   |
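
Applied back to the question, the same idea would be repeated for every dummy-encoded column before calling get_dummies. A rough sketch, assuming train and score are the raw frames read from the two CSV files and that the column names match the question:

import pandas as pd
from pandas.api.types import CategoricalDtype

# The columns that get dummy-encoded in the question (hypothetical reconstruction)
cat_cols = ['Overall_Sentiment'] + ['Word_%d' % i for i in range(1, 44)]

for col in cat_cols:
    train[col] = train[col].astype('category')
    # Values unseen during training become NaN, so the dummy columns line up
    score[col] = score[col].astype(CategoricalDtype(categories=train[col].cat.categories))

train_features = pd.get_dummies(train, columns=cat_cols, dummy_na=True)
score_features = pd.get_dummies(score, columns=cat_cols, dummy_na=True)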

Answered by Sirigireddy Dhanalaxmi

The training data you fit to the model (excluding the labels, of course) must have the same number of features as the data you are going to predict on.
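
Not from the answers above, but one common way to check and enforce this is to align the scoring columns to the training columns with DataFrame.reindex. A hedged sketch, assuming train_features and score_features are dummy-encoded frames built consistently, 'Slope' is the target, and model is the regressor fitted earlier:

# Columns the model was actually trained on (everything except the target)
train_cols = train_features.drop('Slope', axis=1).columns

# Add any missing dummy columns as zeros, drop extras, and keep the training order
score_aligned = score_features.reindex(columns=train_cols, fill_value=0)

predicted_slopes = model.predict(score_aligned)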
