Python: How to work with multiple inputs for LSTM in Keras?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/42532386/
How to work with multiple inputs for LSTM in Keras?
Asked by Jvr
I'm trying to predict the water usage of a population.
I have 1 main input:
- Water volume
and 2 secondary inputs:
- Temperature
- Rainfall
In theory they are related to the water supply.
It must be said that each rainfall and temperature value corresponds to a water volume measurement, so this is a time series problem.
The problem is that I don't know how to use the 3 inputs from a single .csv file with 3 columns, one column per input, in the code below. When I have just one input (e.g. water volume), the network works more or less well with this code, but not when I have more than one. (So if you run this code with the csv file below, it will show a dimension error.)
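To make that layout concrete, here is a toy sketch of how a 3-column datos.csv could be structured (the column names and values are invented placeholders, not the real data; the code's use of column index 2 as the target suggests the water volume sits in the third column):
import pandas
# invented placeholder data, one row per day, three columns
toy = pandas.DataFrame({
    'temperature':  [13.2, 14.0, 12.5],
    'rainfall':     [0.0, 2.3, 0.0],
    'water_volume': [1520.0, 1493.0, 1570.0],
})
print(toy.values.shape)  # (3, 3) -> [samples, features]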
Reading some answers from:
- Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras
- Time Series Forecast Case Study with Python: Annual Water Usage in Baltimore
it seems that many people have the same problem.
The code:
EDIT: The code has been updated.
import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 2])
    return numpy.array(dataX), numpy.array(dataY)
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset
dataframe = pandas.read_csv('datos.csv', engine='python')
dataset = dataframe.values
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 3))
testX = numpy.reshape(testX, (testX.shape[0], look_back, 3))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_dim=look_back))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(trainX, trainY, validation_split=0.33, nb_epoch=200, batch_size=32)
# Plot training
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('pérdida')
plt.xlabel('época')
plt.legend(['entrenamiento', 'validación'], loc='upper right')
plt.show()
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# Get something which has as many features as dataset
trainPredict_extended = numpy.zeros((len(trainPredict),3))
# Put the predictions there
trainPredict_extended[:,2] = trainPredict[:,0]
# Inverse transform it and select the 3rd column.
trainPredict = scaler.inverse_transform(trainPredict_extended)[:,2]
print(trainPredict)
# Get something which has as many features as dataset
testPredict_extended = numpy.zeros((len(testPredict),3))
# Put the predictions there
testPredict_extended[:,2] = testPredict[:,0]
# Inverse transform it and select the 3rd column.
testPredict = scaler.inverse_transform(testPredict_extended)[:,2]
trainY_extended = numpy.zeros((len(trainY),3))
trainY_extended[:,2]=trainY
trainY=scaler.inverse_transform(trainY_extended)[:,2]
testY_extended = numpy.zeros((len(testY),3))
testY_extended[:,2]=testY
testY=scaler.inverse_transform(testY_extended)[:,2]
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY, trainPredict))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY, testPredict))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, 2] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, 2] = testPredict
#plot
serie,=plt.plot(scaler.inverse_transform(dataset)[:,2])
prediccion_entrenamiento,=plt.plot(trainPredictPlot[:,2],linestyle='--')
prediccion_test,=plt.plot(testPredictPlot[:,2],linestyle='--')
plt.title('Consumo de agua')
plt.ylabel('cosumo (m3)')
plt.xlabel('dia')
plt.legend([serie,prediccion_entrenamiento,prediccion_test],['serie','entrenamiento','test'], loc='upper right')
This is the csv file I have created, if it helps.
After changing the code, I fixed all the errors, but I'm not really sure about the results. Zooming into the prediction plot shows that there is a "displacement" between the predicted values and the real ones: when there is a maximum in the real time series, there is a minimum in the forecast at the same time, but it seems to correspond to the previous time step.
Accepted answer by Nassim Ben
Change
a = dataset[i:(i + look_back), 0]
To
a = dataset[i:(i + look_back), :]
If you want the 3 features in your training data.
Then use
model.add(LSTM(4, input_shape=(look_back,3)))
to specify that you have look_back time steps in your sequence, each with 3 features.
It should run.
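Putting both changes together, a minimal sketch of the modified parts might look like this (variable names and the look_back value are taken from the question; this is an illustration, not the full script):
import numpy
from keras.models import Sequential
from keras.layers import Dense, LSTM

look_back = 3

# keep all 3 feature columns as input, still predicting column index 2 (water volume)
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), :])  # all columns, not just column 0
        dataY.append(dataset[i + look_back, 2])
    return numpy.array(dataX), numpy.array(dataY)

# create_dataset now returns trainX with shape [samples, look_back, 3],
# so the extra numpy.reshape call is no longer needed
model = Sequential()
model.add(LSTM(4, input_shape=(look_back, 3)))  # look_back time steps, 3 features each
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')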
EDIT:
Indeed, sklearn.preprocessing.MinMaxScaler()'s inverse_transform() takes an input with the same shape as the object you fitted it on. So you need to do something like this:
# Get something which has as many features as dataset
trainPredict_extended = np.zeros((len(trainPredict),3))
# Put the predictions there
trainPredict_extended[:,2] = trainPredict
# Inverse transform it and select the 3rd column.
trainPredict = scaler.inverse_transform(trainPredict_extended)[:,2]
I guess you will have other issues like this further down in your code, but nothing you can't fix :) The ML part is fixed and you know where the error comes from. Just check the shapes of your objects and try to make them match.
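To illustrate the shape rule with throwaway arrays (hypothetical values, not the real predictions):
import numpy
# whatever is passed to inverse_transform must have the same number of columns
# as the data the scaler was fitted on (3 here)
predictions = numpy.random.rand(10, 1)         # model.predict output has shape (n, 1)
extended = numpy.zeros((len(predictions), 3))  # (n, 3), matching the fitted dataset
extended[:, 2] = predictions[:, 0]
print(predictions.shape, extended.shape)       # (10, 1) (10, 3)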
Answer by Askquestions
The displacement could be due to the lag in predicting maximums/minimums, given the randomness in the data.
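One way to check this (a suggestion, not part of the original answer) is to compare the model against a naive persistence forecast, since a model that merely echoes the previous time step produces exactly this kind of one-step displacement. A minimal sketch, assuming testY from the question's code is already back on the original scale:
import math
from sklearn.metrics import mean_squared_error

# naive persistence baseline: predict that each day's volume equals the previous day's
persistenceScore = math.sqrt(mean_squared_error(testY[1:], testY[:-1]))
print('Persistence RMSE: %.2f' % persistenceScore)
# if the LSTM's test RMSE is not clearly lower than this, the model is probably
# just repeating the previous time step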
Answer by Corey Levinson
You can change what you are optimizing for, to maybe get better results. For example, try predicting a binary 0/1 target for whether there will be a 'spike up' the next day. Then feed the probability of a 'spike up' as a feature to predict the usage itself.
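A rough sketch of how such a binary 'spike up' target could be derived (an illustration, not the answer author's code; it assumes the question's datos.csv with the water volume in the third column, and the 10% threshold is an arbitrary choice):
import pandas

df = pandas.read_csv('datos.csv', engine='python')
volume = df.iloc[:, 2].values                              # assumed water volume column
spike_up = (volume[1:] > 1.10 * volume[:-1]).astype(int)   # 1 if the next day's volume rises >10%
print(spike_up[:10])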