Here is my model definition (i, j, k are the layer sizes and l is the dropout rate):

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
# stacked LSTM: the first layer returns the full sequence for the second one
model.add(LSTM(i, input_shape=(None, 1), return_sequences=True))
model.add(Dropout(l))
model.add(LSTM(j))
model.add(Dropout(l))
model.add(Dense(k))
model.add(Dropout(l))
model.add(Dense(1))
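The compile/fit call isn't part of the snippet above; roughly, I run something like this (the loss, optimizer, epochs and batch size here are placeholders, not the point of the question):

# placeholders: loss/optimizer/epochs/batch_size are illustrative only
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=100, batch_size=32,
          validation_data=(x_val, y_val))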
and here is the result:

import matplotlib.pyplot as plt

p = model.predict(x_test)
plt.plot(y_test)   # gold test signal
plt.plot(p)        # model predictions
plt.show()
The sequential input represents the past signal over the previous time-steps, and the output predicts the signal at the next time-step. After splitting into training and test data, the predictions on the test data are as follows:
The figure shows an almost perfect match between the gold test data and the predictions. Is it really possible to predict with such high accuracy?
I think something is wrong, because there is no volatility at all, so I wonder whether I have implemented this properly.
If the implementation is correct, how can I get the next (future) value?
Is it right to implement it like this?
a = x_test[-1]
b = model.predict(a)
c = model.predict(b)
...
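Concretely, what I have in mind is a rolling forecast: feed each prediction back in and slide the window forward. A minimal sketch of that idea (assuming x_test has already been reshaped to (samples, look_back, 1); n_steps is a placeholder I made up):

import numpy as np

window = x_test[-1]          # last known window, shape (look_back, 1)
n_steps = 50                 # how far ahead to roll (placeholder)
future = []
for _ in range(n_steps):
    # the model expects a batch axis: (1, look_back, 1)
    next_val = model.predict(window[np.newaxis, :, :])[0, 0]
    future.append(next_val)
    # drop the oldest value, append the new prediction
    window = np.vstack([window[1:], [[next_val]]])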
To sum up the question:
Is the implementation done the right way?
How do I get the values of the next (future) time-steps?
import numpy as np

def create_dataset(signal_data, look_back=1):
    # slide a window of length look_back over the signal:
    # X = the window, y = the value right after it
    dataX, dataY = [], []
    for i in range(len(signal_data) - look_back):
        dataX.append(signal_data[i:(i + look_back), 0])
        dataY.append(signal_data[i + look_back, 0])
    return np.array(dataX), np.array(dataY)
# split: 80% train, 5% validation, 15% test (in that order)
train_size = int(len(signal_data) * 0.80)
test_size = len(signal_data) - train_size - int(len(signal_data) * 0.05)
val_size = len(signal_data) - train_size - test_size

train = signal_data[0:train_size]
val = signal_data[train_size:train_size + val_size]
test = signal_data[train_size + val_size:len(signal_data)]

x_train, y_train = create_dataset(train, look_back)
x_val, y_val = create_dataset(val, look_back)
x_test, y_test = create_dataset(test, look_back)
I use create_dataset with look_back=20. signal_data is preprocessed with min-max normalisation, MinMaxScaler(feature_range=(0, 1)).
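One detail not shown above: create_dataset returns 2-D arrays of shape (samples, look_back), while the LSTM expects 3-D input of shape (samples, time-steps, features), so I assume the windows need a trailing feature axis before training and prediction:

# add a trailing feature axis so each window is (look_back, 1)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_val = np.reshape(x_val, (x_val.shape[0], x_val.shape[1], 1))
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))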