
I have a question about using pybrain to do regression on a time series. I plan to use the LSTM layer in pybrain to train on a time series and predict it.

I found example code at the link below:

Request for example: Recurrent neural network for predicting next value in a sequence

In the example above, the network is able to predict a sequence after it has been trained. The issue is that the network takes in all the sequential data in one go through the input layer. For example, if each training sample has 10 features, the 10 features are fed simultaneously into 10 input nodes at one time.

From my understanding, this is no longer time series prediction, is it? Since all the features enter the network at the same time, there is no temporal difference between them. Correct me if I am wrong on this.

Therefore, what I am trying to achieve is a recurrent network that has only ONE input node, and ONE output node. The input node is where all the time series data will be fed sequentially at different time steps. The network will be trained to reproduce the input at the output node.

Could you please suggest or guide me in constructing the network I mentioned? Thank you very much in advance.

– dnth

3 Answers


You can train an LSTM network with a single input node and a single output node for doing time series prediction like this:

First, just as good practice, let's use Python 3's print function:

from __future__ import print_function

Then, make a simple time series:

data = [1] * 3 + [2] * 3
data *= 3
print(data)

[1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2]

Now put this time series into a supervised dataset, where the target for each sample is the next sample:

from pybrain.datasets import SequentialDataSet
from itertools import cycle

ds = SequentialDataSet(1, 1)
for sample, next_sample in zip(data, cycle(data[1:])):
    ds.addSample(sample, next_sample)
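
As a quick sanity check, you can print the first few input/target pairs to confirm each sample is paired with the one that follows it (a minimal sketch; the dataset hands back samples as 1-element arrays, hence the [0]):

for sample, target in list(ds.getSequenceIterator(0))[:4]:
    # each input should be paired with the next value in the series
    print("input = %.1f -> target = %.1f" % (sample[0], target[0]))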

Build a simple LSTM network with 1 input node, 5 LSTM cells and 1 output node:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import LSTMLayer

net = buildNetwork(1, 5, 1, 
                   hiddenclass=LSTMLayer, outputbias=False, recurrent=True)
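
If you want to see the pieces explicitly, a similar network can be assembled by hand with PyBrain's RecurrentNetwork API, in the spirit of the netmodcon tutorial. This is only a sketch: the layer names and the explicit recurrent connection are illustrative, and not necessarily identical to what buildNetwork produces internally.

from pybrain.structure import RecurrentNetwork, LinearLayer, FullConnection

net2 = RecurrentNetwork()
net2.addInputModule(LinearLayer(1, name='in'))    # ONE input node
net2.addModule(LSTMLayer(5, name='hidden'))       # 5 LSTM cells
net2.addOutputModule(LinearLayer(1, name='out'))  # ONE output node
net2.addConnection(FullConnection(net2['in'], net2['hidden']))
net2.addConnection(FullConnection(net2['hidden'], net2['out']))
# the recurrent connection is what lets the network look back in time
net2.addRecurrentConnection(FullConnection(net2['hidden'], net2['hidden']))
net2.sortModules()  # finalize the network before using it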

Train the network:

from pybrain.supervised import RPropMinusTrainer
from sys import stdout

trainer = RPropMinusTrainer(net, dataset=ds)
train_errors = [] # save errors for plotting later
EPOCHS_PER_CYCLE = 5
CYCLES = 100
EPOCHS = EPOCHS_PER_CYCLE * CYCLES
for i in range(CYCLES):  # range, not xrange, so this also runs on Python 3
    trainer.trainEpochs(EPOCHS_PER_CYCLE)
    train_errors.append(trainer.testOnData())
    epoch = (i+1) * EPOCHS_PER_CYCLE
    print("\r epoch {}/{}".format(epoch, EPOCHS), end="")
    stdout.flush()

print()
print("final error =", train_errors[-1])

Plot the errors (note that in this simple toy example, we are testing and training on the same dataset, which is of course not what you'd do for a real project!):

import matplotlib.pyplot as plt

plt.plot(range(0, EPOCHS, EPOCHS_PER_CYCLE), train_errors)
plt.xlabel('epoch')
plt.ylabel('error')
plt.show()

Now ask the network to predict the next sample:

for sample, target in ds.getSequenceIterator(0):
    print("               sample = %4.1f" % sample)
    print("predicted next sample = %4.1f" % net.activate(sample))
    print("   actual next sample = %4.1f" % target)
    print()

(The code above is based on example_rnn.py and the examples from the PyBrain documentation.)
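
To forecast past the end of the known data, one common approach (a sketch; the horizon of 5 is chosen arbitrarily) is to replay the series to build up the recurrent state and then feed each prediction back in as the next input:

net.reset()  # clear the recurrent history first
for sample in data:
    out = net.activate([sample])  # replay the known series to build state
last = out[0]
forecast = [last]
for _ in range(5):
    last = net.activate([last])[0]  # feed the prediction back as input
    forecast.append(last)
print(forecast)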

– Jack Kelly
  • Can I ask for a bit of clarification on the training step? What exactly do the CYCLES and EPOCHS_PER_CYCLE parts of the training stage do? Also, how is it different from just training for x number of epochs? – A. Dev Oct 27 '15 at 10:00
  • @A.Devereux It is about saving the error. I think the author wanted to record the error once every EPOCHS_PER_CYCLE epochs, which in this case gives a list 5 times smaller than saving the error after every single epoch. – MCSH Jan 08 '16 at 13:14

I think a better (simpler/clearer) example to learn from would be here, towards the bottom of the page:

http://pybrain.org/docs/tutorial/netmodcon.html

Essentially, once set up as shown, it will automatically keep track of the inputs' past history (until and unless you hit reset). From the docs:

http://pybrain.org/docs/api/structure/networks.html?highlight=recurrentnetwork#pybrain.structure.networks.RecurrentNetwork

"Until .reset() is called, the network keeps track of all previous inputs and thus allows the use of recurrent connections and layers that look back in time."

So yes, no need to re-present all the past inputs to the network each time.
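
A minimal sketch of what that means in practice, using a recurrent net like the one from the answer above (the exact numbers depend on the trained weights, so they are illustrative only):

net.reset()                 # start from a clean state
out1 = net.activate([0.5])
out2 = net.activate([0.5])  # same input, generally a different output,
                            # because the net remembers the first call
net.reset()                 # forget all previous inputs
out3 = net.activate([0.5])  # equal to out1 again: the history is gone
print(out1, out2, out3)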

– rossdavidh

I have tested an LSTM predicting time sequences with Theano. I found that smooth curves can be predicted properly, but zigzag curves are hard to predict. The detailed article is here: Predict Time Sequence with LSTM

The predicted result is shown as a plot in the linked article (image source: fuzihao.org).
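
If you want to try the comparison yourself, here is a hypothetical sketch of the two kinds of series (the article's actual data may differ):

import numpy as np

t = np.arange(200)
smooth = np.sin(2 * np.pi * t / 50.0)            # smooth curve: easier to fit
zigzag = 2 * np.abs((t % 20) - 10) / 10.0 - 1.0  # triangle wave: harder, per the article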

– maple
  • Hey maple, I read your code and I am wondering if it is publicly available? I tried to put the parts on your website together but am still not sure about some of the variables. Could you let me know? Thanks – ahajib May 24 '16 at 13:49
  • Hi @nimafl, the code is not public because I didn't prune it; right now it is just a mess of noodles. If something confuses you, please leave a comment below the blog post and I'll try to explain it. – maple May 25 '16 at 14:58
  • Thanks. I'll certainly post there for help. – ahajib May 25 '16 at 15:12