
Hello everyone! I tried to plot the results predicted by a Recurrent Neural Network (RNN). The result should be an animation (see the first screenshot), but when I run the code in a Jupyter notebook in VS Code, the notebook only displays the pictures one by one with no animation (see the second screenshot). Is an .ipynb file different from a .py file? How can I solve this problem?

import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt

# torch.manual_seed(1)    # reproducible

# Hyper Parameters
TIME_STEP = 10      # rnn time step
INPUT_SIZE = 1      # rnn input size
LR = 0.02           # learning rate

# data
steps = np.linspace(0, np.pi*2, 100, dtype=np.float32)  # float32 for converting to torch FloatTensor
x_np = np.sin(steps)
y_np = np.cos(steps)

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.RNN(
            input_size=INPUT_SIZE,
            hidden_size=32,     # rnn hidden unit
            num_layers=1,       # number of rnn layer
            batch_first=True,   # input & output will have batch size as the 1st dimension, e.g. (batch, time_step, input_size)
        )
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        # x (batch, time_step, input_size)
        # h_state (n_layers, batch, hidden_size)
        # r_out (batch, time_step, hidden_size)
        r_out, h_state = self.rnn(x, h_state)

        outs = []    # save all predictions
        for time_step in range(r_out.size(1)):    # calculate output for each time step
            outs.append(self.out(r_out[:, time_step, :]))
        return torch.stack(outs, dim=1), h_state

        # instead, for simplicity, you can replace the code above with the following
        # r_out = r_out.view(-1, 32)
        # outs = self.out(r_out)
        # outs = outs.view(-1, TIME_STEP, 1)
        # return outs, h_state
        
        # or even simpler, since nn.Linear accepts inputs with any number of
        # leading dimensions and only changes the size of the last one:
        # outs = self.out(r_out)
        # return outs, h_state

rnn = RNN()
print(rnn)

optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)   # optimize all rnn parameters
loss_func = nn.MSELoss()

h_state = None      # for initial hidden state

plt.figure(1, figsize=(12, 5))
plt.ion()           # continuously plot

for step in range(100):
    start, end = step * np.pi, (step+1)*np.pi   # time range
    # use sin predicts cos
    steps = np.linspace(start, end, TIME_STEP, dtype=np.float32, endpoint=False)  # float32 for converting to torch FloatTensor
    x_np = np.sin(steps)
    y_np = np.cos(steps)

    x = torch.from_numpy(x_np[np.newaxis, :, np.newaxis])    # shape (batch, time_step, input_size)
    y = torch.from_numpy(y_np[np.newaxis, :, np.newaxis])

    prediction, h_state = rnn(x, h_state)   # rnn output
    # !! next step is important !!
    h_state = h_state.data        # repack the hidden state, break the connection from last iteration

    loss = loss_func(prediction, y)         # calculate loss
    optimizer.zero_grad()                   # clear gradients for this training step
    loss.backward()                         # backpropagation, compute gradients
    optimizer.step()                        # apply gradients

    # plotting
    plt.plot(steps, y_np.flatten(), 'r-')
    plt.plot(steps, prediction.data.numpy().flatten(), 'b-')
    plt.draw(); plt.pause(0.05)

plt.ioff()
plt.show()

I've been searching online, and the matplotlib documentation recommends using %matplotlib widget; however, I found the Jupyter notebook still fails to generate the animation.

  • You won't necessarily need `%matplotlib widget` to make an animation. When citing documentation, it is best to point to it; for example, it would have been nice for you to point to where in the documentation you saw that advice, because it may apply only to a certain implementation of an animation. To answer one of your points: yes, a `.ipynb` file is different from a `.py` file. Have you tried running the `.py` file from inside the Jupyter notebook with `%run .py` (a sketch of this is just below the comments), and does any output show up? I've tried running your code inside a notebook ... – Wayne Jan 05 '23 at 21:46
  • and I just keep seeing the output update with each segment/'frame'(?), which seems to keep continuing along the x axis, so it seems your implementation just lacks what is necessary for such a plot in a notebook. Is this the exact same code you say works as a script? It may be because of settings on your computer aside from Jupyter or VS Code, because when I run it with Python on the command line, I don't see any plots generated even though it runs. – Wayne Jan 05 '23 at 21:47
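
(A quick sketch of that `%run` suggestion, in case it helps; the backend magic and the file name below are only assumptions for illustration, standing in for wherever the script was saved:)

# in a notebook cell; "rnn_sine.py" is a placeholder name for the saved script
%matplotlib notebook    # or %matplotlib widget if ipympl is installed
%run rnn_sine.py        # IPython magic that runs the script inside the notebook's kernel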

1 Answer


If you just want to see the final plot you show in the screenshot inside your Jupyter notebook, you can simply comment out the one plt.draw(); plt.pause(0.05) line in your code, and the code works when pasted into notebooks running in sessions launched from here:

#plt.draw(); plt.pause(0.05)

But your title says you want it animated.

Option that works at present only in the traditional Jupyter notebook interface, using %matplotlib notebook and pauses to control updates to fig.canvas.draw()

The simplest way I use for plot animation works with %matplotlib notebook in traditional Jupyter notebooks, not JupyterLab (at this time). I adapted yours to use that.

A Jupyter notebook with related code run is here.
Click here to launch that notebook in an active, temporary Jupyter session served via MyBinder.org, with an environment that already has Python and ALMOST everything needed to make the animation work in the notebook installed and working. I qualify it with ALMOST because there you'll first need to add a cell with %pip install torch, run it, restart the kernel, and then run the plotting code based on yours.

Once it completes, you can scroll over to the left side of the plot and click on the blue button to turn off the interactivity and release the kernel for running more code.

To learn more about this approach, work through the notebook that will come up first when you click on launch binder here. That starts with that method and covers some others that I demonstrate as well.
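
Here is a minimal sketch of that idea (not the exact code from the linked notebook): it assumes the classic notebook interface with %matplotlib notebook available, and it uses the same sin/cos segments as the question, with np.sin standing in for the RNN prediction so the snippet runs on its own:

%matplotlib notebook
import time
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 5))      # one figure, reused for every 'frame'

for step in range(100):
    # same segments as in the training loop in the question
    xs = np.linspace(step * np.pi, (step + 1) * np.pi, 10, endpoint=False)
    ax.plot(xs, np.cos(xs), 'r-')            # target curve
    ax.plot(xs, np.sin(xs), 'b-')            # stand-in for the RNN prediction
    fig.canvas.draw()                        # push the new segment to the displayed figure
    time.sleep(0.05)                         # pause so each update is visible

The point is that the figure is created once and redrawn in place via fig.canvas.draw(), instead of relying on plt.draw(); plt.pause() as in the original script.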

'Universally'-compatible way that doesn't require setting %matplotlib magic

This option below, based on here and your code, works in both the classic notebook interface and JupyterLab:

# based on https://stackoverflow.com/a/52672859/8508004
from IPython.display import clear_output
from matplotlib import pyplot as plt
import numpy as np
import collections
import time

def live_plot(data_dict, figsize=(12,5), title=''):
    clear_output(wait=True)
    plt.figure(figsize=figsize)
    #plt.plot(data_dict["steps"],data_dict["r"] , 'r-', label = "real")
    #plt.plot(data_dict["steps"],data_dict["b"] , 'b-', label = "prediction")
    for i,_ in enumerate(data_dict["steps"]):
        plt.plot(data_dict["steps"][i], list(data_dict["r"][i]) , 'r-', )
        plt.plot(data_dict["steps"][i], list(data_dict["b"][i]) , 'b-', )
    plt.title(title)
    plt.grid(True)
    #plt.legend(loc='center left') # the plot evolves to the right
    plt.show()
    time.sleep(0.2) # extend delay between adding next frame in animation

data = collections.defaultdict(list)
    
import torch
from torch import nn

# torch.manual_seed(1)    # reproducible

# Hyper Parameters
TIME_STEP = 10      # rnn time step
INPUT_SIZE = 1      # rnn input size
LR = 0.02           # learning rate

# data
steps = np.linspace(0, np.pi*2, 100, dtype=np.float32)  # float32 for converting to torch FloatTensor
x_np = np.sin(steps)
y_np = np.cos(steps)

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.RNN(
            input_size=INPUT_SIZE,
            hidden_size=32,     # rnn hidden unit
            num_layers=1,       # number of rnn layer
            batch_first=True,   # input & output will have batch size as the 1st dimension, e.g. (batch, time_step, input_size)
        )
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        # x (batch, time_step, input_size)
        # h_state (n_layers, batch, hidden_size)
        # r_out (batch, time_step, hidden_size)
        r_out, h_state = self.rnn(x, h_state)

        outs = []    # save all predictions
        for time_step in range(r_out.size(1)):    # calculate output for each time step
            outs.append(self.out(r_out[:, time_step, :]))
        return torch.stack(outs, dim=1), h_state

        # instead, for simplicity, you can replace the code above with the following
        # r_out = r_out.view(-1, 32)
        # outs = self.out(r_out)
        # outs = outs.view(-1, TIME_STEP, 1)
        # return outs, h_state
        
        # or even simpler, since nn.Linear accepts inputs with any number of
        # leading dimensions and only changes the size of the last one:
        # outs = self.out(r_out)
        # return outs, h_state

rnn = RNN()
print(rnn)

optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)   # optimize all rnn parameters
loss_func = nn.MSELoss()

h_state = None      # for initial hidden state

for step in range(100):
    start, end = step * np.pi, (step+1)*np.pi   # time range
    # use sin predicts cos
    steps = np.linspace(start, end, TIME_STEP, dtype=np.float32, endpoint=False)  # float32 for converting to torch FloatTensor
    x_np = np.sin(steps)
    y_np = np.cos(steps)

    x = torch.from_numpy(x_np[np.newaxis, :, np.newaxis])    # shape (batch, time_step, input_size)
    y = torch.from_numpy(y_np[np.newaxis, :, np.newaxis])

    prediction, h_state = rnn(x, h_state)   # rnn output
    # !! next step is important !!
    h_state = h_state.data        # repack the hidden state, break the connection from last iteration

    loss = loss_func(prediction, y)         # calculate loss
    optimizer.zero_grad()                   # clear gradients for this training step
    loss.backward()                         # backpropagation, compute gradients
    optimizer.step()                        # apply gradients

    # plotting
    data['steps'].append(list(steps))
    data['r'].append(y_np.flatten())
    data['b'].append(prediction.data.numpy().flatten())
    live_plot(data);

In addition to working in both interfaces at present, it doesn't require %matplotlib inline or any other variation of the %matplotlib magic to be set explicitly.
A Jupyter notebook with the above code run is here.
Click here to launch that notebook in an active, temporary JupyterLab session served via MyBinder.org, with an environment that already has Python and ALMOST everything needed to make the animation work in the notebook installed and working. It steps you through installing PyTorch, which is the one last thing needed there to run this code. (At the bottom of that same notebook I also added an example using FuncAnimation() together with a widget controller that allows scrubbing back and forth through the frames with a slider. Note that the widget controlling playback works even in 'static' mode when viewed on nbviewer, as you can see at the bottom here.)
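
For anyone curious what that last approach can look like, here is a rough sketch (not necessarily what the linked notebook does): it assumes the data defaultdict from the loop above has already been filled, builds the frames with FuncAnimation(), and renders them with to_jshtml(), which embeds play/pause buttons and a frame slider that keep working in static renders:

# sketch: assumes `data` (the defaultdict filled in the loop above) is available
from matplotlib import animation
from IPython.display import HTML
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 5))

def draw_frame(i):
    ax.clear()
    ax.grid(True)
    for j in range(i + 1):                              # redraw everything up to frame i
        ax.plot(data['steps'][j], data['r'][j], 'r-')   # real curve
        ax.plot(data['steps'][j], data['b'][j], 'b-')   # prediction

anim = animation.FuncAnimation(fig, draw_frame, frames=len(data['steps']))
plt.close(fig)          # avoid displaying the empty figure a second time
HTML(anim.to_jshtml())  # JS player with play/pause buttons and a frame slider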

Wayne