
I'm getting the error below. Could someone show me where the problem is?

PyTorch is the latest version. I tried changing the inputs and the initial hidden state into Variable(), but it does not work.



Traceback (most recent call last):
  File " learnPytorch_lstm_short.py", line 81, in <module>
    main()
  File " learnPytorch_lstm_short.py", line 77, in main
    neural_network()
  File " learnPytorch_lstm_short.py", line 62, in neural_network
    loss.backward() 
  File "/Users/xxx/opt/anaconda3/envs/torch_learn/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/Users/xxx/opt/anaconda3/envs/torch_learn/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

Here is the code file:

import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torch.autograd import Variable


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        self.rnn = nn.LSTMCell(input_size=10,
                               hidden_size=10)
                                

    def forward(self, x, hx, cx):
        output = []
        # run the LSTM cell over the sequence, one time step at a time
        for i in range(x.shape[0]):
            hx, cx = self.rnn(x[i], (hx, cx))
            output.append(hx)
        output = torch.stack(output, dim=0)
        return output, hx, cx


def neural_network():

    net = Net()
    net = net.float()
    net.zero_grad()
    criterion = nn.MSELoss() #  nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)



    hx = Variable(torch.randn(3, 10))
    cx = Variable(torch.randn(3, 10))


    for epoch in range(1):
        running_loss = 0
        for i in range(20):

            inputs = Variable(torch.randn(10, 3, 10))  # time steps, batch, hidden size
            labels = Variable(torch.randn(10, 3, 10))

            # print("input shape ", inputs.shape)
            optimizer.zero_grad()

            
            outputs, hx, cx = net(inputs, hx, cx)

            loss = criterion(outputs, labels)
            
            loss.backward() 

            optimizer.step()

            running_loss += loss.item() 

            
            if i % 2 == 1:
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2))
                running_loss = 0.0 
    
    print('Finished training ')

    
def main():
    neural_network()


if __name__ == "__main__":
    main()


Liam
  • Does this answer your question? [Pytorch - RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed](https://stackoverflow.com/questions/48274929/pytorch-runtimeerror-trying-to-backward-through-the-graph-a-second-time-but). Especially refer to the first comment under its accepted answer. – Ivan Jun 22 '22 at 21:05

1 Answer


Detach the hidden/cell state before passing it as input again. The hx and cx returned by the network are still attached to the computation graph built in the previous iteration; calling loss.backward() frees that graph's buffers, so the next iteration's backward() fails when it tries to traverse them. Detaching cuts the state loose from the old graph:

outputs, hx, cx = net(inputs, hx.detach(), cx.detach())
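
As a minimal sketch, the inner loop in your neural_network() then becomes the following (the Variable wrappers are also dropped here; see the note below):

        for i in range(20):
            inputs = torch.randn(10, 3, 10)
            labels = torch.randn(10, 3, 10)

            optimizer.zero_grad()

            # detach() severs hx/cx from the previous iteration's graph,
            # so backward() only traverses the current iteration's graph
            outputs, hx, cx = net(inputs, hx.detach(), cx.detach())

            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()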

Also, Variable has been deprecated since PyTorch 0.4; in your case just remove the Variable wrapper, e.g. hx = torch.randn(3, 10). If you do need a gradient, add requires_grad=True, e.g. hx = torch.randn(3, 10, requires_grad=True).
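
As a sketch, the state initialization would then read (the requires_grad variant only matters if you want gradients with respect to the initial state):

    hx = torch.randn(3, 10)
    cx = torch.randn(3, 10)
    # or, if gradients w.r.t. the initial state are needed:
    # hx = torch.randn(3, 10, requires_grad=True)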

Bhupen