
I keep running into this error:

    RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

I have searched the PyTorch forum, but I still can't figure out what I have done wrong in my custom loss function. My model is an nn.GRU, and here is my custom loss function:

def _loss(outputs, session, items):  # `items` is a dict containing the embedding of every item
    def f(output, target):
        pos = torch.from_numpy(np.array([items[target["click"]]])).float()
        neg = torch.from_numpy(np.array([items[idx] for idx in target["suggest_list"] if idx != target["click"]])).float()
        if USE_CUDA:
            pos, neg = pos.cuda(), neg.cuda()
        pos, neg = Variable(pos), Variable(neg)

        pos = F.cosine_similarity(output, pos)
        if neg.size()[0] == 0:
            return torch.mean(F.logsigmoid(pos))
        neg = F.cosine_similarity(output.expand_as(neg), neg)

        return torch.mean(F.logsigmoid(pos - neg))

    loss = list(map(f, outputs, session))  # list() so torch.cat gets a sequence (map is lazy under Python 3)
    return -torch.mean(torch.cat(loss))

Training code:

    # zero the parameter gradients
    model.zero_grad()

    # forward + backward + optimize
    outputs, hidden = model(inputs, hidden)
    loss = _loss(outputs, session, items)
    acc_loss += loss.data[0]

    loss.backward()
    # Add parameters' gradients to their values, multiplied by learning rate
    for p in model.parameters():
        p.data.add_(-learning_rate, p.grad.data)
asked by Viet Phan, edited by Eric O. Lebigot

2 Answers


The problem was in my training loop: it doesn't detach or repackage the hidden state in between batches. Because of that, loss.backward() tries to back-propagate all the way through to the start of time, which works for the first batch but not for the second, because the graph for the first batch has already been discarded.
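
A minimal sketch of that situation, using a toy nn.GRU with made-up sizes (this is not the original model or data):

    import torch
    import torch.nn as nn

    gru = nn.GRU(input_size=4, hidden_size=8)
    hidden = torch.zeros(1, 1, 8)

    for step in range(2):
        inputs = torch.randn(5, 1, 4)            # stand-in for a real batch
        outputs, hidden = gru(inputs, hidden)    # `hidden` still carries the previous batch's graph
        loss = outputs.sum()                     # stand-in for the custom loss
        loss.backward()                          # raises the same RuntimeError on the second iteration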

There are two possible solutions:

  1. Detach/repackage the hidden state in between batches. There are (at least) three ways to do this (I chose this solution; see the example below):

    hidden.detach_()

    (or equivalently hidden = hidden.detach()).

  2. Replace loss.backward() with loss.backward(retain_graph=True), but know that each successive batch will take more time than the previous one, because it will have to back-propagate all the way through to the start of the first batch.

Example
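
A minimal sketch of solution 1, detaching the hidden state at the top of every batch (the optimizer and the toy loop here are illustrative, not the original training code):

    import torch
    import torch.nn as nn

    gru = nn.GRU(input_size=4, hidden_size=8)
    optimizer = torch.optim.SGD(gru.parameters(), lr=0.01)
    hidden = torch.zeros(1, 1, 8)

    for step in range(10):
        inputs = torch.randn(5, 1, 4)    # stand-in for a real batch
        hidden = hidden.detach()         # cut the link to the previous batch's graph
        optimizer.zero_grad()
        outputs, hidden = gru(inputs, hidden)
        loss = outputs.sum()             # stand-in for the custom loss
        loss.backward()                  # only traverses the current batch's graph
        optimizer.step()

Solution 2 would keep the hidden state attached and call loss.backward(retain_graph=True) instead, at the cost of a graph (and a backward pass) that keeps growing with every batch.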

answered by Viet Phan, edited by Eric O. Lebigot
  • The [PyTorch tutorial](http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging) on LSTMs suggests something along the following lines: `model.hidden = model.init_hidden()`. You need to clear out the hidden state of the LSTM, detaching it from its history on the last instance. – nikhilweee Apr 23 '18 at 06:08
  • Variable is deprecated now (https://pytorch.org/docs/stable/autograd.html#variable-deprecated). – Tengerye Dec 21 '18 at 01:45
  • Regarding solution 1: Why do we need to detach two times? Isn't detach_() an in-place operation that makes `hidden = hidden.detach()` unnecessary? – Tom Dörr Feb 04 '20 at 19:30
  • It is. He must have meant one or the other. – stason Feb 16 '20 at 04:49
  • The PyTorch 1.x+ implementation example is [here](https://github.com/pytorch/examples/blob/master/word_language_model/main.py#L110). – stason Feb 16 '20 at 04:50
  • @nikhilweee your link doesn't include any `init_hidden()` anymore. – Peyman Jan 05 '21 at 20:28
  • @Peyman You're right. I guess my solution was only valid for PyTorch < 1.0. – nikhilweee Jan 06 '21 at 16:55

I had this error too. I was sometimes feeding the same tensor as an input partway through my model. Calling .detach() on that tensor got rid of the error.

That tensor wasn't something I was training on, and I didn't want gradients on it. Calling detach() takes it off the graph, so it isn't considered by PyTorch's backward().
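
A rough sketch of that kind of situation with placeholder modules (not the original model): a tensor computed once before the loop is fed in on every step, and detaching it is what avoids the error.

    import torch
    import torch.nn as nn

    encoder = nn.Linear(4, 8)
    head = nn.Linear(8, 2)

    # Computed once and reused on every step; no gradients are wanted through it.
    features = encoder(torch.randn(1, 4))

    for step in range(3):
        out = head(features.detach())   # detach() keeps the encoder's old graph out of backward()
        out.sum().backward()            # without detach(), the second step raises the RuntimeError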

answered by brando f