Questions tagged [autograd]

Autograd can automatically differentiate native Python and NumPy code; the same name is used by PyTorch's automatic differentiation engine, torch.autograd. It can handle a large subset of Python's features, including loops, conditionals, recursion, and closures, and it can take derivatives of derivatives of derivatives. The main intended application of Autograd is gradient-based optimization.

362 questions
81 votes · 3 answers

Difference between "detach()" and "with torch.no_grad()" in PyTorch?

I know about two ways to exclude elements of a computation from the gradient calculation backward(). Method 1: using with torch.no_grad(): y = reward + gamma * torch.max(net.forward(x)); loss =…
user637140 · 1,041 rep (2 gold, 10 silver, 9 bronze)
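A minimal sketch of the distinction, assuming a recent PyTorch version (all values are illustrative): detach() cuts a single tensor out of the graph, while torch.no_grad() suspends graph recording for everything in its scope.

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

# detach() returns a tensor that shares storage but is cut out of the graph;
# gradients still flow through any path that bypasses the detached tensor.
y = x * 2
z = y.detach() + x          # gradient can only flow through the `+ x` term
z.sum().backward()
print(x.grad)               # tensor([1.]) -- the detached branch contributed nothing

# torch.no_grad() disables graph construction for the whole block.
x.grad = None
with torch.no_grad():
    w = x * 2               # no graph is built at all
print(w.requires_grad)      # False
```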
70 votes · 3 answers

Evaluating PyTorch models: `with torch.no_grad` vs `model.eval()`

When I want to evaluate the performance of my model on the validation set, is it preferred to use with torch.no_grad(): or model.eval()?
Tom Hale · 40,825 rep (36 gold, 187 silver, 242 bronze)
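A small sketch of why the two are orthogonal, assuming a recent PyTorch (the Dropout layer is just an example of train/eval-dependent behaviour):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5))

# model.eval() switches layers such as Dropout and BatchNorm to inference
# behaviour, but autograd still records operations (and still uses memory).
model.eval()

# torch.no_grad() stops graph recording but does NOT change layer behaviour,
# so for evaluation the two are typically combined.
with torch.no_grad():
    out = model(torch.randn(1, 10))

print(out.requires_grad)    # False: no graph was recorded
```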
54 votes · 2 answers

PyTorch - RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed

I keep running into this error: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. I have searched the PyTorch forum, but still…
Viet Phan · 1,999 rep (3 gold, 23 silver, 40 bronze)
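The error usually means backward() ran a second time over a graph whose intermediate buffers were freed by the first pass. A minimal reproduction and fix, as a sketch:

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
y = x ** 2

# By default the first backward() frees the graph's intermediate buffers.
# retain_graph=True keeps them so a second pass is possible.
y.backward(retain_graph=True)
y.backward()                 # works only because the graph was retained above
print(x.grad)                # tensor([12.]) -- gradients accumulate: 6 + 6
```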
28 votes · 1 answer

Backward function in PyTorch

I have a question about PyTorch's backward function; I don't think I'm getting the right output: import numpy as np import torch from torch.autograd import Variable a = Variable(torch.FloatTensor([[1,2,3],[4,5,6]]), requires_grad=True) out = a *…
Elin · 305 rep (1 gold, 4 silver, 7 bronze)
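The usual source of confusion here is calling backward() on a non-scalar output. A sketch in current PyTorch style (Variable is deprecated; a plain tensor with requires_grad=True replaces it):

```python
import torch

a = torch.tensor([[1., 2., 3.], [4., 5., 6.]], requires_grad=True)
out = a * a

# For a non-scalar output, backward() needs a vector to form the
# vector-Jacobian product; all-ones gives d(sum(out))/da = 2a.
out.backward(torch.ones_like(out))
print(a.grad)   # tensor([[ 2.,  4.,  6.],
                #         [ 8., 10., 12.]])
```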
22 votes · 1 answer

Why does autograd not produce gradient for intermediate variables?

Trying to wrap my head around how gradients are represented and how autograd works: import torch from torch.autograd import Variable x = Variable(torch.Tensor([2]), requires_grad=True) y = x * x z = y * y z.backward() print(x.grad) #Variable…
foobar · 10,854 rep (18 gold, 58 silver, 66 bronze)
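By default autograd populates .grad only for leaf tensors; intermediate gradients are discarded to save memory. retain_grad() opts an intermediate tensor back in, as this sketch shows:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x
y.retain_grad()     # keep the gradient for this non-leaf tensor
z = y * y
z.backward()

print(x.grad)       # tensor([32.]) -- dz/dx = 4x^3 at x = 2
print(y.grad)       # tensor([8.])  -- dz/dy = 2y; None without retain_grad()
```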
20 votes · 2 answers

autograd.grad() for Tensor in PyTorch

I want to compute the gradient between two tensors in a net. The input X tensor (batch size x m) is sent through a set of convolutional layers which give me back an output Y tensor (batch size x n). I'm creating a new loss and I would like to know…
Xbel · 735 rep (1 gold, 10 silver, 30 bronze)
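A sketch of torch.autograd.grad between two tensors, with a toy mapping standing in for the convolutional layers (shapes are illustrative):

```python
import torch

x = torch.randn(4, 3, requires_grad=True)   # stand-in for the input batch X
y = (x ** 2).sum(dim=1)                     # stand-in for the network output Y

# torch.autograd.grad differentiates y w.r.t. x without touching .grad fields.
# For a non-scalar y, grad_outputs supplies the vector-Jacobian weighting.
(dydx,) = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))
print(dydx.shape)   # torch.Size([4, 3]); equals 2 * x for this toy mapping
```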
18 votes · 3 answers

In-place operations with PyTorch

I was wondering how to deal with in-place operations in PyTorch. As I remember, using in-place operations with autograd has always been problematic. I'm actually surprised that the code below works; even though I haven't tested it, I believe this…
MBT · 21,733 rep (19 gold, 84 silver, 102 bronze)
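A sketch of when in-place operations are safe: autograd tracks tensor versions, so an in-place write only fails at backward() time if it clobbered a value that some backward formula saved.

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x * 2
y.add_(1)            # safe: multiplying by a constant saves nothing add_ destroys
y.sum().backward()
print(x.grad)        # tensor([2., 2.])

a = torch.tensor([1.0, 2.0], requires_grad=True)
b = a.sigmoid()      # sigmoid's backward reuses its *output* ...
b.mul_(2)            # ... so modifying the output in place invalidates it
try:
    b.sum().backward()
except RuntimeError as e:
    print(e)         # "... modified by an inplace operation"
```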
15 votes · 1 answer

How to use autograd.gradcheck in PyTorch?

The documentation does not include any example use case of gradcheck; where would it be useful?
apostofes · 2,959 rep (5 gold, 16 silver, 31 bronze)
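gradcheck compares analytical gradients against finite differences; it is mainly useful for validating the backward() of a custom autograd.Function. A sketch using a stock layer as the function under test (double precision is needed for the numerics):

```python
import torch
from torch.autograd import gradcheck

linear = torch.nn.Linear(3, 2).double()
inp = torch.randn(4, 3, dtype=torch.double, requires_grad=True)

# Returns True if analytical and numerical gradients agree within tolerances.
ok = gradcheck(lambda x: linear(x), (inp,), eps=1e-6, atol=1e-4)
print(ok)   # True
```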
14 votes · 2 answers

Higher-order gradients in PyTorch

I have implemented the following Jacobian function in PyTorch. Unless I have made a mistake, it computes the Jacobian of any tensor w.r.t. inputs of any dimension: import torch import torch.autograd as ag def nd_range(stop, dims = None): if…
user650261 · 2,115 rep (5 gold, 24 silver, 47 bronze)
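The key ingredient for higher-order derivatives is create_graph=True, which makes the gradient itself differentiable; recent PyTorch versions also ship torch.autograd.functional.jacobian and hessian helpers. A minimal sketch:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = (x ** 3).sum()

# create_graph=True records the gradient computation so it can be
# differentiated again.
(g,) = torch.autograd.grad(y, x, create_graph=True)   # g = 3x^2
(h,) = torch.autograd.grad(g.sum(), x)                # h = 6x (Hessian diagonal here)
print(g)   # tensor([ 3., 12.], grad_fn=...)
print(h)   # tensor([ 6., 12.])
```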
12 votes · 2 answers

Difference between autograd.grad and autograd.backward?

Suppose I have my custom loss function, and I want to fit the solution of some differential equation with the help of my neural network. So in each forward pass I calculate the output of my neural net and then calculate the loss by taking the MSE…
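In short: autograd.backward accumulates into the leaves' .grad attributes, while autograd.grad returns the gradients directly and leaves .grad untouched, which is convenient for loss terms built from gradients. A sketch:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

torch.autograd.backward((x ** 2).sum())         # same effect as loss.backward()
print(x.grad)                                   # tensor([4.])

x.grad = None
(g,) = torch.autograd.grad((x ** 2).sum(), x)   # returned, not accumulated
print(g, x.grad)                                # tensor([4.]) None
```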
12 votes · 2 answers

PyTorch autograd -- grad can be implicitly created only for scalar outputs

I am using the autograd tool in PyTorch, and have found myself in a situation where I need to access the values in a 1D tensor by means of an integer index. Something like this: def basic_fun(x_cloned): res = [] for i in range(len(x)): …
mhyousefi · 1,064 rep (2 gold, 16 silver, 30 bronze)
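The error appears when backward() is called on a tensor with more than one element and no explicit gradient argument. Two standard fixes, as a sketch:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# (x * 2).backward() would raise:
# "grad can be implicitly created only for scalar outputs"

# Fix 1: reduce to a scalar first.
(x * 2).sum().backward()
print(x.grad)        # tensor([2., 2., 2.])

# Fix 2: pass an explicit gradient with the output's shape.
x.grad = None
(x * 2).backward(gradient=torch.ones_like(x))
print(x.grad)        # tensor([2., 2., 2.])
```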
10 votes · 1 answer

How to apply gradients manually in PyTorch

I'm starting to learn PyTorch and was trying something very simple: moving a randomly initialized vector of size 5 toward a target vector [1, 2, 3, 4, 5]. But my distance is not decreasing, and my vector x just goes crazy. No idea what I…
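The usual pitfalls are updating the vector inside the recorded graph and forgetting to zero the accumulated gradient. A working sketch of the described task (learning rate and step count are arbitrary):

```python
import torch

target = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
x = torch.randn(5, requires_grad=True)
lr = 0.1

for _ in range(100):
    loss = (x - target).pow(2).sum()
    loss.backward()
    with torch.no_grad():    # the update itself must not be recorded
        x -= lr * x.grad
    x.grad.zero_()           # otherwise gradients accumulate across steps

print(x)   # close to [1., 2., 3., 4., 5.]
```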
10 votes · 2 answers

PyTorch custom layer "is not a Module subclass"

I am new to PyTorch, trying it out after using a different toolkit for a while. I would like to understand how to program custom layers and functions. And as a simple test, I wrote this: class Testme(nn.Module): ## it _is_ a subclass of Module…
forgotmysocks · 355 rep (1 gold, 3 silver, 9 bronze)
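That error is most often raised when the class itself, rather than an instance, is handed to nn.Sequential. A sketch of a minimal custom layer done correctly (the layer body is illustrative):

```python
import torch
import torch.nn as nn

class Testme(nn.Module):
    def __init__(self):
        super().__init__()          # required so module bookkeeping is set up

    def forward(self, x):
        return x / 2.0

model = nn.Sequential(nn.Linear(4, 4), Testme())   # Testme(), not Testme
print(model(torch.randn(2, 4)).shape)              # torch.Size([2, 4])
```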
9 votes · 1 answer

PyTorch autograd gives different gradients when using .clamp instead of torch.relu

I'm still working on my understanding of the PyTorch autograd system. One thing I'm struggling with is understanding why .clamp(min=0) and nn.functional.relu() seem to have different backward passes. It's especially confusing as .clamp is used…
DaFlooo · 91 rep (1 silver, 6 bronze)
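The two match everywhere except exactly at the boundary: relu's backward takes the subgradient 0 at x = 0, while clamp passes the gradient through. A sketch (the boundary behaviour shown is as reported in the answers and may be version-dependent):

```python
import torch

x1 = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
torch.relu(x1).sum().backward()
print(x1.grad)    # tensor([0., 0., 1.]) -- gradient 0 at exactly 0

x2 = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
x2.clamp(min=0).sum().backward()
print(x2.grad)    # tensor([0., 1., 1.]) -- gradient passes at the boundary
```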
8 votes · 1 answer

PyTorch warning about using a non-full backward hook when the forward contains multiple autograd Nodes

After a recent upgrade, when running my PyTorch loop, I now get the warning "Using a non-full backward hook when the forward contains multiple autograd Nodes". The training still runs and completes, but I am unsure where I am supposed to place the…
IllyShaieb · 111 rep (2 silver, 8 bronze)
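The warning points at the deprecated nn.Module.register_backward_hook, which can report incorrect grad_input when a module's forward spans several autograd nodes. register_full_backward_hook (PyTorch 1.8+) is the supported replacement; a sketch:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))

def hook(module, grad_input, grad_output):
    # Fires during backward, once per hooked module.
    print(type(module).__name__, grad_output[0].shape)

for m in model:
    m.register_full_backward_hook(hook)   # instead of register_backward_hook

model(torch.randn(2, 4)).sum().backward()
```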