
I am studying the autograd tutorial from the official PyTorch docs and am trying to understand this part, starting from "You can do many crazy things with autograd!":

import torch

x = torch.randn(3, requires_grad=True)
y = x * 2

i = 0
while y.data.norm() < 100:
    y = y * 2
    i += 1

print(x)
print(y)
print(i)

Output:

tensor([-0.6933,  0.1126,  0.3913], requires_grad=True)
tensor([-88.7455,  14.4082,  50.0871], grad_fn=<MulBackward>)
6

Find the derivative w.r.t. x at the point [0.1, 1.0, 0.0001]:

gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)

Output:

tensor([ 12.8000, 128.0000,   0.0128])

In my understanding, i equals 6, so y = (2x)^7, but that derivative is different from what PyTorch gives: when I substitute the values into my derivative, it has an extra factor of 7.

PyTorch's answer, by contrast, looks as if it simply substitutes the given point into dy/dx = 2^7 * x.

Question:

How do I derive this derivative?

References:

  • How to use PyTorch to calculate partial derivatives?
  • PyTorch Autograd automatic differentiation feature


1 Answer


If you look closely at the expressions, it turns out that y = x * 2^7, whose derivative w.r.t. x is the constant 2^7 = 128. What backward(gradients) computes is not dy/dx evaluated at a point, but the product of that derivative with the vector you pass in, so x.grad = 128 * [0.1, 1.0, 0.0001] = [12.8, 128.0, 0.0128].

  • `y = x * 2^7` has derivative `2^7` w.r.t. `x`, since `x` is first degree. I was thinking about the chain rule, but I couldn't figure it out, because there are only two variables. – joe Oct 06 '18 at 04:23
  • @backothermoon There is no need for the chain rule here, since only `x` and `y` are involved; the total derivative and the partial derivative are the same thing. – joe Oct 09 '18 at 02:47