I am studying the autograd tutorial from the official PyTorch docs and trying to understand its content, starting from the section "You can do many crazy things with autograd!":
x = torch.randn(3, requires_grad=True)
y = x * 2
i = 0
while y.data.norm() < 100:
    y = y * 2
    i += 1
print(x)
print(y)
print(i)
Output:
tensor([-0.6933, 0.1126, 0.3913], requires_grad=True)
tensor([-88.7455, 14.4082, 50.0871], grad_fn=<MulBackward>)
6
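To check what the loop actually does, here is a torch-free sketch (my own, not from the tutorial) that replays the doubling with the same starting values printed above: y starts at 2x and is doubled until its norm reaches 100, so after i = 6 extra doublings we get y = 2^7 * x.

```python
import math

# Starting point copied from the printed x in the tutorial output above.
x = [-0.6933, 0.1126, 0.3913]

def norm(v):
    # Euclidean norm, the same quantity y.data.norm() returns.
    return math.sqrt(sum(c * c for c in v))

y = [2 * c for c in x]   # y = x * 2
i = 0
while norm(y) < 100:     # keep doubling until the norm reaches 100
    y = [2 * c for c in y]
    i += 1

print(i)   # 6 doublings inside the loop, as in the tutorial output
print(y)   # equals 2**7 * x, approximately [-88.74, 14.41, 50.09]
```

So y is still a linear function of x; the loop only changes the constant factor in front of x.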
Then it finds the derivative w.r.t. x at the point [0.1, 1.0, 0.0001]:
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
Output:
tensor([ 12.8000, 128.0000, 0.0128])
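Numerically, the result above is just the `gradients` vector scaled by 2^7. Here is a torch-free sketch (my own check, assuming the loop ran i = 6 times as in the output above) reproducing x.grad by hand:

```python
v = [0.1, 1.0, 0.0001]        # the `gradients` vector passed to y.backward()
scale = 2 ** 7                # y = 2**7 * x after the 6 doublings of y = 2*x
grad = [scale * c for c in v] # elementwise scaling reproduces x.grad
print(grad)                   # approximately [12.8, 128.0, 0.0128]
```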
In my understanding, i equals 6, so y = (2x)^7, and the derivative I get is different from PyTorch's: it has an extra factor of 7 when I substitute the values into my derivative.
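Written out, the derivative I computed from my assumption y = (2x)^7 would be:

```latex
\frac{dy}{dx} = 7\,(2x)^{6} \cdot 2 = 14\,(2x)^{6}
```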
The answer from PyTorch, however, seems to come from simply substituting the given point into dy/dx = 2^7 * x.
Question:
How do I derive the derivative that PyTorch computes here?