I am brand new to PyTorch and want to do what I assume is a very simple thing but am having a lot of difficulty.
I have the function sin(x) * cos(x) + x^2
and I want to get the derivative of that function at any point.
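For reference, working the derivative out by hand (if my calculus is right): d/dx [sin(x)cos(x) + x^2] = cos^2(x) - sin^2(x) + 2x = cos(2x) + 2x, so at x = 4 I expect cos(8) + 8 ≈ 7.8545.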
If I do this with a single point, it works perfectly:

import torch

# track gradients on a scalar input
x = torch.autograd.Variable(torch.Tensor([4]), requires_grad=True)
y = torch.sin(x) * torch.cos(x) + torch.pow(x, 2)
y.backward()
print(x.grad)  # outputs tensor([7.8545])
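(As an aside, I gather that torch.autograd.Variable is deprecated in newer PyTorch releases, and the same thing seems to work with a plain tensor, though I may be wrong about that:)

x = torch.tensor([4.], requires_grad=True)  # no Variable wrapper needed
y = torch.sin(x) * torch.cos(x) + torch.pow(x, 2)
y.backward()
print(x.grad)  # tensor([7.8545])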
However, I want to be able to pass in a vector as x and for it to evaluate the derivative element-wise. For example:
Input: [4., 4., 4.]
Output: tensor([7.8545, 7.8545, 7.8545])
But I can't seem to get this working.
I tried simply doing:
x = torch.tensor([4., 4., 4.], requires_grad=True)
out = torch.sin(x) * torch.cos(x) + x.pow(2)
out.backward()  # fails here: out is a vector, not a scalar
print(x.grad)
But I get the error "RuntimeError: grad can be implicitly created only for scalar outputs"
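From searching around, I suspect backward() needs an explicit gradient argument when the output is not a scalar, so I also tried passing a tensor of ones (this is just a guess on my part):

out.backward(torch.ones_like(out))  # one gradient weight per output element
print(x.grad)

but I don't understand why that argument is necessary or whether it's the right approach.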
How do I adjust this code for vectors?
Thanks in advance,