I have been trying out the numpy.gradient function recently, but its behavior is a little strange to me. I created an array of values and applied numpy.gradient to it, but the results look odd and unrelated to the input, whereas numpy.diff gives the values I expect.
So, after reading the documentation for numpy.gradient, I see that it uses a distance of 1 along the desired dimension by default.
This is what I mean:
import numpy as np

a = np.array([10, 15, 13, 24, 15, 36, 17, 28, 39])

np.gradient(a)
# Got this: array([ 5. ,  1.5,  4.5,  1. ,  6. ,  1. , -4. , 11. , 11. ])

np.diff(a)
# Got this: array([  5,  -2,  11,  -9,  21, -19,  11,  11])
I don't understand where the values in the first result come from. If the default distance really is 1, I would have expected the same results as numpy.diff.
Could anyone explain what distance means here? Is it relative to the array index or to the values in the array? If it depends on the values, does that mean numpy.gradient cannot be used with images, since the differences between neighbouring pixel values are not fixed?
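For context, this is the kind of 2-D use I have in mind (a tiny made-up grayscale patch, not a real image):

# Small fake "image" just to see what np.gradient does per axis
img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=float)

gy, gx = np.gradient(img)   # gradient along axis 0 (rows), then along axis 1 (columns)
print(gy)                   # differences between neighbouring rows
print(gx)                   # differences between neighbouring columns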