76

I know what the gradient of a (mathematical) function is, so I feel like I should know what numpy.gradient does. But I don't. The documentation is not really helpful either:

Return the gradient of an N-dimensional array.

What is the gradient of an array? When is numpy.gradient useful?

usual me
  • 8,338
  • 10
  • 52
  • 95
  • 3
    The docs do give a more detailed description: `The gradient is computed using central differences in the interior and first differences at the boundaries. The returned gradient hence has the same shape as the input array.` Take a look at http://en.wikipedia.org/wiki/Finite_difference_method. – Praveen Jul 08 '14 at 13:56
  • 3
    Just as you can define the discrete derivative of a one-dimensional array ((x[i+1]-x[i])/h in the simplest case, with h typically 1), you can define the discrete gradient; it's often used in image algorithms (see http://en.wikipedia.org/wiki/Image_gradient). – Matteo Italia Jul 08 '14 at 13:56

4 Answers

178

Also in the documentation¹:

>>> y = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> j = np.gradient(y)
>>> j 
array([ 1. ,  1.5,  2.5,  3.5,  4.5,  5. ])
  • Gradient is defined as (change in y)/(change in x).

  • x, here, is the list index, so the difference between adjacent values is 1.

  • At the boundaries, the first difference is calculated. This means that at each end of the array, the gradient given is simply the difference between the two end values (divided by 1).

  • Away from the boundaries, the gradient for a particular index is given by taking the difference between the values on either side and dividing by 2.

So, the gradient of y, above, is calculated thus:

j[0] = (y[1]-y[0])/1 = (2-1)/1  = 1
j[1] = (y[2]-y[0])/2 = (4-1)/2  = 1.5
j[2] = (y[3]-y[1])/2 = (7-2)/2  = 2.5
j[3] = (y[4]-y[2])/2 = (11-4)/2 = 3.5
j[4] = (y[5]-y[3])/2 = (16-7)/2 = 4.5
j[5] = (y[5]-y[4])/1 = (16-11)/1 = 5

You could, for example, find the minima of the absolute values in the resulting array to locate the turning points of a curve.
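
Here is a minimal sketch of that idea (the sample curve and names are made up for illustration):

import numpy as np

x = np.linspace(-2, 2, 41)        # sample points, spacing 0.1
y = (x - 0.5) ** 2                # curve with a minimum at x = 0.5
dy = np.gradient(y, x)            # numerical derivative dy/dx

turning_index = np.argmin(np.abs(dy))   # index where the gradient is closest to zero
print(x[turning_index])                 # approximately 0.5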


¹The array is actually called x in the example in the docs; I've changed it to y to avoid confusion.

SiHa
  • 7,830
  • 13
  • 34
  • 43
  • 2
    Can somebody explain why the returned array has the same size as the input array no matter the spacing parameter? I'd expect it to be smaller with larger spacing – Alaa M. Jul 13 '19 at 11:22
  • 3
    @AlaaM. In the documentation's example with a spacing of `2`, the values are simply half those of the result with no spacing specified. So it looks like the spacing is the distance between each point on the notional x-axis, rather than the default of `1`, and the array will have the same size. – SiHa Jul 15 '19 at 09:31
  • @SiHa The bottom of this answer should really be in the numpy documentation. This is very helpful, thank you. – Decker Mar 16 '20 at 17:54
  • As I understand it, np.gradient() can be used to return the first derivative of a signal - is that correct? To compute the second derivative, is it OK to call np.gradient() twice, e.g. derivative_2 = np.gradient(np.gradient(signal))? – epifanio Aug 10 '22 at 11:54
  • @epifanio I would imagine so, yes, although my mathematics knowledge is very rusty! – SiHa Aug 11 '22 at 07:49
27

Here is what is going on. The Taylor series expansion guides us on how to approximate the derivative, given the value at nearby points. The simplest comes from the first-order Taylor series expansion for a C^2 function (two continuous derivatives)...

  • f(x+h) = f(x) + f'(x)h + f''(xi)h^2/2.

One can solve for f'(x)...

  • f'(x) = [f(x+h) - f(x)]/h + O(h).

Can we do better? Yes indeed. If we assume C^3, then the Taylor expansion is

  • f(x+h) = f(x) + f'(x)h + f''(x)h^2/2 + f'''(xi) h^3/6, and
  • f(x-h) = f(x) - f'(x)h + f''(x)h^2/2 - f'''(xi) h^3/6.

Subtracting these (both the h^0 and h^2 terms drop out!) and solving for f'(x):

  • f'(x) = [f(x+h) - f(x-h)]/(2h) + O(h^2).

So, if we have a discretized function defined on equidistant points x = x_0, x_0 + h (= x_1), ..., x_n = x_0 + h*n, then numpy gradient will yield a "derivative" array, using the first-order estimate at the ends and the better estimate in the middle.
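
As a quick numerical check of those error orders (my own sketch, using np.sin as a test function):

import numpy as np

# Exact derivative of sin at x = 1 is cos(1).
x, h = 1.0, 0.01
forward = (np.sin(x + h) - np.sin(x)) / h            # first difference, error O(h)
central = (np.sin(x + h) - np.sin(x - h)) / (2 * h)  # central difference, error O(h^2)

print(abs(forward - np.cos(x)))   # roughly 4e-3
print(abs(central - np.cos(x)))   # roughly 9e-6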

Example 1. If you don't specify any spacing, the interval is assumed to be 1. So if you call

f = np.array([5, 7, 4, 8])

what you are saying is that f(0) = 5, f(1) = 7, f(2) = 4, and f(3) = 8. Then

np.gradient(f) 

will be: f'(0) = (7 - 5)/1 = 2, f'(1) = (4 - 5)/(2*1) = -0.5, f'(2) = (8 - 7)/(2*1) = 0.5, f'(3) = (8 - 4)/1 = 4.
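
For reference, a minimal check (not from the original answer) that np.gradient reproduces those numbers:

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
print(np.gradient(f))   # [ 2.  -0.5  0.5  4. ]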

Example 2. If you specify a single spacing, the spacing is uniform but not necessarily 1.

For example, if you call

np.gradient(f, 0.5)

this is saying that h = 0.5, not 1, i.e., the function is really f(0) = 5, f(0.5) = 7, f(1.0) = 4, f(1.5) = 8. The net effect is to replace h = 1 with h = 0.5 and all the results will be doubled.
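
A quick sketch of the doubling, reusing the same f as in Example 1:

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
print(np.gradient(f, 0.5))   # [ 4. -1.  1.  8.], each value double the h = 1 result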

Example 3. Suppose the discretized function f(x) is not defined on uniformly spaced points, for instance f(0) = 5, f(1) = 7, f(3) = 4, f(3.5) = 8. Then numpy gradient uses a messier discretized differentiation formula, and you get the discretized derivatives by calling

np.gradient(f, np.array([0,1,3,3.5]))

Lastly, if your input is a 2-D array, then you are thinking of a function f of x and y defined on a grid. numpy gradient will then output the arrays of "discretized" partial derivatives in x and y.
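
For instance, here is a minimal 2-D sketch (the sample grid is made up for illustration):

import numpy as np

z = np.array([[1., 2., 4.],
              [2., 4., 8.]])   # f sampled on a 2 x 3 grid

d_axis0, d_axis1 = np.gradient(z)   # one array of partial derivatives per axis
print(d_axis0)   # [[1. 2. 4.]
                 #  [1. 2. 4.]]
print(d_axis1)   # [[1.  1.5 2. ]
                 #  [2.  3.  4. ]]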

24

The gradient is computed using central differences in the interior and first differences at the boundaries.

and

The default distance is 1

This means that in the interior it is computed as

f'(x) ≈ [f(x + h/2) - f(x - h/2)] / h

where h = 1.0

and at the boundaries

f'(x) ≈ [f(x + h) - f(x)] / h
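
For concreteness, a small sketch (my own example array) showing what np.gradient returns with the default spacing:

import numpy as np

y = np.array([1., 2., 4., 7.])
print(np.gradient(y))   # [1.  1.5 2.5 3. ]
# interior: (y[i+1] - y[i-1]) / 2, boundaries: plain first differences with h = 1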

4pie0
  • 29,204
  • 9
  • 82
  • 118
  • 5
    Are you sure *h = 1*? Because it means it is *f(x + 0.5) - f(x - 0.5)*, so if we take x = 2 it becomes *f(2.5) - f(1.5)*, if *f* is a 1-d array you can't have non-integer x. Looks like it is *f(x + 1) - f(x - 1)* actually. – Andrey Feb 28 '19 at 13:40
  • 16
    I can't see your formula in dark mode theme. – asrulsibaoel Feb 25 '21 at 02:33
2

Think of an N-dimensional array as a matrix. Then the gradient is nothing other than matrix differentiation.

For a good explanation, look at the gradient description in the MATLAB documentation.

Robert Zaremba
  • 8,081
  • 7
  • 47
  • 78