interp1d returns a value that matches its input in shape, after wrapping the input in np.array() if needed:
In [324]: f([1,2,3])
Out[324]: array([ 5., 7., 9.])
In [325]: f([2.5])
Out[325]: array([ 7.75])
In [326]: f(2.5)
Out[326]: array(7.75)
In [327]: f(np.array(2.5))
Out[327]: array(7.75)
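The transcript above presumes an interpolator f built beforehand. A minimal sketch that reproduces the shape behavior (the original data isn't shown, so the hypothetical y here gives slightly different interpolated values between the nodes):

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical data -- chosen so the node values match the transcript.
x = np.arange(5)
y = 2 * x + 3                 # [3, 5, 7, 9, 11]
f = interp1d(x, y)

print(f([1, 2, 3]))           # array([5., 7., 9.]) -- shape (3,)
print(f([2.5]))               # shape (1,) array
print(f(2.5))                 # 0d array
print(f(np.array(2.5)))       # also a 0d array
```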
Many numpy operations do return scalars instead of 0d arrays:
In [330]: np.arange(3).sum()
Out[330]: 3
Though, strictly speaking, it returns a numpy scalar object:
In [341]: type(np.arange(3).sum())
Out[341]: numpy.int32
which does have a shape of () and an ndim of 0.
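A quick check of that scalar's properties (the exact integer type, int32 or int64, is platform dependent):

```python
import numpy as np

s = np.arange(3).sum()
print(type(s))                     # a numpy scalar type, e.g. numpy.int64
print(s.shape)                     # ()
print(s.ndim)                      # 0
print(isinstance(s, np.ndarray))   # False: a scalar, not a 0d array
```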
interp1d, by contrast, returns an array:
In [344]: type(f(2.5))
Out[344]: numpy.ndarray
You can extract the value with [()] indexing:
In [345]: f(2.5)[()]
Out[345]: 7.75
In [346]: type(f(2.5)[()])
Out[346]: numpy.float64
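The same [()] trick works on any 0d array; .item() goes one step further and returns a plain Python float:

```python
import numpy as np

a = np.array(7.75)      # a 0d array
v = a[()]               # indexing with an empty tuple extracts the scalar
print(type(v))          # numpy.float64
print(a.item())         # a Python float
print(type(a.item()))   # <class 'float'>
```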
This may just be an oversight in the scipy
code. How often do people want to interpolate at just one point? Isn't interpolating over a regular grid of points more common?
==================
The documentation for f.__call__ is quite explicit about returning an array:
Evaluate the interpolant
Parameters
----------
x : array_like
Points to evaluate the interpolant at.
Returns
-------
y : array_like
Interpolated values. Shape is determined by replacing
the interpolation axis in the original array with the shape of x.
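That shape rule matters once y is multidimensional. A sketch with a hypothetical (2, 4) array, interpolating along axis=1: the interpolation axis is replaced by the shape of x, so a scalar x yields shape (2,) and a length-3 x yields (2, 3):

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(4)                   # interpolation axis has length 4
y = np.arange(8).reshape(2, 4)     # shape (2, 4)
f = interp1d(x, y, axis=1)

print(f(2.5).shape)                # (2,)  -- axis 1 replaced by x's shape, ()
print(f([0.5, 1.5, 2.5]).shape)    # (2, 3)
```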
===============
The other side of the question is why numpy even has a 0d array. The linked answer is probably sufficient. But often the question is asked by people who are used to MATLAB. In MATLAB nearly everything is 2d; there aren't any (true) scalars. MATLAB now has structures, cells, and matrices with more than 2 dimensions, but I recall a time (in the 1990s) when it didn't have those. Everything, literally, was a 2d matrix.
The np.matrix class approximates that MATLAB case, fixing its arrays at 2d. But it does have a _collapse method that can return a 'scalar'.
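You can see _collapse at work indirectly through reductions: with axis=None, np.matrix reductions collapse to a scalar, while an explicit axis keeps the 2d matrix shape. (np.matrix is discouraged in current numpy, so this is for illustration only.)

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
print(m.sum())           # 10 -- axis=None: _collapse returns a scalar
print(m.sum(axis=0))     # [[4 6]] -- with an axis, still a 2d matrix
print(m.sum(axis=0).shape)   # (1, 2)
```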