Here's a simple method (linear interpolation), and I'll highlight its limitations as I go: it's great since it requires no complex analysis, but if the data is sparse or deviates significantly from linear behavior, it's a poor approximation.
>>> import matplotlib.pyplot as plt
>>> plt.figure()
<matplotlib.figure.Figure object at 0x7f643a794610>
>>> import numpy as np
>>> bins = np.arange(10)
>>> quad = bins**2
>>> lin = 7*bins-12
>>> plt.plot(bins, quad, color="blue")
[<matplotlib.lines.Line2D object at 0x7f643298e710>]
>>> plt.plot(bins, lin, color="red")
[<matplotlib.lines.Line2D object at 0x7f64421f2310>]
>>> plt.show()

Here I show a plot of the quadratic function f(x) = x^2 (blue) and the linear fit through the points (3, 9) and (4, 16) (red).
I can approximate this easily by getting the slope:
m = (y2-y1)/(x2-x1) = (16-9)/(4-3) = 7
I can then find the linear function by solving for the intercept at a known point; I'll use (3, 9):
f(x) = 7x + b
9 = 7(3) + b
b = -12
f(x) = 7x - 12
And now we can approximate any value between 3 and 4.
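If you want this as reusable code, here's a minimal sketch in the same session (the name lerp is just my choice for this example); NumPy's built-in np.interp gives the same piecewise-linear answer using the bins and quad arrays from above:
>>> def lerp(x, x1, y1, x2, y2):
...     # line through (x1, y1) and (x2, y2), evaluated at x
...     m = (y2 - y1) / (x2 - x1)
...     b = y1 - m * x1
...     return m * x + b
...
>>> lerp(3.5, 3, 9, 4, 16)
12.5
>>> print(np.interp(3.5, bins, quad))
12.5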
The error becomes unusable quickly as you move away from the fitted points; between close data points, however, it's quite accurate. For example:
# quadratic
f(3.5) = 3.5**2 = 12.25
# linear
f(3.5) = 7*3.5 - 12 = 12.5
Since error is just (experimental-theoretical)/(theoretical), with the linear estimate as the experimental value and the true quadratic value as the theoretical one, we get (12.5-12.25)/(12.25), or an error of about +2% (NOT BAD).
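A quick way to check that number in the same session, reusing the lerp sketch above (the rel_error helper is just for illustration):
>>> def rel_error(approx, exact):
...     # relative error of an approximation with respect to the exact value
...     return (approx - exact) / exact
...
>>> round(rel_error(lerp(3.5, 3, 9, 4, 16), 3.5**2), 4)
0.0204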
However, if we try this for f(50), we get:
# quadratic
f(50) = 2500
# linear
f(50) = 7*50 - 12 = 338
Or an error of about -86%: the linear estimate is only about 13% of the true value.
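The same check far outside the fitted interval shows the blow-up:
>>> round(rel_error(lerp(50, 3, 9, 4, 16), 50**2), 4)
-0.8648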