I want to implement an exponential smoothing filter in Python. Currently I calculate:

y[n] = alpha * x[n] + (1 - alpha) * y[n-1],

where x[n] is the current input sample, y[n] is the new output sample, y[n-1] is the previous output sample, and alpha is the filter parameter.
I do the calculation with an ordinary Python loop and, as you might imagine, it is painfully slow (even 10,000 data points already take an unreasonably long time).
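Roughly, the core of my current loop looks like this (a simplified sketch without the extra processing described in the EDIT below):

```python
# Plain-Python version of the recurrence above (this is the slow part):
def smooth(x, alpha, y0=0.0):
    y = []
    prev = y0
    for sample in x:
        prev = alpha * sample + (1 - alpha) * prev  # y[n] = alpha*x[n] + (1-alpha)*y[n-1]
        y.append(prev)
    return y
```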
Can this calculation be expressed with numpy and vector operations?
I'm not sure whether I'm interested in scipy.signal. My primary intention is to study how the input vector length and the resolution of the coefficient alpha influence the accuracy of the output.
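For reference, I believe the pure floating-point recurrence maps onto scipy.signal.lfilter like this (no rounding or bit-width effects, so at best it can serve as an "ideal" baseline to compare quantized results against):

```python
# Float-only baseline: y[n] = alpha*x[n] + (1-alpha)*y[n-1]
# corresponds to b = [alpha], a = [1, -(1 - alpha)] in lfilter.
import numpy as np
from scipy.signal import lfilter

alpha = 0.125
x = np.random.randn(10_000)
y_ideal = lfilter([alpha], [1.0, -(1.0 - alpha)], x)
```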
EDIT: It seems that I did not formulate the question correctly - I'm not interested in this particular solution, but in this kind of problem. I use this equation as a base, but I add further processing to it, in particular rounding and restriction of alpha (coded as two's complement) and y[n] to a limited integer resolution (trimming to a specific bit width), so the results are only as good as the resolution of the parameters allows (basically emulating hardware behavior). Hence lfilter from scipy is not something I can use. On the other hand, the proposed Numba seems to be what I'm looking for.
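To make the hardware emulation concrete, here is a minimal sketch of the kind of loop I mean, compiled with Numba. The specific quantization choices (alpha stored with a fixed number of fractional bits, y[n] floored and clipped to a signed bit width) are placeholders, not my exact hardware model:

```python
# Minimal sketch (assumed quantization details, not the exact hardware model):
# alpha is quantized to `alpha_bits` fractional bits, y[n] is floored and
# clipped to a signed `y_bits`-wide range after every update.
import numpy as np
from numba import njit

@njit
def smooth_quantized(x, alpha, alpha_bits, y_bits):
    scale = float(1 << alpha_bits)
    q_alpha = np.floor(alpha * scale + 0.5) / scale   # fixed-point alpha
    y_max = float((1 << (y_bits - 1)) - 1)            # signed y[n] limits
    y_min = -float(1 << (y_bits - 1))
    y = np.empty_like(x)
    prev = 0.0
    for n in range(x.shape[0]):
        val = q_alpha * x[n] + (1.0 - q_alpha) * prev
        val = np.floor(val)                           # integer resolution of y[n]
        prev = min(max(val, y_min), y_max)            # trim to y_bits
        y[n] = prev
    return y

x = np.random.randint(-2048, 2048, 1_000_000).astype(np.float64)
y = smooth_quantized(x, 0.125, 8, 12)
```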