
I want to create an array of the exponential smoothing weights as in Formula (7.2) from here. I'm aware of the recursive definition but need the actual weights. I came up with the following straightforward implementation:

import numpy as np

def create_weights(n, alpha=1.0):
    # w_k = alpha * (1 - alpha)**k for k = 0, ..., n-1
    weights = alpha * (1 - alpha) ** np.arange(n)
    return weights

The weights should sum to 1.0; however, as this test shows, this is not the case for smaller alphas, which I guess is due to floating-point imprecision:

np.set_printoptions(precision=3)
for alpha in np.arange(1.0, 0.0, -0.1):
    weights = create_weights(5, alpha)
    print("%.3f, %s, %.3f" % (alpha, weights, weights.sum()))

Out:
1.000, [1. 0. 0. 0. 0.], 1.000
0.900, [9.e-01 9.e-02 9.e-03 9.e-04 9.e-05], 1.000
0.800, [0.8   0.16  0.032 0.006 0.001], 1.000
0.700, [0.7   0.21  0.063 0.019 0.006], 0.998
0.600, [0.6   0.24  0.096 0.038 0.015], 0.990
0.500, [0.5   0.25  0.125 0.062 0.031], 0.969
0.400, [0.4   0.24  0.144 0.086 0.052], 0.922
0.300, [0.3   0.21  0.147 0.103 0.072], 0.832
0.200, [0.2   0.16  0.128 0.102 0.082], 0.672
0.100, [0.1   0.09  0.081 0.073 0.066], 0.410

Similar to the solutions here, I could simply "normalize" the weights to scale them back up to a unit sum:

def create_weights(n, alpha=1.0):
    weights = alpha * (1 - alpha) ** np.arange(n)
    # rescale so the weights sum to exactly 1.0
    weights /= weights.sum()
    return weights

This results in:

1.000, [1. 0. 0. 0. 0.], 1.000
0.900, [9.e-01 9.e-02 9.e-03 9.e-04 9.e-05], 1.000
0.800, [0.8   0.16  0.032 0.006 0.001], 1.000
0.700, [0.702 0.211 0.063 0.019 0.006], 1.000
0.600, [0.606 0.242 0.097 0.039 0.016], 1.000
0.500, [0.516 0.258 0.129 0.065 0.032], 1.000
0.400, [0.434 0.26  0.156 0.094 0.056], 1.000
0.300, [0.361 0.252 0.177 0.124 0.087], 1.000
0.200, [0.297 0.238 0.19  0.152 0.122], 1.000
0.100, [0.244 0.22  0.198 0.178 0.16 ], 1.000

While the sum now adds up to 1.0, for small alpha the first weight deviates too much from its expected value (it is expected to equal alpha).
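For example, for alpha = 0.1 the raw weights sum to about 0.410, so after normalization the first weight becomes 0.1 / 0.410 ≈ 0.244 instead of 0.1, as the last row above shows.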

Is there another way to implement this that fulfills both properties of the weights: summing to 1.0 (± a small error) and the first weight being alpha (± a small error)?

Marcus V.

1 Answer


This has nothing to do with floating-point precision. The arrays you're generating wouldn't sum to 1 even with infinite-precision real number arithmetic. The weights are a geometric series: the infinite series alpha + alpha(1-alpha) + alpha(1-alpha)^2 + ... sums to 1, but your arrays stop at n = 5 elements, so they only reach the partial sum 1 - (1 - alpha)^n.
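As a quick check (a minimal sketch reusing the create_weights from the question), the sums printed above agree with this closed-form partial sum to well within display precision, which rules out rounding as the cause:

import numpy as np

def create_weights(n, alpha=1.0):
    # w_k = alpha * (1 - alpha)**k for k = 0, ..., n-1
    return alpha * (1 - alpha) ** np.arange(n)

for alpha in np.arange(1.0, 0.0, -0.1):
    observed = create_weights(5, alpha).sum()
    expected = 1 - (1 - alpha) ** 5  # closed form of the partial sum
    print("alpha=%.1f: observed=%.6f, expected=%.6f" % (alpha, observed, expected))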

Simple exponential smoothing does not arbitrarily stop at 5 elements. It's often reasonable to apply a cutoff for elements far enough in the past that their weights are tiny, but stopping at 5 for every alpha with no normalization will produce unreasonable results.
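If you need both properties the question asks for, one option (a sketch; the create_weights_tol name and the tol parameter are mine, not a standard API) is to derive n from alpha so that the discarded tail (1 - alpha)**n falls below a tolerance. The first weight is then exactly alpha, and the sum is within tol of 1.0:

import numpy as np

def create_weights_tol(alpha, tol=1e-6):
    # The mass discarded after n terms is (1 - alpha)**n, so pick the
    # smallest n with (1 - alpha)**n <= tol, i.e. n >= log(tol) / log(1 - alpha).
    if alpha >= 1.0:
        return np.array([1.0])  # degenerate case: all weight on the newest point
    n = int(np.ceil(np.log(tol) / np.log(1 - alpha)))
    return alpha * (1 - alpha) ** np.arange(n)

For alpha = 0.1 and tol = 1e-6 this gives n = 132 terms, a first weight of exactly 0.1, and a sum of 1 - 0.9**132, about 0.999999. If you must keep a fixed small n, normalization is the usual compromise, at the cost of distorting the leading weight.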

user2357112