54

I know this is achievable with boost as per:

Using boost::accumulators, how can I reset a rolling window size, does it keep extra history?

But I really would like to avoid using boost. I have googled and not found any suitable or readable examples.

Basically I want to track the moving average of an ongoing stream of floating-point numbers, using the most recent 1000 numbers as the data sample.

What is the easiest way to achieve this?


I experimented with using a circular array, an exponential moving average, and a simpler moving average, and found that the results from the circular array suited my needs best.

goji
  • Why do you want to avoid using Boost? It's a well-established, thoroughly used, and well-supported set of C++ libraries. There's no reason to reinvent the wheel. – templatetypedef Jun 12 '12 at 04:41
  • Which part of this are you stuck at? Do you know which moving average algorithm you want from a mathematical point of view? – juanchopanza Jun 12 '12 at 04:49
  • Rolling average works fine for integers, but for floating point you may experience weird behavior due to rounding and differences of magnitude... – R.. GitHub STOP HELPING ICE Jun 12 '12 at 04:52
  • The trick is preventing a Buffer-to-AveragingBuffer copy. Some folks here want you to make a separate buffer for the previous samples. This may not be necessary as the samples may arrive from a buffer. – Mikhail Jun 12 '12 at 05:26
  • @templatetypedef, goji is trying to avoid Boost due to the issue in the comments on his linked question above. The only solution there (as of now) would require re-accumulating data. "Inverse" recommends making a rolling average with a circular buffer or deque. – LabGecko Feb 12 '19 at 03:28

11 Answers

95

If your needs are simple, you might just try using an exponential moving average.

http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average

Put simply, you make an accumulator variable, and as your code looks at each sample, the code updates the accumulator with the new value. You pick a constant "alpha" that is between 0 and 1, and compute this:

accumulator = (alpha * new_value) + (1.0 - alpha) * accumulator

You just need to find a value of "alpha" where the effect of a given sample only lasts for about 1000 samples.

Hmm, I'm not actually sure this is suitable for you, now that I've put it here. The problem is that 1000 is a pretty long window for an exponential moving average; I'm not sure there is an alpha that would spread the average over the last 1000 numbers, without underflow in the floating point calculation. But if you wanted a smaller average, like 30 numbers or so, this is a very easy and fast way to do it.
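
Here's a minimal sketch of that accumulator (illustrative only; the class name is mine, and alpha = 1.0/1000 follows the roughly-1/N rule of thumb discussed in the comments below):

#include <iostream>

// A minimal exponential-moving-average accumulator (illustrative sketch).
class ExponentialMovingAverage {
public:
    explicit ExponentialMovingAverage(double alpha) : alpha_(alpha) {}

    void add(double sample) {
        if (!seeded_) {             // seed with the first sample so the
            accumulator_ = sample;  // average doesn't ramp up from zero
            seeded_ = true;
        } else {
            accumulator_ = alpha_ * sample + (1.0 - alpha_) * accumulator_;
        }
    }

    double value() const { return accumulator_; }

private:
    double alpha_;
    double accumulator_ = 0.0;
    bool seeded_ = false;
};

int main() {
    ExponentialMovingAverage ema(1.0 / 1000);  // roughly a 1000-sample window
    for (int i = 0; i < 5000; ++i)
        ema.add(42.0);
    std::cout << ema.value() << '\n';          // converges toward 42
}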

steveha
  • This may be overkill. Doesn't it require to recalculate the whole series each time a new number is added? – juanchopanza Jun 12 '12 at 05:03
  • No, it just requires two multiplies and an addition per new number. Plus one subtraction if you didn't pre-calculate `(1.0 - alpha)`. The closer `(1.0 - alpha)` is to 1.0, the longer the effect of previous numbers hangs around, and the less impact each new number has. The closer alpha is to 1.0, the faster the moving average updates in response to new values. – steveha Jun 12 '12 at 05:51
  • +1 on your post. The exponential moving average can allow the `alpha` to be variable. So this allows it to be used to compute time-based averages (e.g., bytes per second). If the time since the last accumulator update is more than 1 second, you let `alpha` be `1.0`. Otherwise, you can let `alpha` be (usecs since last update/1000000). – jxh Jun 12 '12 at 06:21
  • I have found exponential moving averages to be very useful at times. Once I used an EMA to compute a reliability metric on an Internet connection; for each successful connection I averaged in a 1.0 value, and for each failure I averaged in a 0.0 value. It worked very well. I wanted it to hit 100.0% if the connection was reliable, so I added a "bonus" score if the connection was good ten times in a row, and subtracted a penalty if the connection failed ten times in a row. – steveha Jun 12 '12 at 06:32
  • This is more like what I was hoping to achieve, is this really not going to be suitable for a larger sample size of 1000 (or more)? – goji Jun 13 '12 at 10:41
  • Well, one good thing about the exponential moving average is that you can feed it any number of samples (one at a time) and it can handle them. My concern was this: if you put a sample in and then multiply the accumulator by `(1.0 - alpha)` a thousand times, you are effectively raising `(1.0 - alpha)` to the power of 1000, and float numbers will underflow or overflow with exponents that large, so the individual contribution of any particular number will probably not survive for a full 1000 samples. But I think you can get a useful average this way. – steveha Jun 13 '12 at 18:54
  • @user315052 said that if you set alpha to `1.0/1000`, it will approximate an average of 1000 samples. It can't be identical to an actual average of 1000 samples, but I do think it would have an effect similar enough for many purposes. I suggest you try it: use the exponential moving average with alpha set to `1.0/1000` and see if you like the averages you get that way. – steveha Jun 13 '12 at 18:56
  • Sounds good, I will compare the results of the 2 different methods and decide. Actually, the numbers are floating point, but this is purely for cosmetic reasons and they can be rounded if I choose. So I'll experiment and post back again! – goji Jun 13 '12 at 22:50
  • Goji, at risk of resurrecting this again, could you please add more question detail on why you chose the circular array over the exponential moving avg? If you still have data, what accuracy levels are you seeing with 1000 samples? – LabGecko Feb 12 '19 at 03:38
  • WARNING: watch out for floating point accumulation issues! See Kahan Summation on wikipedia, or other. You do NOT want to be adding very small floats to large ones without error mitigation, seriously! – bunkerdive Mar 20 '19 at 03:14
23

You simply need a circular array (circular buffer) of 1000 elements, where you add each new sample to the previous running sum and store the result.

It becomes an increasing sum, where you can always take the sums at any two positions, subtract one from the other, and divide by the number of elements between them to yield the average.
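
A rough sketch of that idea, as I understand it (untested; the buffer keeps N+1 running sums so that a full window of N samples stays valid):

#include <cstddef>
#include <iostream>

// Circular buffer of running (cumulative) sums, as described above.
template <std::size_t N>
class PrefixSumRing {
public:
    void add(double sample) {
        double prev = count_ ? sums_[(count_ - 1) % (N + 1)] : 0.0;
        sums_[count_ % (N + 1)] = prev + sample;  // store the running sum
        ++count_;
    }

    // Average of the most recent n samples (requires 1 <= n <= N, n <= count_).
    double average(std::size_t n) const {
        double newest = sums_[(count_ - 1) % (N + 1)];
        double oldest = (count_ > n) ? sums_[(count_ - 1 - n) % (N + 1)] : 0.0;
        return (newest - oldest) / n;
    }

private:
    double sums_[N + 1] = {};
    std::size_t count_ = 0;  // total samples seen
};

int main() {
    PrefixSumRing<1000> ring;
    for (int i = 1; i <= 5000; ++i)
        ring.add(i);
    std::cout << ring.average(1000) << '\n';  // mean of 4001..5000 = 4500.5
}

Note that the cumulative sum grows without bound, so with floats the precision warning in the comments below applies over long runs.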

Joel Bodenmann
  • That's better than my answer. No tricks, just store 1000 numbers and average them. – steveha Jun 12 '12 at 04:51
  • I was hoping to avoid storing all the numbers in an array and keeping them 'long term'. Seems this may be the only suitable way. – goji Jun 13 '12 at 10:40
  • note that for 'circular array', `boost::circular_buffer` is a (very good) candidate implementation. – xtofl Sep 18 '15 at 07:26
  • WARNING: watch out for floating point accumulation issues! See Kahan Summation on wikipedia, or other. You do NOT want to be adding very small floats to large ones without error mitigation, seriously! – bunkerdive Mar 20 '19 at 03:12
21

Basically I want to track the moving average of an ongoing stream of floating-point numbers, using the most recent 1000 numbers as the data sample.

Note that the code below updates total_ as elements are added/replaced, avoiding the costly O(N) traversal that computing the sum (needed for the average) on demand would require.

#include <algorithm> // std::min
#include <cstddef>   // size_t

template <typename T, typename Total, size_t N>
class Moving_Average
{
  public:
    Moving_Average& operator()(T sample)
    {
        total_ += sample;
        if (num_samples_ < N)
            samples_[num_samples_++] = sample;
        else
        {
            T& oldest = samples_[num_samples_++ % N];
            total_ -= oldest;
            oldest = sample;
        }
        return *this;
    }

    operator double() const
    {
        // cast first, so an integral Total doesn't truncate to integer division
        return static_cast<double>(total_) / std::min(num_samples_, N);
    }

  private:
    T samples_[N];
    size_t num_samples_{0};
    Total total_{0};
};

Examples:

// average of last 3 (from 4) samples...
std::cout << Moving_Average<double, double, 3>{}(4)(7)(2)(6) << '\n';
    // "5\n"

// average of last 3 squares...
Moving_Average<double, double, 3> ma;
for (int i = 0; i < 10; ++i)
    std::cout << (i * i) << ':' << ma(i * i) << ' ';
std::cout << '\n';
    // 0:0 1:0.5 4:1.66667 9:4.66667 16:9.66667 25:16.6667 36:25.6667 49:36.6667 64:49.6667 81:64.6667

Total is made a different parameter from T to support e.g. using a long long when totalling 1000 longs, an int for chars, or a double to total floats.
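
For instance (illustrative instantiations, not from the original answer):

Moving_Average<long,  long long, 1000> avg_longs;   // long long total for 1000 longs
Moving_Average<char,  int,       1000> avg_chars;   // int total is plenty for chars
Moving_Average<float, double,    1000> avg_floats;  // double total for floats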

Issues

This is a bit flawed in that num_samples_ could conceptually wrap back to 0, but it's hard to imagine anyone having 2^64 samples. If concerned, use an extra bool data member to record when the container is first filled while cycling num_samples_ around the array (best then renamed to something innocuous like "pos").

Another issue is inherent with floating point precision, and can be illustrated with a simple scenario for T=double, N=2: we start with total_ = 0, then inject samples {1E17, 1, 2}...

  • 1E17, we execute total_ += 1E17, so total_ == 1E17, then inject

  • 1, we execute total_ += 1, but total_ == 1E17 still, as the "1" is too insignificant to change the 64-bit double representation of a number as large as 1E17, then we inject

  • 2, we execute total_ += 2 - 1E17, in which 2 - 1E17 is evaluated first and yields -1E17 as the 2 is lost to imprecision/insignificance; so to our total of 1E17 we add -1E17 and total_ becomes 0, despite current samples of 1 and 2 for which we'd want total_ to be 3. Our moving average will calculate 0 instead of 1.5. As we add another sample, we'll subtract the "oldest" 1 from total_ despite it never having been properly incorporated therein; our total_ and moving averages are likely to remain wrong.

You could add code that stores the highest recent total_ and, if the current total_ is too small a fraction of that (a template parameter could provide a multiplicative threshold), recalculates total_ from all the samples in the samples_ array (and sets highest_recent_total_ to the new total_); a rough sketch follows, with a production-quality version left to the reader who cares sufficiently.
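
A hedged sketch of what that member function might look like (untested; Threshold and highest_recent_total_ are hypothetical additions to the class above, and <cmath> is needed for std::abs):

// Hypothetical additions assumed here: a Threshold constant (or template
// parameter) and a double highest_recent_total_ data member.
void maybe_recalculate()
{
    // if total_ has decayed to a small fraction of its recent peak,
    // rebuild it from the stored samples to purge cancellation error
    if (std::abs(static_cast<double>(total_)) * Threshold < highest_recent_total_)
    {
        total_ = Total{};
        for (size_t i = 0, n = std::min(num_samples_, N); i < n; ++i)
            total_ += samples_[i];
    }
    if (std::abs(static_cast<double>(total_)) > highest_recent_total_)
        highest_recent_total_ = std::abs(static_cast<double>(total_));
}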

Tony Delroy
  • one assumes that "void operator(T sample)" is actually "void operator<<(T sample)" ? – oPless Jun 08 '14 at 11:52
  • @oPless ahhh... well spotted... actually I meant for it to be `void operator()(T sample)` but of course you could use whatever notation you liked. Will fix, thanks. – Tony Delroy Jun 08 '14 at 14:27
  • Yes! I spotted that one could use "void operator()(T sample)" earlier today, and was thinking of attempting to amend my comment to reflect this :-) – oPless Jun 09 '14 at 16:18
  • You can avoid rollover with something like this (in the else part) which will be just as efficient: `num_samples_ = N + (++num_samples_ % N); T& oldest = samples_[num_samples_];` – r11 May 31 '19 at 21:32
  • @r11: then `num_samples_` gets stuck looping between N and 2N-1, rather than being the number of samples seen: not a problem for the current functionality (nothing currently depends on `num_samples_` being the total number of samples), and yes it avoids rollover if you really have so many samples (couple fixes were already suggested for those concerned), but it'd be a misnomer (with no obvious good name) and the class correspondingly less intuitive and maintainable. Anyway, your suggestion's here for anyone who prefers it - thanks! – Tony Delroy Jun 01 '19 at 08:44
  • How can I use this class? I could not figure out :( – DEKKER Jun 05 '19 at 18:19
  • @DEKKER: an example: `Moving_Average<double, double, 3> ma; ma(10); ma(15.2); ma(19); std::cout << ma << '\n';` – Tony Delroy Feb 23 '20 at 14:23
19

You can approximate a rolling average by applying a weighted average on your input stream.

template <unsigned N>
double approxRollingAverage(double avg, double input) {
    avg -= avg/N;    // decay the running average
    avg += input/N;  // mix in the new sample
    return avg;
}

This way, you don't need to maintain 1000 buckets. However, it is an approximation, so its value will not match exactly with a true rolling average.

Edit: Just noticed @steveha's post. This is equivalent to the exponential moving average, with the alpha being 1/N (I was taking N to be 1000 in this case to simulate 1000 buckets).
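
Usage is just a fold over the stream. A minimal sketch (my example, assuming the function template above is in scope; seeding with the first sample is an assumption on my part, to avoid a slow ramp up from zero):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> stream{3.0, 5.0, 4.0, 6.0, 5.5};
    double avg = stream.front();  // seed with the first sample
    for (std::size_t i = 1; i < stream.size(); ++i)
        avg = approxRollingAverage<1000>(avg, stream[i]);
    std::cout << avg << '\n';
}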

jxh
  • This doesn't seem to correspond very well with the actual moving average (at least for random streams), although I'm sure it's not a bad measure either (some code: https://gist.github.com/Aktau/6102979) – Aktau Jul 29 '13 at 11:47
  • @Aktau: This will give an approximation, as already noted. Read Wikipedia article about exponential moving average, link in Steve Ha's answer. – jxh Jul 29 '13 at 17:02
  • Error can quickly accumulate with this method though, particularly for datasets with high variance. Think of a signal with relatively infrequent, high-amplitude spikes. They bump the average up when they come into the window, but when they leave out the back door, the average is only reduced by avg/N, instead of spikeAmp/N. – bunkerdive Jun 10 '14 at 14:37
  • @JSalazar: I used a fixed alpha assuming the measurements would be taken at regular intervals. However, if the interval between measurements is variable, you should use a time-weighted average instead, with a variable alpha in place of the fixed 1/N in my answer. – jxh Jun 10 '14 at 18:03
  • @bunkerdive high amplitude spikes are diminished as they are accumulated because the values are divided by N before being added to the sum. – ricosrealm Mar 15 '19 at 17:16
  • @ricosrealm: if the window average is `1`, window length is `10`, and an `input == 1000` comes through, it raises the average by `1000/10 = 100` to `avg=101`. If an additional `10` inputs arrive with value `1`, then the average will be left at `39.7`, and NOT be restored to `1` even though the spike has left the window. – bunkerdive Mar 20 '19 at 03:06
  • @bunkerdive: The error does not "accumulate", in the sense it does not get progressively worse. The average converges to 51.76. This is because the rolling average is looking at a curve, not discrete data. If you attempted to compute a continuous function with the pattern you describe, part of the curve down from the spike would fall into the window. – jxh Mar 20 '19 at 04:24
  • @jxh avg actually becomes 35.8; the error absolutely `can`, quickly accumulate, or on the contrary `could` also be negated with negative input spikes. Many natural (sampled) signal types could see this effect, such as action potentials, or QRS signals. For any given signal, the error (compared to true moving average) is inversely proportional to `window-size` and `sampling rate)`. Here's an example: https://dotnetfiddle.net/8Opawk – bunkerdive Mar 20 '19 at 06:33
  • @bunkerdive https://tio.run/##nY/BboQgEIbvPMUkPaxGN9XriiZ9gT3suRcW0WAQCMKmycZnt6DV3aZNY8qBmfzM//EP1frYUjq9cEmFqxlgrgZrGOmrybJeC2K95uTAW8lqOFeoVu4qGBCtjfq4KCG4bN9uzJCWQbQ@3toUvnoutbMpXJUSoKGEhoiBxXBH4I8fhGMZyuu52JSkXFyrxhuIdAyDrU8nqpwFjOc5Xw7v8rAMGWadkUEv0IgQlxZ6wiVE61@PbD5FvpgaZSAKo9xrWeELhjzLfJckfDWuscpfl8Z5VkXzwsEYF5tlY3cLu5vZgdw9k/fjn9gjetx7o6VgjWPxX4v/WHsf/Bv5n6HGafoE – jxh Mar 20 '19 at 06:34
  • @jxh right, so for a longer, cyclic signal, avg converges to a cyclic state of ranging from 146.59 to 51.76, as opposed to converging to 51.76 itself. – bunkerdive Mar 20 '19 at 06:45
  • @bunkerdive The error does not accumulate and diverge. That's what I meant by converge. The 51.76 was to oppose the 35.8. – jxh Mar 20 '19 at 06:50
  • Another note: for a single spike never again repeated, it takes 44 ones, following the thousand, to flush the avg back down to under 2.0, even though the moving 10-item window has had nothing but ones in it for 35 consecutive windows. It will actually never technically reach 1 again, forever converging to it. For the cyclic example -- yep, in the runnable snippet, turning on all prints shows that the average repeatedly loops, in cycles initially ranging from [38.8, 100.9] to cycles eventually accumulating (but converging) to a range of [51.76, 146.59]. – bunkerdive Oct 26 '19 at 03:21
  • @bunkerdive: One way to cope with your objection is to give a higher weight to an input if it was relatively close to the previous input. The closer the weight approaches 1.0, the tighter the tolerance has to be to satisfy relative closeness. Any time the tolerance is not achieved, the weight drops down to `1/N` again. I implemented a simple proof of concept, see the next comment. – jxh Oct 26 '19 at 05:19
  • @bunkerdive: https://tio.run/##pVHLbsMgELz7K1bqIVDnYR8bP6r@QA4990Js4hIRQBiiSlW@3QUTx44aVZbKgV3Nzgy7bKXUqqmq7omJituaQs5kazQlp7Iz9KQ4MQ6zomWNoDXsyqiWds8pEKW0/HqXnDPRvJ2pJg0FNBTPzRKuORPKmiXspeSgoIAD4S3F8B2BO60hhlUDlZPWZA9wwtUncdJ0vdmF@oAgL4GiCK9geAWEQi0OdAwlpA5O17ANKuySidG5gVXhw3NfHdH4ajrF2QGQwq67erutpDWQ5z3XhcWHWARSaCmIA6KpsVp4ZhZdoogJAyfCBKDhG8Zv81MG0UFqQJ7KHJZkLuSQJonL4pgNwqHZ4uE@8jQpUb8LL8TZTXLzPgbvY@/tnY9T5/n2E@9LNN5zW1uC0Zbivwb/NfY88zvnfzR1t3Lkyi94kybrcfOXrvsB – jxh Oct 26 '19 at 05:19
4

A simple class to calculate the rolling average, and also the rolling standard deviation:

#include <boost/circular_buffer.hpp>
#include <cmath> // sqrt, pow

#define _stdev(cnt, sum, ssq) sqrt((((double)(cnt))*ssq-pow((double)(sum),2)) / ((double)(cnt)*((double)(cnt)-1)))

class moving_average {
private:
    boost::circular_buffer<double> *q;
    double sum;
    double ssq;
public:
    moving_average(int n)  {
        sum=0;
        ssq=0;
        q = new boost::circular_buffer<double>(n);
    }
    ~moving_average() {
        delete q;
    }
    void push(double v) {
        if (q->size() == q->capacity()) {
            double t=q->front();
            sum-=t;
            ssq-=t*t;
            q->pop_front();
        }
        q->push_back(v);
        sum+=v;
        ssq+=v*v;
    }
    double size() {
        return q->size();
    }
    double mean() {
        return sum/size();
    }
    double stdev() {
        return _stdev(size(), sum, ssq);
    }

};
Erik Aronesty
1

One way is to store the values circularly in a buffer array and calculate the average this way:

int j = (int) (counter % size);
double oldest = buffer[j]; // slot j still holds the value falling out of the window
buffer[j] = mostrecentvalue;
avg = (avg * size - oldest + mostrecentvalue) / size;

counter++;

The whole thing runs in a loop where the most recent value is dynamic.

NKJ
1

I use this quite often in hard realtime systems that have fairly insane update rates (50 kilosamples/sec), so I typically precompute the scalars.

To compute a moving average of N samples, set `scalar1 = 1/N` and `scalar2 = 1 - scalar1` (i.e. `1 - 1/N`), then:

Average = currentSample*scalar1 + Average*scalar2;

Example: Sliding average of 10 elements

double scalar1 = 1.0/10.0;  // 0.1
double scalar2 = 1.0 - scalar1; // 0.9
bool first_sample = true;
double average=0.0;
while(someCondition)
{
   double newSample = getSample();
   if(first_sample)
   {
    // everybody forgets the initial condition *sigh*
      average = newSample;
      first_sample = false;
   }
   else
   {
      average = (newSample*scalar1) + (average*scalar2);
   }
 }

Note: this is just a practical implementation of the answer given by steveha above. Sometimes it's easier to understand a concrete example.

baumann
  • This might be perfect for some usages, but it's worth mentioning that the impact of samples lasts way longer than the N window (above, 10). For example, given 1 100 1 1 1 1 etc... it takes about 20 ones after the 100 before the "average" gets down around 2, and another 10 samples to reach 1.5.... – Tony Delroy Nov 18 '22 at 12:10
0

You could implement a ring buffer. Make an array of 1000 elements and some fields to store the start and end indexes and total size. Then just store the last 1000 elements in the ring buffer and recalculate the average as needed; a sketch follows.
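
A bare-bones sketch along those lines (illustrative; the names are mine, and the average is recomputed on demand as the answer suggests):

#include <cstddef>

// Fixed-capacity ring buffer; the average is recomputed on request.
struct RingAverage {
    static const std::size_t kCapacity = 1000;
    float data[kCapacity];
    std::size_t head = 0;   // next write position
    std::size_t count = 0;  // elements stored so far (<= kCapacity)

    void push(float v) {
        data[head] = v;                // overwrites the oldest once full
        head = (head + 1) % kCapacity;
        if (count < kCapacity) ++count;
    }

    double average() const {           // O(count), recalculated as needed
        double sum = 0.0;
        for (std::size_t i = 0; i < count; ++i) sum += data[i];
        return count ? sum / count : 0.0;
    }
};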

Tim
0

Building on @Nilesh's answer (credit goes to him), you can:

  • keep track of the running sum, so there is no need to divide and then multiply every time, which generates error
  • avoid if conditions by using the % operator

This is UNTESTED sample code to show the idea; it could also be wrapped into a class:

const unsigned int size=10; // ten-element buffer

unsigned int counterPosition=0;
unsigned int counterNum=0;

int buffer[size];
long sum=0;

void reset() {
    for(unsigned int i=0;i<size;i++) {
        buffer[i]=0;
    }
    counterPosition=0;
    counterNum=0;
    sum=0;
}

float addValue(int value) {
    // buffer[counterPosition] still holds the value leaving the window
    sum = sum - buffer[counterPosition] + value;
    buffer[counterPosition] = value;

    counterPosition=(counterPosition+1) % size;
    if(counterNum<size) counterNum++;

    return ((float)sum)/(float)counterNum;
}

float removeValue() {
    if(counterNum>1) { // leave one last item at the end, forever
        // fade out: the oldest value leaves and a zero takes its place
        sum = sum - buffer[counterPosition];
        buffer[counterPosition] = 0;
        counterPosition=(counterPosition+1) % size;
        counterNum--; // here the two counters differ
    }
    return ((float)sum)/(float)counterNum;
}

It should be noted that, if the buffer is reset to all zeroes, this method works fine while the first values arrive, as buffer[counterPosition] is still zero when it is subtracted and the counter grows. The first output is the first number received; the second output is the average of only the first two, and so on, fading the values in as they arrive until size items are reached.

It is also worth considering that this method, like any other rolling average, is asymmetric if you stop at the end of the input array, because the same fading does not happen at the end (though it can happen after the end of the data, with the right calculations).

Indeed, the rolling average of 100 elements with a buffer of 10 gives different results: 10 fading in, 90 perfectly rolling over 10 elements, and finally 10 fading out, giving a total of 110 results for 100 numbers fed in! It's your choice which ones to show (and whether it's better going the straight way, old to recent, or backwards, recent to old).

To fade out correctly after the end, you can keep adding zeroes one by one, reducing the count of items by one each time, until you have processed size more elements (still keeping track of the correct position of the old values).

Usage is like this:

float avg=0;
reset();

avg=addValue(2); // Repeat for 100 values
avg=addValue(3); // Use avg value

...

avg=addValue(-4);
avg=addValue(12); // last number, 100th input

// If you want to fade out, repeat 10 times after the end of data:

avg=removeValue(); // Repeat for the last 10 times after data has finished
avg=removeValue(); // Use avg value
...
avg=removeValue();
avg=removeValue();
FrancescoMM
0

I used a deque... it seems to work for me. This example feeds from a vector, but you could skip that aspect and simply add values to the deque.

#include <cstddef>
#include <deque>
#include <vector>

template <typename T>
double mov_avg(const std::vector<T>& vec, std::size_t len){
  std::deque<T> dq;
  for(std::size_t i = 0;i < vec.size();i++){
    if(i < len){
      dq.push_back(vec[i]);
    }
    else {
      dq.pop_front();
      dq.push_back(vec[i]);
    }
  }
  double cs = 0;
  for(auto i : dq){
    cs += i;
  }
  // divide by the actual window size, which is smaller than len
  // when fewer than len values have arrived
  return dq.empty() ? 0.0 : cs / dq.size();
}



// Skip the vector portion: track the number of inputs seen (or the size of
// the deque) and the value.

  double len = 10;
  double val;      // accept as input
  double instance; // increment each time input is accepted

  std::deque<double> dq;
  if(instance < len){
      dq.push_back(val);
  }
  else {
      dq.pop_front();
      dq.push_back(val);
  }
  double cs = 0;
  for(auto i : dq){
    cs += i;
  }
  double rolling_avg = cs / len;
//To simplify further -- add values to this, then simply average the deque.

 int MAX_DQ = 3;
 void add_to_dq(std::deque<double> &dq, double value){
    if((int)dq.size() < MAX_DQ){
      dq.push_back(value);
    }else {
      dq.pop_front();
      dq.push_back(value);
    }
 }
 

Another sort of hack I use occasionally is using mod to overwrite values in a vector.

  std::vector<int> test_mod = {0,0,0,0,0};
  int write = 0;
  int LEN = 5;

  int instance = 0; // filler for N -- of the Nth number added
  int value = 0;    // filler for the new number

  write = instance % LEN;
  test_mod[write] = value;
  // Will write to 0, 1, 2, 3, 4, 0, 1, 2, 3, ...
  // Then average it for the MA.

  // To test it...
  int write_idx = 0;
  int len = 5;
  int new_value;
  for(auto i=0;i<100;i++){
      std::cin >> new_value;
      write_idx = i % len;
      test_mod[write_idx] = new_value;
  }
This last (hack) has no buckets, buffers, or loops: simply a vector that's overwritten. And it's 100% accurate (for the average of the values in the vector). Proper order is rarely maintained, as the rewriting starts back at index 0, so with five slots the sixth value written lands at index 0, as in {5,1,2,3,4}, etc.

Zach Oakes
-1

A simple moving average for 10 items, using a list:

#include <list>

std::list<float> listDeltaMA;

float getDeltaMovingAverage(float delta)
{
    listDeltaMA.push_back(delta);
    if (listDeltaMA.size() > 10) listDeltaMA.pop_front();
    float sum = 0;
    for (std::list<float>::iterator p = listDeltaMA.begin(); p != listDeltaMA.end(); ++p)
        sum += (float)*p;
    return sum / listDeltaMA.size();
}
Pedro Soares