Given these two implementations of an average function:
#include <vector>

using std::vector;

float average(const vector<float>& seq)
{
    float sum = 0.0f;
    for (auto&& value : seq)
    {
        sum += value;        // accumulate everything first...
    }
    return sum / seq.size(); // ...then divide once at the end
}
And:
float average(const vector<float>& seq)
{
    float avg = 0.0f;
    for (auto&& value : seq)
    {
        avg += value / seq.size(); // divide each element before accumulating
    }
    return avg;
}
To illustrate my question, imagine we have a huge difference in the input data, like so:
1.0f, 0.0f, 0.0f, 0.0f, 1000000.0f
My guess is that in the first implementation, sum can grow "too much" and lose the least significant digits, ending up as 1000000.0f instead of 1000001.0f at the end of the sum loop.
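For reference, here is the small test harness I would use to compare them on that input. The names average_sum_first and average_divide_each are just placeholders I made up so both versions can coexist in one translation unit; otherwise the code is the two implementations above, unchanged:

#include <iomanip>
#include <iostream>
#include <vector>

// Copy of the first implementation, renamed so both can live side by side.
float average_sum_first(const std::vector<float>& seq)
{
    float sum = 0.0f;
    for (auto&& value : seq)
    {
        sum += value;
    }
    return sum / seq.size();
}

// Copy of the second implementation, renamed likewise.
float average_divide_each(const std::vector<float>& seq)
{
    float avg = 0.0f;
    for (auto&& value : seq)
    {
        avg += value / seq.size();
    }
    return avg;
}

int main()
{
    const std::vector<float> seq{ 1.0f, 0.0f, 0.0f, 0.0f, 1000000.0f };
    std::cout << std::setprecision(9)
              << "sum first:   " << average_sum_first(seq) << '\n'
              << "divide each: " << average_divide_each(seq) << '\n';
}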
On the other hand, the second implementation seems theoretically less efficient, because of the division performed on every iteration (I haven't profiled anything; this is a blind guess).
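To make the "division on every iteration" concern concrete, this is the kind of rewrite I have in mind (purely hypothetical, I have not measured it): hoist the division out of the loop and multiply by the reciprocal instead.

#include <vector>

// Hypothetical variant of the second implementation: one division up front,
// then one multiplication per element instead of one division per element.
float average_reciprocal(const std::vector<float>& seq)
{
    const float inv = 1.0f / seq.size();
    float avg = 0.0f;
    for (auto&& value : seq)
    {
        avg += value * inv;
    }
    return avg;
}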
So, is one of these implementations preferable to the other? Am I right that the first implementation is less accurate?