Is there a method for math calculations that eliminates division rounding errors entirely?
I am trying to calculate the average of a sensor output on the fly, but after several million iterations my average drifts away due to rounding error.
Currently I am storing all the values in memory, which works fine but at the cost of memory/performance.
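For context, the on-the-fly update I mean looks roughly like this (a simplified sketch, not my actual code):

float average = 0f;
int count = 0;

void AddSample(float sample)
{
    count++;
    // Each division rounds to the nearest float; over millions of samples
    // these tiny rounding errors accumulate and the average drifts.
    average += (sample - average) / count;
}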
E.g.
int number1 = 10;
int number2 = 3;
Console.WriteLine((number1/number2));
result = 3 (instead of 3.333~!)
float number1 = 10;
float number2 = 3;
Console.WriteLine((number1/number2));
result = 3.33333325 (instead of 3.3~)
Example from: https://learn.microsoft.com/en-us/dotnet/api/system.decimal?view=netcore-3.1
decimal dividend = Decimal.One;
decimal divisor = 3;
Console.WriteLine(dividend/divisor * divisor);
result = 0.9999999999999999999999999999 instead of 1 (!!!)
This is an immense issue for my application, as I am continuously applying division calculations to sensor outputs, and my values slowly drift off as a result. Windows Calculator seems to be able to work around this.
Is there an existing solution, or do I have to create my own framework?
Edit: a possible solution might be to implement fractions such as ⅓ (see the sketch below). Otherwise a different approach to the problem might be required.
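Something like this is what I have in mind, a rough sketch built on System.Numerics.BigInteger for exact numerator/denominator arithmetic (the Fraction type and its members here are hypothetical, not an existing API):

using System.Numerics;

// Hypothetical exact-fraction type: values are kept as a reduced
// numerator/denominator pair, so no operation ever rounds.
struct Fraction
{
    public BigInteger Numerator;
    public BigInteger Denominator;

    public Fraction(BigInteger n, BigInteger d)
    {
        // Reduce to lowest terms (assumes d != 0).
        BigInteger g = BigInteger.GreatestCommonDivisor(n, d);
        Numerator = n / g;
        Denominator = d / g;
    }

    public static Fraction operator +(Fraction a, Fraction b) =>
        new Fraction(a.Numerator * b.Denominator + b.Numerator * a.Denominator,
                     a.Denominator * b.Denominator);

    public static Fraction operator *(Fraction a, Fraction b) =>
        new Fraction(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

    public static Fraction operator /(Fraction a, Fraction b) =>
        new Fraction(a.Numerator * b.Denominator, a.Denominator * b.Numerator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

// Usage: (1/3) * 3 stays exactly 1 with this representation.
// Console.WriteLine(new Fraction(1, 3) * new Fraction(3, 1)); // prints "1/1"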