So you have a byte value (from 0 to 255), and you want to take its log base 45 and store the result in another byte? As others have said, you're going to lose some accuracy in doing that. However, you can do better than just casting the double result to a byte.
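To see why the naive cast throws away so much, note that log45 of anything in 1..255 falls between 0 and about 1.456, so truncating it to a byte leaves only two possible values. A quick check (just for illustration):
for (int i = 1; i < 256; ++i)
{
    // log45(i) is in [0, ~1.456), so the cast truncates
    // to 0 (for i < 45) or 1 (for i >= 45). Two values
    // out of a possible 256 -- almost all information lost.
    var naive = (byte)Math.Log(i, 45);
}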
The log base 45 of 255 is approximately 1.455675. You can store that in a byte, with some loss of accuracy, by multiplying it by a constant factor. What constant factor? You could use 100, which would give you a value of 145, but you're losing almost half the range of a byte. Since the largest value you want to represent is 1.455675, you can use a constant multiplier of 255/log45(255), or about 175.176.
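Wrapped up as a pair of helpers, the idea looks something like this (a sketch; the names LogEncode and LogDecode are mine, not anything standard):
static readonly double Mult = 255.0 / Math.Log(255, 45);  // about 175.176

// Map a byte in 1..255 to its scaled log. (0 has no log;
// the caller would have to reserve a sentinel for it.)
static byte LogEncode(byte value)
{
    return (byte)(Math.Log(value, 45) * Mult);
}

// Invert the mapping: 45 raised to the unscaled log.
static double LogDecode(byte encoded)
{
    return Math.Pow(45, encoded / Mult);
}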
How well does this work? Let's see ...
var mult = 255.0 / Math.Log(255, 45);
Console.WriteLine("Scaling factor is {0}", mult);
// Track the worst-case, best-case, and cumulative relative error.
double errMax = double.MinValue;
double errMin = double.MaxValue;
double errTot = 0;
for (int i = 1; i < 256; ++i)
{
// Get the log of the number you want
var l = Math.Log(i, 45);
// Convert to byte
var b = (byte)(l * mult);
// Now go back the other way.
var a = Math.Pow(45, (double)b / mult);
var err = (double)(i - a) / i;
errTot += err;
errMax = Math.Max(errMax, err);
errMin = Math.Min(errMin, err);
Console.WriteLine("{0,3:N0}, {1,3:N0}, {2}, {3:P4}", i, b, a, err);
}
Console.WriteLine("max error = {0:P4}", errMax);
Console.WriteLine("min error = {0:P4}", errMin);
Console.WriteLine("avg error = {0:P4}", errTot / 255);
Under .NET 4 on my machine, that gives me a maximum error of 2.1419%, and an average error of 1.0501%.
You can reduce the average error by rounding the result from Math.Pow. That is:
var a = Math.Round(Math.Pow(45, (double)b / mult));
That reduces the average error to 0.9300%, but increases the maximum error to 3.8462%. That's the trade-off: rounding snaps the reconstruction to a whole number, which can actually move it further from the true value, and a one-unit miss is a much larger relative error at small values of i.
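Putting it together with the hypothetical LogEncode/LogDecode helpers sketched above, a full round trip (rounding on the way back, as just described) would look like:
byte encoded = LogEncode(200);                    // 200 encodes to 243
double decoded = Math.Round(LogDecode(encoded));  // decodes back to roughly 196, a miss of about 2%
Console.WriteLine("{0} -> {1} -> {2}", 200, encoded, decoded);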