We've been trying to implement a number type that can span an arbitrary number of bytes, because we need a lot of precision during calculations. On top of that we're working in Unity Burst, which at the moment does not support anything larger than a float.
We've been doing this with a byte array, implementing binary addition, subtraction, multiplication and division ourselves, and this partly works.
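For context, the addition looks roughly like this (a simplified sketch with managed arrays and illustrative names; the real Burst version works on preallocated buffers instead of allocating):

```csharp
using System;

// Sketch: add two little-endian byte-array numbers digit by digit,
// propagating the carry, the same way you'd do long addition by hand.
static byte[] Add(byte[] a, byte[] b)
{
    int n = Math.Max(a.Length, b.Length);
    var result = new byte[n + 1];          // one extra byte for a final carry
    int carry = 0;
    for (int i = 0; i < n; i++)
    {
        int sum = carry;
        if (i < a.Length) sum += a[i];
        if (i < b.Length) sum += b[i];
        result[i] = (byte)(sum & 0xFF);    // low 8 bits become this digit
        carry = sum >> 8;                  // high bits carry into the next digit
    }
    result[n] = (byte)carry;
    return result;
}

var sum = Add(new byte[] { 255, 255 }, new byte[] { 1 });  // 65535 + 1
Console.WriteLine(string.Join(",", sum));                  // prints "0,0,1"
```

Subtraction, multiplication and division follow the same digit-at-a-time pattern, which is where we suspect the cost comes from.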
The problem we're running into is that the performance is far slower than plain integer math. For example:
4 * 3 run 100,000 times with ints takes about 0-1 milliseconds
4 * 3 run 100,000 times with our number type takes about 100 milliseconds
So the question is whether we could reuse the same kind of implementation by inheriting from or copying the integer math code, or at least see how it's implemented. I looked through the C# source code but can't quite find the actual math.
Any help or ideas would be appreciated.