I understand that double math can carry a tiny margin of error (on the order of .000000000000001), and that multiplication can compound it into a larger error. With that said, is it possible to round off every calculation to a fixed precision (maybe 4 decimal places) to achieve consistency across all platforms? Or would it simply be more efficient to use decimal math, or would decimal math require similar rounding?
I will be using this for my lockstep RTS game, which requires a deterministic physics engine for synchronous multiplayer. I'm using C#. Some of the calculations I wish to perform include Sqrt, Sin, and Pow from the System.Math library.
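
Roughly, this is the kind of thing I have in mind (just a rough sketch of the idea; `RoundedMath` and `Snap` are made-up names for illustration):

```csharp
using System;

// Sketch of "round off every calculation": wrap each System.Math call
// and snap the result to 4 decimal places, hoping that tiny
// platform-specific differences collapse to the same value.
static class RoundedMath
{
    // Round to 4 decimal places after every operation.
    static double Snap(double value) =>
        Math.Round(value, 4, MidpointRounding.AwayFromZero);

    public static double Sqrt(double x) => Snap(Math.Sqrt(x));
    public static double Sin(double x) => Snap(Math.Sin(x));
    public static double Pow(double x, double y) => Snap(Math.Pow(x, y));
}
```

Would rounding like this be enough to keep the simulation in sync, or is a different approach needed?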