The issue here is that different systems use different approximate representations of real numbers.
A double or decimal in C# isn't necessarily represented in the same way on another system.
The only guarantee you have is that when you perform a computation on one system, using that system's specific representation of real numbers, you'll get the same result every time.
You will probably get the same result if you use a type on two different systems, provided that the type is implemented correctly against a known standard, such as IEEE 754 floating point.
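Even before other systems enter the picture, floating-point arithmetic does not obey the algebraic rules of real numbers. Here is a minimal C# sketch (the values are just an illustration) showing that addition is not associative, so two algebraically equal expressions can disagree on a single machine:

```csharp
using System;

class Associativity
{
    static void Main()
    {
        // Mathematically, both expressions are exactly 0.6.
        double a = (0.1 + 0.2) + 0.3;
        double b = 0.1 + (0.2 + 0.3);

        Console.WriteLine(a == b);   // False
        Console.WriteLine($"{a:R}"); // 0.6000000000000001
        Console.WriteLine($"{b:R}"); // 0.6
    }
}
```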
Even on the same system with the same types, things can go wrong in less obvious ways. Take these two mathematically identical functions (the second is the first with its numerator rationalized):

f(x) = x · (√(x + 1) − √x)
g(x) = x / (√(x + 1) + √x)
When computed (on a system that keeps only the four most significant decimal digits) they can produce very different results depending on the input. Take x = 500, rounding every intermediate result to four significant digits:

f(500) = 500 × (22.38 − 22.36) = 500 × 0.02000 = 10.00
g(500) = 500 / (22.38 + 22.36) = 500 / 44.74 = 11.18

The true value is 11.1748..., so f loses nearly all of its accuracy to the cancellation in the subtraction, while g is off by only one unit in its last digit.
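This behaviour is easy to reproduce in C#. Below is a small self-contained sketch; Round4 is a made-up helper (not a library function) that models the four-digit machine by rounding every intermediate result to four significant decimal digits:

```csharp
using System;

class FourDigitMachine
{
    // Round a value to four significant decimal digits.
    static double Round4(double v)
    {
        if (v == 0) return 0;
        int exponent = (int)Math.Floor(Math.Log10(Math.Abs(v)));
        double scale = Math.Pow(10, 3 - exponent);
        return Math.Round(v * scale) / scale;
    }

    // f(x) = x * (sqrt(x + 1) - sqrt(x)), rounded at every step.
    static double F(double x) =>
        Round4(x * Round4(Round4(Math.Sqrt(x + 1)) - Round4(Math.Sqrt(x))));

    // g(x) = x / (sqrt(x + 1) + sqrt(x)), rounded at every step.
    static double G(double x) =>
        Round4(x / Round4(Round4(Math.Sqrt(x + 1)) + Round4(Math.Sqrt(x))));

    static void Main()
    {
        Console.WriteLine(F(500)); // 10    -- catastrophic cancellation
        Console.WriteLine(G(500)); // 11.18 -- close to the true 11.1748
    }
}
```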
You are dealing with approximations, so you will get errors. It's best to figure out a way to compute the results you need such that the errors don't matter (so much).
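For this particular example, the usual remedy (sketched below; x = 1e15 is chosen only to make the effect visible) is to rearrange the algebra so that two nearly equal numbers are never subtracted. The cancellation in f shows up with ordinary doubles too, once x is large enough:

```csharp
using System;

class StableForm
{
    static void Main()
    {
        double x = 1e15;

        // Naive form: subtracts two nearly equal square roots and
        // loses most of its significant digits to cancellation.
        double naive = x * (Math.Sqrt(x + 1) - Math.Sqrt(x));

        // Rearranged form: multiplying by the conjugate removes the
        // subtraction, so it keeps nearly full double precision.
        double stable = x / (Math.Sqrt(x + 1) + Math.Sqrt(x));

        Console.WriteLine(naive);  // noticeably off the true ~1.58113883e7
        Console.WriteLine(stable); // ~15811388.3008419, essentially exact
    }
}
```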