In theory, on an IEEE 754 conforming system, the same operation with the same input values is supposed to produce the same result.
As Wikipedia summarizes it:
The IEEE 754-1985 allowed many variations in implementations (such as the encoding of some values and the detection of certain exceptions). IEEE 754-2008 has strengthened up many of these, but a few variations still remain (especially for binary formats). The reproducibility clause recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language), and describes what needs to be done to achieve reproducible results.
As usual, however, theory is different from practice. Most programming languages in common use, C# included, do not strictly conform to IEEE 754, and do not necessarily provide a means to write a reproducible program.
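To make "reproducible" concrete, here is a minimal sketch, using made-up values, of the kind of comparison at stake: two runs, or two machines, should produce doubles with identical bit patterns, not merely values that happen to print the same. As discussed below, C# does not guarantee this.

```csharp
using System;

// Minimal sketch (hypothetical values): reproducibility means the exact
// 64-bit pattern of the result is identical, not just that it looks the same
// when printed with default formatting.
double result = Math.Sqrt(2.0) * 0.1 + 3.0;

// Capture the raw bit pattern so it can be compared across runs, machines,
// or runtimes. Nothing in the C# standard guarantees these will match.
long bits = BitConverter.DoubleToInt64Bits(result);
Console.WriteLine(bits.ToString("X16"));
```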
Additionally, common CPU/FPU designs make strict IEEE 754 compliance somewhat awkward to achieve. The x87 FPU, for instance, operates by default with "extended precision", holding values internally with more bits than a double. If you want strict double semantics you have to pull values out of the FPU registers into memory (forcing them to be rounded to 64 bits), check for and handle various FPU exceptions, and then push the values back in -- between each FPU operation. Because of this awkwardness, strict conformance carries a performance penalty even at the hardware level. The C# standard chose a more "sloppy" requirement to avoid imposing that penalty on the far more common case where small variations are not a problem.
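C# does give you one small tool here: an explicit cast to double (or float) forces a value that may be held at higher precision to be rounded down to its nominal size. Below is a rough sketch with hypothetical values; whether the two expressions actually differ depends on the runtime and hardware.

```csharp
using System;

// Sketch with hypothetical values. Whether the intermediate product below is
// kept at extended precision is up to the runtime and the hardware, so the
// two results are not guaranteed to be bit-identical everywhere.
double a = 1.0 / 3.0;
double b = 3.0;
double c = -1.0;

double direct  = a * b + c;            // intermediate a * b may be held wider than 64 bits
double rounded = (double)(a * b) + c;  // explicit cast forces rounding to a true double first

// Typically true on SSE-based JITs; may be false where x87 extended
// precision is in play.
Console.WriteLine(direct == rounded);
```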
In practice none of this is usually an issue, since most programmers have internalized the (incorrect, or at least misleading) idea that floating-point math is inherently inexact. Additionally, the errors we're talking about here are all extremely small, small enough to be dwarfed by the much more common loss of precision incurred simply by converting decimal literals to binary.
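As a quick illustration of that last point, the error introduced just by writing down a decimal literal already exceeds the kinds of variation discussed above; 0.1 has no exact binary representation:

```csharp
using System;

// The round-trip "G17" format shows the double that each expression actually became.
Console.WriteLine((0.1).ToString("G17"));        // 0.10000000000000001
Console.WriteLine((0.1 + 0.2).ToString("G17"));  // 0.30000000000000004
```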