Small question: I've been looking into moving part of my C# code to C++ for performance reasons. When I compare C#'s `float.Epsilon` with the C++ value, they differ.
In C#, the value as documented by Microsoft is 1.401298E-45.
In C++, the value as documented on cppreference is 1.19209e-07.
How can it be that the smallest possible value for a float/single can be different between these languages?
If I'm correct, a float should occupy the same number of bytes in both languages, and maybe even have the same binary representation. Or am I looking at this the wrong way?
Hope someone can help me, thanks!