On a 32-bit system, I found that the operation below always returns the correct value when a < 2^31, but returns an unexpected (though reproducible) result when a is larger.
uint64_t a = 14227959735;
uint64_t b = 32768;
float c = 256.0;
uint64_t d = a - b / c; // d ends up as 14227959808, but I expected 14227959735 - 128 = 14227959607
I believe the problem here is the uint64_t-to-float conversion (perhaps it is even undefined behavior?), but could someone explain why it produces this particular value?
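In case it helps narrow things down, here is a minimal check of just the conversion step, assuming (and this is only my guess) that the precision loss happens when a itself is converted to float on a typical IEEE-754 single-precision implementation:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t a = 14227959735;
    float fa = (float)a;  /* convert a alone, no arithmetic at all */

    /* A 32-bit IEEE-754 float has a 24-bit significand, so values of
       this magnitude are spaced 1024 apart and a gets rounded. */
    printf("%f\n", fa);                        /* prints 14227959808.000000 here */
    printf("%llu\n", (unsigned long long)fa);  /* prints 14227959808 */
    return 0;
}

This already shows the same 14227959808 before any subtraction takes place, so the b / c part does not seem to be the cause.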