First of all, I am not very familiar with the low-level machine representation of data (so please be kind if I misinterpret or misunderstand something; advice and corrections are always welcome).
All data is, ultimately, represented as sequences of 0s and 1s.
Integers are stored as plain binary: the bits encode the value directly, and that value can then be printed in any numeral system.
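For example (just to convince myself that it really is the same bits), printing an int in binary instead of decimal only changes how the bits are displayed:

#include <bitset>
#include <iostream>

int main() {
    int n = 5123;
    // Same bit pattern, just rendered in base 2 instead of base 10.
    std::cout << n << " = 0b" << std::bitset<32>(n) << '\n';
}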
Floating-point numbers, however, are represented as sign + exponent + fraction
(let's speak in terms of the IEEE 754 floating-point standard). These are still the same old bits (0s and 1s), and they too can be converted to any numeral system, just with a different interpretation.
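Here is how I currently picture those fields. This is just my own sketch, assuming double is an 8-byte IEEE 754 binary64 value; the field widths (1/11/52) and the exponent bias of 1023 are taken from the standard:

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    double a = 5123.54;

    // Copy the raw 64 bits of the double into an integer so the fields can be sliced out.
    std::uint64_t bits;
    std::memcpy(&bits, &a, sizeof bits);

    std::uint64_t sign     = bits >> 63;                // 1 sign bit
    std::uint64_t exponent = (bits >> 52) & 0x7FF;      // 11 exponent bits, biased by 1023
    std::uint64_t fraction = bits & 0xFFFFFFFFFFFFFull; // 52 fraction bits

    std::cout << "sign=" << sign
              << " exponent=" << (static_cast<long long>(exponent) - 1023) // prints 12
              << " fraction=" << fraction << '\n';      // 5123.54 is roughly 1.2509 * 2^12
}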
How does it "magically" happen that when you do a simple casting operation (see the example below), you actually get the correct result?:
double a = 5.12354e3; // 5123.54
int b = int(a); // 5123
What is the logic inside the machine that converts sign + exponent + fraction
into sign + value?
It does not seem to be just a "plain" bit-for-bit cast (you had 4/8 bytes before and you get 4 bytes after), right?
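The best mental model I have come up with so far is the hand-rolled truncation below. It is purely my own illustration (double_to_int_sketch is not a real API), it assumes IEEE 754 binary64, and it only handles normal, in-range values, ignoring NaN, infinity, and overflow. Is this roughly what the machine does, or is there a dedicated instruction for it?

#include <cstdint>
#include <cstring>
#include <iostream>

// Rough sketch: rebuild the integer part from the raw IEEE 754 fields.
long long double_to_int_sketch(double d) {
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);

    std::uint64_t sign     = bits >> 63;
    int           exponent = int((bits >> 52) & 0x7FF) - 1023;   // remove the bias
    std::uint64_t mantissa = (bits & 0xFFFFFFFFFFFFFull)         // 52 fraction bits
                             | (1ull << 52);                     // restore the implicit leading 1

    if (exponent < 0) return 0;                                  // |d| < 1 truncates to 0

    // The mantissa stands for 1.fraction * 2^exponent; shifting it so that only
    // the integer part survives is exactly truncation toward zero.
    long long value = (exponent >= 52)
        ? (long long)(mantissa << (exponent - 52))
        : (long long)(mantissa >> (52 - exponent));

    return sign ? -value : value;
}

int main() {
    double a = 5.12354e3;
    std::cout << double_to_int_sketch(a) << " vs " << int(a) << '\n';  // both print 5123
}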
P.S.: Sorry if I am just missing something very basic and obvious; please explain it anyway.