Precision only matters when the number can't be represented exactly. Since both floats and doubles (being IEEE 754 single- and double-precision values) can represent 1.0 exactly, precision doesn't come into it.

1.0 is simply a zero sign bit, every exponent bit except the highest set to 1, and no mantissa bits set. In single precision, that's binary:
0-01111111-00000000000000000000000
and, for double precision:
0-01111111111-0000000000000000000000000000000000000000000000000000
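If you want to verify those bit patterns yourself, here's a minimal sketch (plain C, assuming float and double map onto IEEE 754 single and double precision, which they do on virtually every current platform) that dumps the sign, exponent and mantissa fields of a value:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Print the sign, exponent and mantissa fields of a float (1-8-23 bits). */
    static void print_float_bits(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);            /* grab the raw bit pattern */
        printf("%u-", (unsigned)(bits >> 31));     /* sign bit */
        for (int i = 30; i >= 23; i--) putchar('0' + (int)((bits >> i) & 1));
        putchar('-');
        for (int i = 22; i >= 0; i--)  putchar('0' + (int)((bits >> i) & 1));
        putchar('\n');
    }

    /* Same for a double (1-11-52 bits). */
    static void print_double_bits(double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        printf("%u-", (unsigned)(bits >> 63));
        for (int i = 62; i >= 52; i--) putchar('0' + (int)((bits >> i) & 1));
        putchar('-');
        for (int i = 51; i >= 0; i--)  putchar('0' + (int)((bits >> i) & 1));
        putchar('\n');
    }

    int main(void) {
        print_float_bits(1.0f);    /* 0-01111111-00000000000000000000000 */
        print_double_bits(1.0);    /* 0-01111111111-000...000 (52 zeroes) */
        return 0;
    }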
Not all numbers are exactly representable in IEEE 754. For example, the 1.1 you mention in a comment is actually stored as 1.100000023841858 in single precision.
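You can see that for yourself by printing the value with more digits than the default, as in this rough sketch (again assuming C with IEEE 754 types):

    #include <stdio.h>

    int main(void) {
        float  f = 1.1f;        /* rounds to the nearest single-precision value */
        double d = 1.1;         /* rounds to the nearest double-precision value */
        printf("%.17g\n", f);   /* prints 1.1000000238418579 */
        printf("%.17g\n", d);   /* prints 1.1000000000000001 */
        return 0;
    }

The double is closer to 1.1 but still not exact; it just has more mantissa bits to shrink the rounding error.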
Have a look at this answer for an example of decoding a floating-point value.
Harald Schmidt's online single-precision converter is an excellent site to play around with if you want to understand the formats. I liked it so much that I made a desktop version in case it ever disappeared, one that can handle double precision as well.