Basically, I'm confused right now and can't find anything helpful on Stack Overflow or through a Google search. I've been reading about how computers store different data types in binary, both to better understand C programming and for general computer science knowledge. I think I understand how floating-point numbers work, but from what I've read, the bit in front of the binary point isn't stored because it is supposedly always 1: the number is normalized by shifting the binary point to just after the leftmost 1 bit. In that case, since the first bit isn't stored, how can a floating-point variable holding the value 1.0 be distinguished from one holding 0.0?
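To experiment, I put together a small C snippet that prints the raw bit pattern of a `float`, assuming IEEE 754 single precision (the `print_bits` helper is just a name I made up):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the raw bit pattern of a float, assuming IEEE 754 single
   precision: 1 sign bit, 8 exponent bits, 23 fraction bits. */
static void print_bits(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* copy the bytes to avoid strict-aliasing issues */
    printf("%4.1f -> 0x%08X\n", f, bits);
}

int main(void)
{
    print_bits(0.0f); /* prints 0x00000000: sign 0, exponent 0, fraction 0 */
    print_bits(1.0f); /* prints 0x3F800000: sign 0, biased exponent 127, fraction 0 */
    return 0;
}
```

So the two values do have different bit patterns, but since the fraction bits are all zero in both cases, I don't see where the difference in the hidden leading bit is encoded.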
PS: Don't hesitate to edit this post if needed; English is not my first language.