Floating-point numbers are stored as a numeric portion (the mantissa) and an exponent (how many places to move the point). The mantissa is stored as a sum of fractions, where each bit contributes one fraction from this series:
1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, ... etc
The binary representation is stored as 0s and 1s, where each bit indicates whether its fraction is included in the sum. For example, 001010 would be 0 * 1/2 + 0 * 1/4 + 1 * 1/8 + 0 * 1/16 + 1 * 1/32 + 0 * 1/64 = 0.15625.
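As a quick sketch (using Python here, since the idea is language-independent), decoding such a bit string is just summing the matching fractions:

```python
# Decode the fractional bit string "001010" as a sum of binary fractions:
# the i-th bit (from the left) contributes 1/2^(i+1) when it is 1.
bits = "001010"
value = sum(int(b) / 2 ** (i + 1) for i, b in enumerate(bits))
print(value)  # 1/8 + 1/32 = 0.15625
```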
This is a rough illustration of why floating-point values cannot represent every number exactly: any value that is not a finite sum of these fractions must be rounded to the nearest one that is. Widening the type (float -> double -> long double) adds more mantissa bits, which increases precision, but only up to a limit.
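You can see the rounding directly. 0.1 is not a finite sum of 1/2^k fractions, so the stored value (a 64-bit double in Python) is only the nearest representable number:

```python
from decimal import Decimal

# Decimal(0.1) prints the exact value actually stored for the literal 0.1,
# which is slightly larger than one tenth.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The accumulated rounding error is why this comparison fails:
print(0.1 + 0.2 == 0.3)  # False
```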
The underlying binary data is split into fields - a sign bit, the exponent, and the mantissa. This layout is an IEEE standard (IEEE 754) that has been adopted largely because of the speed at which hardware can perform calculations on it (and probably other factors on top).
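To make the field layout concrete, here is a sketch that unpacks a single-precision value (assuming the common IEEE 754 binary32 format: 1 sign bit, 8 exponent bits, 23 mantissa bits):

```python
import struct

# Reinterpret the 32-bit float 1.5 as a raw unsigned integer,
# then slice out the three IEEE 754 binary32 fields.
raw = struct.unpack(">I", struct.pack(">f", 1.5))[0]
sign = raw >> 31            # 1 bit
exponent = (raw >> 23) & 0xFF   # 8 bits, biased by 127
mantissa = raw & 0x7FFFFF       # 23 bits, implicit leading 1
print(sign, exponent, mantissa)
# 1.5 = 1.1 (binary) * 2^0 -> sign 0, exponent 127 (bias 127 + 0),
# mantissa 0b100... (the ".1" part)
```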
Check this link for more information:
https://en.wikibooks.org/wiki/A-level_Computing/AQA/Paper_2/Fundamentals_of_data_representation/Floating_point_numbers