Although this question has been asked and answered several times before, I don't see any answer that is actually correct. The key is that FLT_MIN
is the smallest normalized value that can be represented. Back in the olden days that was all that mattered. Then Intel came along and introduced subnormal values, which reduce precision in order to represent values closer to 0. A subnormal is a value with the minimum exponent and a fraction whose high bit is 0, that is, a fraction that isn't normalized. It follows that the smallest non-zero subnormal value has a fraction that's all zeros except for the lowest bit, which is a 1. That's the smallest value that can be represented, but when you're down there, changing a single bit makes a large relative change in the value, so these things have to be used with great care.
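If it helps to see the actual numbers, here's a minimal C sketch (assuming an IEEE-754 binary32 float; FLT_TRUE_MIN is only available from C11 on) that prints FLT_MIN next to the smallest positive subnormal:

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* Smallest positive *normalized* float. */
    printf("FLT_MIN      = %g\n", FLT_MIN);

#ifdef FLT_TRUE_MIN
    /* Smallest positive float of any kind (a subnormal), C11 and later. */
    printf("FLT_TRUE_MIN = %g\n", FLT_TRUE_MIN);
#endif

    /* The same value, reached as "the next representable value above 0". */
    printf("next after 0 = %g\n", nextafterf(0.0f, 1.0f));

    return 0;
}
```

On a typical binary32 implementation that's roughly 1.17549e-38 for FLT_MIN versus 1.4013e-45 for the smallest subnormal.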
EDIT, to clarify "normalization":
Suppose we're writing decimal values: 6.02x10^23, .602x10^24, 60.2x10^22. Those all represent the same value, but they clearly look different. So let's introduce a rule for writing decimal values: every value must have exactly one non-zero digit to the left of the decimal point. So the "normalized" form of that value is 6.02x10^23, and if we have a value written in non-normalized form we can move the decimal point and adjust the exponent to preserve the value and put it into normalized form.
IEEE floating-point does the same thing: the rule is that the high bit of the fraction must always be 1, and any calculation has to adjust the fraction and the exponent of its result to satisfy that rule.
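To see the analogous rule at work in binary, here's a rough sketch (the dump helper and the values fed to it are just illustrative; it assumes float is IEEE-754 binary32). However the value 6.25 is computed, the stored exponent and fraction come out identical, i.e. the result has already been normalized. Note that in the stored format the leading 1 isn't actually written out; it's the implicit "hidden bit".

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the binary32 fields of a float: sign, biased exponent, stored fraction. */
static void dump(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    printf("%-10g sign=%u exponent=%3u fraction=0x%06x\n",
           f,
           (unsigned)(bits >> 31),
           (unsigned)((bits >> 23) & 0xFF),
           (unsigned)(bits & 0x7FFFFF));
}

int main(void)
{
    /* Three "differently written" computations of the same quantity:
       all three print the same normalized exponent/fraction fields. */
    dump(6.25f);
    dump(0.625f * 10.0f);
    dump(62.5f / 10.0f);
    return 0;
}
```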
When we write decimal values that are really close to 0 that's not a problem: we can make the exponent as small as we need to, so we can write numbers like 6.02x10^-16384. With floating-point values we can't do that: there's a minimum exponent that we can't go below. In order to allow smaller values, the IEEE requirements say that when the exponent is the smallest representable value, the fraction doesn't have to be normalized, that is, it doesn't have to have a 1 in its high bit. In writing decimal values, that's like saying we can have a 0 to the left of the decimal point. So if our decimal rule said that the lowest allowable exponent is -100, the smallest normalized value would be 1.00x10^-100, but smaller values could be represented in non-normalized form: 0.10x10^-100, 0.01x10^-100, etc.
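A quick way to check that in code (a sketch, assuming binary32 and that the compiler isn't flushing subnormals to zero with -ffast-math-style options): dividing FLT_MIN by a power of two gives a non-zero result that fpclassify reports as subnormal.

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* 2^-129: below the smallest normalized value (2^-126), still representable. */
    float tiny = FLT_MIN / 8.0f;

    printf("FLT_MIN / 8 = %g, subnormal? %s\n",
           tiny, fpclassify(tiny) == FP_SUBNORMAL ? "yes" : "no");
    return 0;
}
```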
Now add a requirement to our decimal rules that we can only have three digits: one to the left of the decimal point and two to the right. That's like the floating-point fraction in that it has a fixed number of digits. So for small normal values we have three digits to play with: 1.23x10^-100. For smaller values we use leading zeros, and the remaining digits carry less precision: 0.12x10^-100 has two significant digits, and 0.01x10^-100 has only one. That's also how floating-point subnormals work: you get fewer and fewer significant bits as you get farther and farther below the minimum normalized value, until you run out of bits and you get 0.
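You can watch the precision run out with a loop like this (again a sketch, binary32 assumed, no flush-to-zero): each halving below FLT_MIN shifts the fraction right one place, discarding a bit, until nothing is left.

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    float x = FLT_MIN;
    int halvings = 0;

    /* Keep halving until the value underflows all the way to zero. */
    while (x > 0.0f) {
        x /= 2.0f;
        ++halvings;
    }

    /* With binary32 and the default rounding mode this prints 24: the 23
       stored fraction bits plus the leading bit a normalized value carries. */
    printf("halvings from FLT_MIN to 0: %d\n", halvings);
    return 0;
}
```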
EDIT: to clarify terminology, the original IEEE-754 standard referred to those values that are greater than 0 and less than the minimum normalized value as denormals; the latest revision of IEEE-754 refers to them as subnormals. They mean the same thing.