Since you saw my answer to the question you linked, let's work through it and make the necessary changes to examine your second scenario:
In binary, 1.7 is:
b1.1011001100110011001100110011001100110011001100110011001100110...
However, 1.7 is a double-precision literal, whose value is 1.7 rounded to the closest representable double-precision value, which is:
b1.1011001100110011001100110011001100110011001100110011
In decimal, that's exactly:
1.6999999999999999555910790149937383830547332763671875
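You can check this stored value directly by printing the literal with enough digits. A minimal C sketch, assuming your C library's printf produces correctly rounded decimal output (glibc's does):

```c
#include <stdio.h>

int main(void) {
    /* 1.7 is a double-precision literal; its exact stored value has
       52 decimal digits after the point, so %.52f prints it in full. */
    printf("%.52f\n", 1.7);
    /* prints 1.6999999999999999555910790149937383830547332763671875 */
    return 0;
}
```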
When you write float a = 1.7, that double value is rounded again to single-precision, and a gets the binary value:
b1.10110011001100110011010
which is exactly
1.7000000476837158203125
in decimal (note that it rounded up!).
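Again, a quick C sketch to confirm (a is implicitly converted back to double when passed to printf, and that conversion is exact):

```c
#include <stdio.h>

int main(void) {
    float a = 1.7;        /* the double literal is rounded again, to single precision */
    /* the exact stored value has 22 decimal digits after the point */
    printf("%.22f\n", a); /* prints 1.7000000476837158203125 */
    return 0;
}
```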
When you do the comparison (a < 1.7), you are comparing this single-precision value (converted to double, which does not round, because all single-precision values are representable in double precision) to the original double-precision value. Because
1.7000000476837158203125 > 1.6999999999999999555910790149937383830547332763671875
the comparison correctly returns false, and your program prints "false".
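Your program presumably looks something like this minimal C version (the surrounding printing code is my assumption, not your actual source):

```c
#include <stdio.h>

int main(void) {
    float a = 1.7;
    /* a is converted to double exactly; the double literal 1.7 is the
       smaller of the two values, so the comparison is false */
    if (a < 1.7)
        printf("true\n");
    else
        printf("false\n"); /* this branch is taken */
    return 0;
}
```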
OK, so why are the results different with 0.7 and 1.7? It's all in the rounding. Single-precision numbers have a 24-bit significand. When we write down 0.7 in binary, it looks like this:
b.101100110011001100110011 00110011...
(there is a space after the 24th bit to show where it is). Because the next digit after the 24th bit is a zero, when we round to 24 bits, we round down.
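So the single-precision value is slightly less than 0.7, and the analogous comparison goes the other way. A sketch, under the same assumptions as above:

```c
#include <stdio.h>

int main(void) {
    float b = 0.7;         /* rounds DOWN this time */
    printf("%.24f\n", b);  /* prints 0.699999988079071044921875 */
    if (b < 0.7)
        printf("true\n");  /* this branch is taken */
    return 0;
}
```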
Now look at 1.7:
b1.10110011001100110011001 10011001...
Because we have the leading 1 before the binary point, the position of the 24th bit shifts, and now the digit after the 24th bit is a one, so we round up instead.
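If you would rather see the two rounding directions without counting decimal digits, the %a conversion prints the exact binary significand in hexadecimal (the float arguments are promoted to double, which is exact; the number of digits printed can vary slightly between C libraries):

```c
#include <stdio.h>

int main(void) {
    printf("%a\n", 0.7f); /* 0x1.666666p-1: stops short of the doubled value */
    printf("%a\n", 0.7);  /* 0x1.6666666666666p-1 */
    printf("%a\n", 1.7f); /* 0x1.b33334p+0: the trailing 4 shows the round up */
    printf("%a\n", 1.7);  /* 0x1.b333333333333p+0 */
    return 0;
}
```

Comparing the last hex digit of each float value against the continuing 6...6 and 3...3 patterns of the doubles shows 0.7 rounding down and 1.7 rounding up in single precision.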