As an example, the set of double precision floating point numbers used in most programming languages is, roughly, the set of numbers +/- m * 2^k where m is an integer with 2^52 <= m < 2^53, and k is an integer from roughly -1074 to +971. Numbers like 0.1 are not in that set: 0.1 = 1/10, and the factor of 5 in the denominator can never be cancelled by a power of two, so no integer multiplied or divided by a power of two is ever equal to 0.1.
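To make that concrete (Python shown here, but any language using IEEE 754 doubles behaves the same way), you can decompose the double that 0.1 is actually stored as into its m and k, and print its exact value:

    import math
    from decimal import Decimal

    # Decompose the double nearest 0.1 into m * 2**k with 2**52 <= m < 2**53.
    frac, exp = math.frexp(0.1)         # 0.1 == frac * 2**exp, with 0.5 <= frac < 1
    m, k = int(frac * 2**53), exp - 53
    print(m, k)                         # 7205759403792794 -56
    print(Decimal(0.1))                 # the exact value stored: 0.1000000000000000055511...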
Each floating-point operation computes the mathematically exact result of its operands, then rounds that result to the nearest floating-point number.
When you compare 0.1 + 0.2 to 0.3, you actually take the floating-point number closest to 0.1 and the floating-point number closest to 0.2, add them, round the sum to the nearest floating-point number, and compare that to the floating-point number closest to 0.3.
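Printing the exact values involved (again in Python, using decimal.Decimal, which shows the exact value of a double) makes it visible why this particular comparison comes out false:

    from decimal import Decimal

    print(Decimal(0.1 + 0.2))    # 0.3000000000000000444089209850...
    print(Decimal(0.3))          # 0.2999999999999999888977697537...
    print(0.1 + 0.2 == 0.3)      # False: the rounded sum is one step above the double nearest 0.3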
Both results will be very close together. It is more or less coincidence whether they are equal, or whether one or the other is smaller. If you do the same with 100 * 0.1, 100 * 0.2, and 100 * 0.3, the same thing happens: you again end up with two numbers that are very close together, and it is again more or less coincidence whether they are equal or one of them is smaller.
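As it happens, on a typical IEEE 754 implementation the scaled comparison comes out true while the unscaled one is false, which is exactly the point: the outcome depends on where each individual rounding lands, not on the "real" values:

    # All three products happen to round to exactly 10.0, 20.0 and 30.0,
    # so this comparison succeeds even though 0.1 + 0.2 == 0.3 fails.
    print(100 * 0.1, 100 * 0.2, 100 * 0.3)      # 10.0 20.0 30.0
    print(100 * 0.1 + 100 * 0.2 == 100 * 0.3)   # True (round-to-nearest doubles)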
In your last question, the first expression divides by 10, while the second multiplies by the floating-point number closest to 0.1. There is no floating-point number equal to 0.1; the nearest one is in fact slightly larger than 0.1. So the two results will be very close together (they are), but there is no guarantee that they are the same.
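I don't know the exact values from your question, but here is a small sketch of how the two ways of computing a tenth can agree for some inputs and differ for others:

    # n / 10 rounds once: the exact quotient is rounded to the nearest double.
    # n * 0.1 rounds twice: once when 0.1 is stored, once when the product is rounded.
    # Usually both give the same double, but not always.
    mismatches = [n for n in range(1, 100) if n / 10 != n * 0.1]
    print(mismatches)   # not empty: for example, 3 / 10 != 3 * 0.1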