Imagine multiplying a decimal number by ten repeatedly. Each multiplication appends another zero to the decimal representation:
12345 // Initial value
123450 // × 10
1234500 // × 100
12345000 // × 1000
123450000 // × 10000
1234500000 // × 100000
12345000000 // × 1000000
123450000000 // × 10000000
1234500000000 // × 100000000
If your number representation had a fixed width of K digits and kept only the lower K digits after each multiplication, it would become zero after at most K multiplications (fewer for numbers that already end in zeros, i.e., are divisible by a power of ten).
The same thing happens to binary numbers, because 2 is to binary numbers what 10 is to decimal numbers (indeed, two written in binary is 10; this holds in any base: the base of a numeral system is always written as 10 in that system itself).