A floating-point number is made of a sign (+ or -), a significand and an exponent.
The significand is made of a number of base-B digits. The interpretation of the exponent e is that the number should be multiplied by B^e. Choosing B as 2 or 10 does not make a difference for small integers (0, 1, 2, … can be represented exactly both in binary floating-point and in decimal floating-point), but the fractional numbers and the very large numbers that can be represented exactly are not the same in binary and decimal.
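As a quick illustration with B = 2, Python's standard binary64 float exposes this sign/significand/exponent decomposition directly (this is just a sketch, not tied to any particular decimal format):

```python
import math

x = -6.25
# float.hex() shows the binary significand and the power-of-two exponent:
# -6.25 is -1.5625 * 2**2.
print(x.hex())        # -0x1.9000000000000p+2

# math.frexp() gives the same decomposition as (m, e) with x == m * 2**e.
m, e = math.frexp(x)
print(m, e)           # -0.78125 3
assert x == m * 2**e
```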
Examples: binary floating-point can represent 2^-100 exactly, but decimal128 cannot (writing 2^-100 out exactly takes 70 significant decimal digits, far more than decimal128's 34). Decimal floating-point can represent 0.1 exactly, but binary floating-point cannot (at any precision).
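To make this concrete, here is a small check using Python's float (binary64) and its decimal module (a decimal floating-point type):

```python
from decimal import Decimal

# 2**-100 is a power of two, so the binary float holds it exactly;
# Decimal(float) converts without rounding and prints the full expansion.
print(Decimal(2.0 ** -100))

# 0.1 cannot be represented in binary floating-point; this prints the
# value that was actually stored:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625

# In decimal floating-point, 0.1 is exact.
print(Decimal("0.1")) # 0.1
```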
You are right that everything is binary in the end. A number of schemes are used to represent decimal floating-point as bits, and the significand is the tricky part to encode. The IEEE standardisation committee devised two schemes, one that is convenient for software and one that is convenient for hardware decoding. I do not know what .NET uses. It probably uses something similar to the “Binary integer significand field” technique from the IEEE 754 floating-point standard, where the allocated bits are used to encode, in binary, a number between 0 and B^n − 1 (with B = 10 and n the number of decimal digits of the significand), since this is what works best in software.
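As a rough sketch of the “binary integer significand field” idea (a toy layout for illustration only, not the real decimal128 bit encoding, which also uses a combination field), the decimal significand is simply stored as an ordinary binary integer:

```python
DIGITS = 7   # toy format: 7 decimal digits of significand
BIAS = 128   # toy exponent bias; the value is (-1)**sign * significand * 10**exponent

def encode(sign, significand, exponent):
    # 10**7 - 1 = 9_999_999 fits in 24 bits, so the significand goes into
    # the low 24 bits as a plain binary integer.
    assert 0 <= significand < 10**DIGITS
    return (sign << 32) | ((exponent + BIAS) << 24) | significand

def decode(bits):
    sign = bits >> 32
    exponent = ((bits >> 24) & 0xFF) - BIAS
    significand = bits & 0xFFFFFF
    return sign, significand, exponent

# 0.1 stored exactly as 1000000 * 10**-7:
bits = encode(0, 1_000_000, -7)
print(decode(bits))   # (0, 1000000, -7)
```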