So I (think I) understand the difference between Float, Double, and Decimal, but one thing I've wondered about: why are there two sizes of binary floating point, but only one size of decimal floating point?
If I understand the general principle correctly, you'd want to use a float (32-bit) rather than a double (64-bit) for performance on a 32-bit processor, if you don't need the extra range and precision of the double. On a 64-bit processor the double should perform just as well, so that rationale largely goes away. But the Decimal type is 128 bits. So why not offer a 64-bit decimal, or even a 32-bit one?
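
To make the sizes concrete, here's a small illustrative C# console sketch (class and comments are just mine) showing the three widths and the kind of exactness the decimal's extra bits buy:

```csharp
using System;

class FloatSizesDemo
{
    static void Main()
    {
        // The three floating-point widths in C#.
        Console.WriteLine(sizeof(float));   // 4 bytes  (32 bits)
        Console.WriteLine(sizeof(double));  // 8 bytes  (64 bits)
        Console.WriteLine(sizeof(decimal)); // 16 bytes (128 bits)

        // The binary types can only approximate 0.1 and 0.2, so the sum
        // picks up rounding error; decimal stores them exactly.
        Console.WriteLine(0.1d + 0.2d == 0.3d); // False
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True

        // decimal's 96-bit coefficient gives 28-29 significant decimal digits.
        Console.WriteLine(decimal.MaxValue);    // 79228162514264337593543950335
    }
}
```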
Is it just a matter of use cases, i.e. no one really needed one? Or is there a technical reason, like you can't accurately represent useful decimal ranges in fewer than 128 bits?