I have come across a confusing pattern in the sizes and maximum values of these data types in C#.
While comparing their sizes using Marshal.SizeOf(), I found the following results:
Float - 4 bytes,
Double - 8 bytes,
Decimal - 16 bytes
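For reference, a minimal console program along these lines reproduces the size figures (the exact program name and structure here are my own, but Marshal.SizeOf is what I used):

```csharp
using System;
using System.Runtime.InteropServices;

class SizeCheck
{
    static void Main()
    {
        // Unmanaged sizes as reported by Marshal.SizeOf
        Console.WriteLine(Marshal.SizeOf(typeof(float)));   // 4
        Console.WriteLine(Marshal.SizeOf(typeof(double)));  // 8
        Console.WriteLine(Marshal.SizeOf(typeof(decimal))); // 16
    }
}
```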
and when I compared their MaxValue fields, I got results like this:
Float - 3.4028235E+38 (about 3.4 × 10^38),
Double - 1.7976931348623157E+308 (about 1.8 × 10^308),
Decimal - 79228162514264337593543950335 (about 7.9 × 10^28)
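These values can be printed directly with a snippet like the following (the exact text of the float/double output may vary slightly by runtime version):

```csharp
using System;

class MaxValueCheck
{
    static void Main()
    {
        Console.WriteLine(float.MaxValue);   // 3.4028235E+38 (older runtimes print 3.402823E+38)
        Console.WriteLine(double.MaxValue);  // 1.7976931348623157E+308
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
    }
}
```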
What confuses me is that Decimal takes more unmanaged memory than Float and Double, yet it cannot even hold a value as large as Float's maximum. Can anyone explain this behavior? Thanks.