While trying to analyze how decimal works (after reading @jonskeet's article, checking MSDN, and thinking about it for the last 4 hours), I have some questions:
In this link they say something very simple:

1.5 x 10^2 has 2 significant figures.
1.50 x 10^2 has 3 significant figures.
1.500 x 10^2 has 4 significant figures, etc.

OK... we get the idea.
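To make sure I'm counting the same way as that link, here is a tiny Python helper (my own sketch, not from any of the articles) that counts the significant figures of a coefficient string:

```python
def significant_figures(coefficient: str) -> int:
    """Count significant figures in a coefficient string like '1.500'.

    Simplified rule for this post: drop the decimal point, strip
    leading zeros; trailing zeros after the point do count.
    Assumes a nonzero leading digit, as in the examples above.
    """
    digits = coefficient.replace(".", "")
    return len(digits.lstrip("0"))

print(significant_figures("1.5"))    # 2
print(significant_figures("1.50"))   # 3
print(significant_figures("1.500"))  # 4
```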
From Jon's article:

    sign * mantissa / 10^exponent

"As usual, the sign is just a single bit, but there are 96 bits of mantissa and 5 bits of exponent."

    sign: 1 bit | mantissa: 96 bits | exponent: 5 bits
ok
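As a sanity check, Jon's formula can be evaluated exactly with Python's Fraction (the function name and the +1/-1 sign convention here are my own):

```python
from fractions import Fraction

def decimal_value(sign: int, mantissa: int, exponent: int) -> Fraction:
    """Evaluate sign * mantissa / 10^exponent exactly.

    sign is +1 or -1; mantissa fits in 96 bits; exponent is the
    power-of-ten divisor (0..28 in practice, though 5 bits are available).
    """
    return sign * Fraction(mantissa, 10 ** exponent)

# mantissa 15, exponent 1 -> 15/10 = 3/2, i.e. 1.5
print(decimal_value(1, 15, 1))
```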
So the max mantissa value = 2^96 - 1 = 79228162514264337593543950335,
which is about 7.9228162514264 * 10^28
(according to my iPhone... couldn't see an exponent representation in the Windows calculator).
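No calculator needed, by the way; Python's arbitrary-precision integers give the exact value, and the scientific display is just truncated formatting:

```python
max_mantissa = 2 ** 96 - 1
print(max_mantissa)              # the exact 96-bit maximum
print(len(str(max_mantissa)))    # how many decimal digits it has
print(f"{max_mantissa:.13e}")    # the truncated display a calculator shows
```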
Notice: 7.9228162514264 * 10^28 has 14 significant figures (according to the examples above).

Now, the 5 exponent bits are irrelevant here because the exponent is in the denominator, so I need its minimum value, which gives a divisor of 10^0.
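In other words, with the minimum exponent the divisor is 10^0 = 1, so the largest representable magnitude is the mantissa itself; a quick check:

```python
max_mantissa = 2 ** 96 - 1
min_divisor = 10 ** 0            # exponent 0 gives the smallest divisor
assert min_divisor == 1
print(max_mantissa // min_divisor)  # unchanged: the mantissa itself
```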
question #1:

MSDN says: 28-29 significant digits.

But according to my sample above (1.500 x 10^2 has 4 significant figures), the value they display, ±7.9 x 10^28, has only 2 significant figures (the 7 and the 9). If MSDN had written:

±79228162514264337593543950335 × 10^0

I would understand, since all the significant digits are in the expression. So why do they write 28-29 but display only 2?
question #2:

How will the decimal representation (mantissa && exponent) look for the value 0.5?
(The max exponent from 5 bits is 2^5 - 1 = 31, so the max denominator would be 10^31.)
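My reading (an assumption on my part, not taken from MSDN): 0.5 would be stored as mantissa 5 with exponent 1, i.e. 5 / 10^1, and a scaled pair like 50 / 10^2 encodes the same value. In C# the raw words can be inspected with decimal.GetBits; here I just check the arithmetic in Python:

```python
from fractions import Fraction

# assumed pair: mantissa 5, exponent 1  ->  5 / 10^1
assert Fraction(5, 10 ** 1) == Fraction(1, 2)

# a scaled pair encodes the same value: 50 / 10^2
assert Fraction(50, 10 ** 2) == Fraction(1, 2)

print("0.5 ==", Fraction(5, 10))
```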
thanks guys.
question #3:

1 + 96 + 5 = 102 bits.

MSDN says: "The decimal keyword denotes a 128-bit data type."

128 - 102 = 26.

I couldn't understand from the article why there isn't a use for those 26 bits.
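The bit accounting from question #3, written out (just re-doing the arithmetic, with no claim about where the spare bits actually live in the layout):

```python
sign_bits, mantissa_bits, exponent_bits = 1, 96, 5
used_bits = sign_bits + mantissa_bits + exponent_bits
print(used_bits)         # bits the formula accounts for
print(128 - used_bits)   # bits left over in a 128-bit type
```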