I need to know the correct way to work out the minimum number of bits required to store an unsigned int. Say I have 403: its binary representation as an unsigned int will be 00000000000000000000000110010011, which adds up to 32 bits. Now, I know that an unsigned integer takes 32 bits to store. But why do we have all those zeros in front when the number can be expressed with only 9 bits, 110010011? Moreover, how come an unsigned int takes 32 bits to store while a decimal takes only 8 bits? Please explain in detail. Thanks
1 Answer
This has nothing to do with how many bits are needed, and everything to do with how many bits your computer is wired for (32). Though 9 bits are enough, your computer has data channels that are 32 bits wide - it's physically wired to be efficient for 32, 64, 128, etc. And your compiler presumably chose 32 bits for you.
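To make that concrete, here is a minimal C sketch (my addition, not part of the original answer) that counts the minimum number of bits a value actually needs and compares it with the width the compiler uses for an unsigned int; min_bits is just an illustrative helper name:

```c
#include <stdio.h>

/* Count how many bits are needed to represent v in plain binary
 * (illustrative helper; a value of 0 would report 0 bits here). */
static unsigned min_bits(unsigned v)
{
    unsigned bits = 0;
    while (v != 0) {
        v >>= 1;        /* drop the lowest bit */
        bits++;
    }
    return bits;
}

int main(void)
{
    printf("bits the compiler uses for unsigned int: %zu\n",
           sizeof(unsigned int) * 8);                         /* typically 32 */
    printf("minimum bits needed for 403: %u\n", min_bits(403)); /* prints 9 */
    return 0;
}
```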
The decimal representation of "403" is three digits, and representing each digit in binary requires at least four bits (2^4 is 16, so you have 6 spare codes); so the minimum "decimal" representation of "403" requires 12 bits, not eight.
However, to represent a normal character (including the decimal digits as well as alpha, punctuation, etc) it's common to use 8 bits, which allows up to 2^8 or 256 possible characters. Represented this way, it takes 3x8 or 24 binary bits to represent 403.
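As a rough illustration (again my addition, not from the original answer), the three sizes discussed above for "403" can be computed like this in C:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *decimal = "403";
    size_t digits = strlen(decimal);   /* 3 decimal digits */

    printf("plain binary:           9 bits\n");              /* 403 is 110010011 */
    printf("one digit per 4 bits:  %zu bits\n", digits * 4); /* 12 bits */
    printf("one character (8 bits) per digit: %zu bits\n", digits * 8); /* 24 bits */
    return 0;
}
```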

- So we can't simply add up the values of the bits, since each digit takes 4 bits to represent? And the minimum number of bits to store will depend on the number of digits rather than on whether the number is an unsigned int or a decimal? Thanks – Sunil Sharma Oct 08 '15 at 23:20
- When building up a binary number, every bit doubles the number of possible combinations. Three bits encode 8 possibilities (not enough for a decimal digit) but four encode 16. However, your computer is not really using raw binary coding. Because of its wiring it can efficiently use 8-bit or 32-bit coding (or more), but not arbitrary sizes. – cliffordheath Oct 08 '15 at 23:27
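A small sketch (my addition, not from the comment thread) showing how each extra bit doubles the number of combinations, which is why 3 bits are too few for a decimal digit but 4 are enough:

```c
#include <stdio.h>

int main(void)
{
    /* Each additional bit doubles the number of representable values:
     * 3 bits give 8 combinations (too few for ten decimal digits),
     * 4 bits give 16. */
    for (unsigned bits = 1; bits <= 8; bits++)
        printf("%u bit(s) -> %u combinations\n", bits, 1u << bits);
    return 0;
}
```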