I read that C compilers must allocate 2 bytes for the type `short` and must allocate 4 bytes for the type `long`. So, depending on the system, the type `int` can have 2 bytes or 4 bytes. So what is the purpose of it? If we need a 2-byte number we can use `short`, and if we need a 4-byte number we can use `long`. I feel like using `int` is a gamble on whether the system is 16-bit or not. (I generally don't understand why C can't decide by itself how much memory is needed for a number, just like Python does.)
-
3"C compilers must allocate 2 bytes for the type short and must allocate 4 bytes for the type long" -- this is incorrect, the only requirement for the sizes of integer types is that `sizeof(char) == 1`. In fact, on my system, `long` is 64-bit. (**edit** i am wrong) – Adrian Jan 03 '15 at 21:59
-
Oh, I read this in my textbook... – ClassicEndingMusic Jan 03 '15 at 22:00
-
You might want to get a different textbook. – Adrian Jan 03 '15 at 22:01
-
Well, there are many incorrect textbooks out there. Though in this case, it might be part of "lie-to-children", or the author simply not being aware that `char` might be anything other than 8 bits big. – Deduplicator Jan 03 '15 at 22:02
-
@Adrian: His textbook is correct. `short` must be at least 16 bits, `int` at least 16, and `long` at least 32. See [C data types](http://en.wikipedia.org/wiki/C_data_types), or the standard. – Jon Purdy Jan 03 '15 at 22:02
-
@Adrian Any references? – ClassicEndingMusic Jan 03 '15 at 22:02
-
@JonPurdy Well, this textbook didn't say "at least", it said "must have". – ClassicEndingMusic Jan 03 '15 at 22:04
-
To your last part: the C standard cannot prescribe the exact behavior and width of each type because, in contrast to Python, it aims to be usable and efficient on a wide range of hardware (which is nearly the same as "all systems that exist"). – Deduplicator Jan 03 '15 at 22:08
-
If it says something like "some type" must be at least two bytes big, it is simply wrong. A valid way is making a byte 64 bits long, and making all types one byte. – Deduplicator Jan 03 '15 at 22:10
-
@Deduplicator Can you post some code which will make a byte 64 bits long? This seems very interesting :) – Adrian Jan 03 '15 at 22:11
-
I haven't yet written a C compiler, sorry. Though, take a look at compilers for signal processors; some of them have decided that 64 bits is both the largest and the smallest size a type should have. – Deduplicator Jan 03 '15 at 22:13
-
It is true that varying `int` size is not a good idea. But it is implemented in this way and we cannot change it now. – i486 Jan 03 '15 at 22:14
-
@Deduplicator: That’s true, but in practice, all commonly used architectures since the 1980s have `CHAR_BIT == 8`. – Jon Purdy Jan 03 '15 at 22:15
-
@JonPurdy: If you ignore signal processors and such, quite certain. If you don't, it's at least overwhelming. – Deduplicator Jan 03 '15 at 22:17
-
@i486 Is it really a bad idea? Isn't `int` the size that the system can do operations on the fastest? When specifying the size of the integer is important, there are `int8_t`, `int16_t`, etc. from `stdint.h`. – Forss Jan 03 '15 at 22:18
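A minimal sketch of the fixed-width types mentioned above (illustrative only; the exact-width types are optional in the standard, while the `int_fastN_t` types are required):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Exact-width types: exactly the stated size, if the target provides them. */
    int16_t a = 32000;
    int32_t b = 2000000000;

    /* "Fast" types: at least the stated width, chosen for speed on the target. */
    int_fast16_t c = 12345;

    printf("sizeof(int16_t)      = %zu\n", sizeof a);
    printf("sizeof(int32_t)      = %zu\n", sizeof b);
    printf("sizeof(int_fast16_t) = %zu\n", sizeof c); /* may be larger than 2 */
    return 0;
}
```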
-
The problem is that the main reason for a varying `int` is portability of software: you define `int` and get the optimal size on different platforms. BUT this is the reason for thousands of errors. If you think the standard type must be the variable-size `int` and you use definitions like `int32_t`, I can say: why not the opposite scheme? `int32` could be a built-in type and you could define the "portable" `int` in `stdint.h`. (This cannot be changed now, but it could have been the reality.) – i486 Jan 03 '15 at 22:29
-
"Isn't int the size that the system can do operations on the fastest" Very useful fact, but can somebody also approve that? – ClassicEndingMusic Jan 03 '15 at 22:46
-
If C used arbitrary-precision arithmetic in every case, it would be extremely bad for performance and memory. – phuclv Feb 04 '15 at 09:32
-
@JonPurdy a lot of DSPs nowadays have a `CHAR_BIT` of 16, 24, or 32: http://stackoverflow.com/questions/2098149/what-platforms-have-something-other-than-8-bit-char?lq=1 Some other "strange" architectures with `CHAR_BIT != 8` are also [in use](http://stackoverflow.com/questions/6971886/exotic-architectures-the-standards-committees-care-about?lq=1). – phuclv Feb 04 '15 at 09:38
-
O/T, re the question you just deleted: Python is very dynamic, virtually everything can be patched at runtime (see e.g. http://stackoverflow.com/a/29488561/3001761). However, although it has some support for functional paradigms, it is nowhere near Lisp in that regard. – jonrsharpe Apr 16 '15 at 13:46
1 Answer
In B, the ancestor of C, the only type was `int`. It was the size of a “machine word”, which is generally to say the size of a register: 16 bits on a 16-bit system, 32 bits on a 32-bit system, and so forth. C simply preserved this type. `short` and `long` were introduced as ways of controlling storage space when less or more range was needed. This matters when available memory is constrained: why allocate a `long` when you know a value will never exceed the range of a `short`?
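As an illustrative sketch of how this plays out (the printed values depend on the platform; the standard only guarantees minimum ranges of 16 bits for `short` and `int`, and 32 bits for `long`):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Sizes are implementation-defined; only minimum ranges are guaranteed. */
    printf("CHAR_BIT      = %d\n", CHAR_BIT);
    printf("sizeof(short) = %zu, SHRT_MAX = %d\n",  sizeof(short), SHRT_MAX);
    printf("sizeof(int)   = %zu, INT_MAX  = %d\n",  sizeof(int),   INT_MAX);
    printf("sizeof(long)  = %zu, LONG_MAX = %ld\n", sizeof(long),  LONG_MAX);
    return 0;
}
```

On a typical 64-bit Linux system this prints 2, 4, and 8 bytes for `short`, `int`, and `long`; on a 16-bit target, `int` may well be 2 bytes.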
> I generally don't understand why C can't decide by itself how much memory is needed for a number, just like Python does
Python decides this dynamically, using an arbitrary-precision representation. C decides this statically, and requires that it be specified by the programmer. There are statically typed languages in which type annotations are not required, due to type inference. If you want arbitrary-precision integers in C, you can use GMP, which provides `mpz_t` and a host of other types and functions for arbitrary-precision arithmetic.
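A minimal sketch of that approach (assumes GMP is installed; compile with `-lgmp`):

```c
#include <gmp.h>
#include <stdio.h>

int main(void) {
    mpz_t n;
    mpz_init(n);

    /* Compute 2^200, which overflows every fixed-width C integer type. */
    mpz_ui_pow_ui(n, 2, 200);

    gmp_printf("2^200 = %Zd\n", n);

    mpz_clear(n);
    return 0;
}
```

Here the size of `n` grows as needed at run time, which is essentially what Python's `int` does behind the scenes.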
