
This is a bit of a general question and not completely specific to the C programming language, but it's what I'm studying at the moment.

Why does an integer take up 4 bytes, or however many bytes it does, depending on the system?

Why does it not take up 1 byte per integer?

For example why does the following take up 8 bytes:

int a = 1;
int b = 1;

Thanks

Weather Vane
Ben Osborne
  • If `int a = 1;` occupied 1 byte, then what would `a = 32535` do? – DeiDei Nov 28 '17 at 20:38
  • Because the `int` has a minimum range to support as documented in C standard. And it is greater than 1 byte can support. – Eugene Sh. Nov 28 '17 at 20:39
  • If int only has values from -128 to 127, it wouldn't be very useful in the real world. – Lee Daniel Crocker Nov 28 '17 at 20:44
  • Because that is what the compiler does. The `int` must be at least 16 bits; in your case it is 32. The size is not *adaptable* to suit the particular range of values. It's up to you to pick the most appropriate type. But don't be mean, use `int` unless you have good reason to do otherwise, such as limited memory on an embedded system. – Weather Vane Nov 28 '17 at 20:45
  • Are you asking why `int` has a particular size or why `int` has a fixed size rather than a variable size? – Eric Postpischil Nov 28 '17 at 20:49
  • We have to store values in either memory or on a hard disk. When we save them we have an address location and a size of bits to look up. Different data types take up a different amount of space and the number of bits is defined. For int it is always 4 bytes (16 bits) so we know to look for 4 bytes starting at the memory/harddisk address given. – Adam Sampson Nov 28 '17 at 20:51
  • @AdamSampson: check your arithmetic. – cdarke Nov 28 '17 at 20:53
  • Yeah, see that now. I shouldn't ever try mental math in public. – Adam Sampson Nov 28 '17 at 20:57
  • By the way, you ask "any other language". Several languages have common implementations in C and they inherit limits from the C compiler they are built with. Note that a number of languages have both C and non-C implementations and could have different limits, so you shouldn't make assumptions. – cdarke Nov 28 '17 at 22:26

4 Answers


I am not sure whether you are asking why int objects have fixed sizes instead of variable sizes or whether you are asking why int objects have the fixed sizes they do. This answers the former.

We do not want the basic types to have variable lengths; that would make them very complicated to work with.

We want them to have fixed lengths, because then it is much easier to generate instructions to operate on them, and the operations will be faster.

If the size of an int were variable, consider what happens when you do:

b = 3;
b += 100000;
scanf("%d", &b);

When b is first assigned, only one byte is needed. Then, when the addition is performed, more space is needed for the result. But b might have neighbors in memory, so it cannot simply be grown in place; its old memory has to be released and new memory allocated somewhere else.

Then, when we do the scanf, the compiler does not know how much data is coming. scanf will have to do some very complicated work to grow b over and over again as it reads more digits. And, when it is done, how does it let you know where the new b is? The compiler has to have some mechanism to update the location for b. This is hard and complicated and will cause additional problems.

In contrast, if b has a fixed size of four bytes, this is easy. For the assignment, write 3 to b. For the addition, add 100000 to the value in b and write the result to b. For the scanf, pass the address of b to scanf and let it write the new value to b. This is easy.

Eric Postpischil

The basic integral type int is guaranteed to have at least 16 bits; "at least" means that compilers/architectures may provide more bits, and on 32/64-bit systems int will most likely comprise 32 bits or 64 bits (i.e. 4 bytes or 8 bytes), respectively (cf., for example, cppreference.com):

Integer types

... int (also accessible as signed int): This is the most optimal integer type for the platform, and is guaranteed to be at least 16 bits. Most current systems use 32 bits (see Data models below).

If you want an integral type with exactly 8 bits, use int8_t or uint8_t from <stdint.h>.

Stephan Lechner

It doesn't; the size is implementation-defined. A signed int in gcc on an 8-bit Atmel microcontroller, for example, is a 16-bit integer. An unsigned int is also 16 bits, but with the range 0 to 65535 since it's unsigned.

TomServo

The fact that an int uses a fixed number of bytes (such as 4) is a compiler/CPU design choice, made so that common integer operations are fast and efficient.

There are types (such as BigInteger in Java) that take a variable amount of space. Such a type stores two fields: the number of words used to represent the integer, and the array of words itself. You could define your own VarInt type, something like:

struct VarInt {
    unsigned char length;   /* number of value bytes in use */
    unsigned char bytes[4]; /* value bytes, little-endian; fixed capacity here
                               so the examples compile in standard C — a real
                               implementation would allocate these dynamically */
};

struct VarInt one    = {1, {1}};       /* value 1     */
struct VarInt v257   = {2, {1, 1}};    /* 1 + 1*256   */
struct VarInt v65537 = {3, {1, 0, 1}}; /* 1 + 1*65536 */

and so on. But arithmetic on such a type would not be very fast; you'd also have to decide how to treat overflow, and resizing the storage would require dynamic memory allocation.

AJNeufeld