5

I know that an integer datatype takes 2 or 4 bytes of memory. I want to know: if the value stored in an int variable is smaller than that, is the remaining space wasted?

#include <stdio.h>
int main(void)
{
  int a=1;
  printf("%d\n",a);
}

The binary value of 1 is 00000001, which fits in 1 byte, but the int data type allocates 2 bytes of space for the value of a. Is the remaining byte wasted?
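
For illustration, here is a small sketch (the printed sizes depend on the platform) showing how much storage a actually occupies, whatever value it currently holds:

#include <limits.h>
#include <stdio.h>

int main(void)
{
  int a = 1;                                       /* the value fits in one byte */
  printf("value        : %d\n", a);
  printf("sizeof(int)  : %zu bytes\n", sizeof a);  /* but the object still takes sizeof(int) bytes */
  printf("bits per byte: %d\n", CHAR_BIT);
  return 0;
}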

Jens
  • 69,818
  • 15
  • 125
  • 179
  • *"I Know that an integer datatype take 2 or 4 bytes of memory."* You can't know something that is wrong. On some systems an int takes 8 bytes. Don't fall for "all the world's boxen look like my box". – Jens Jan 26 '18 at 09:41
  • 4
    Unless you're targeting an embedded system with limited memory, does it really matter? And what if, in the future, the program will need to handle larger values? You might want to do some research about [the Y2K "crisis"](https://en.wikipedia.org/wiki/Year_2000_problem) which happened because the programmers in the last century wanted to save a couple of bytes without much thought for the future. – Some programmer dude Jan 26 '18 at 09:41
  • The space taken by an `int` is `sizeof(int)` (usually 4 or 8 bytes nowadays, sometimes still 2 bytes or potentially even something else, it depends on the platform). Your `a` variable might be set to a larger value later, so nothing is wasted. – Jabberwocky Jan 26 '18 at 09:46
  • @MichaelWalz Interestingly, I once had a system (TI DSP) with CHAR_BIT == 16 and sizeof(int) == sizeof(short) == sizeof(char) == 1... – Aconcagua Jan 26 '18 at 09:59

6 Answers

5

In theory, yes the space is wasted. Although on a 32 bit CPU, allocating 32 bits of data might mean faster access since it suits the alignment. So using a 32 bit variable just to store the value 1 could be an optimization of speed over memory consumption.

On microcontroller systems, programmers have far less memory and are therefore more picky with variable declarations, using the types from stdint.h instead, to allocate just as much memory as needed. They would use uint8_t rather than int.

If you want the best of both worlds - fastest access and then low memory consumption if possible - use the uint_fast8_t type. Then the compiler will pick the fastest possible type that can store values up to 255.
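
As a rough illustration (assuming the target provides the optional uint8_t type; the printed sizes vary by platform), compare the storage used by int, uint8_t and uint_fast8_t:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  int          a = 1;  /* native int, often 4 bytes on desktop targets */
  uint8_t      b = 1;  /* exactly 8 bits, smallest possible storage    */
  uint_fast8_t c = 1;  /* at least 8 bits, whatever width is fastest   */

  printf("int          : %zu bytes\n", sizeof a);
  printf("uint8_t      : %zu bytes\n", sizeof b);
  printf("uint_fast8_t : %zu bytes\n", sizeof c);
  return 0;
}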

Lundin
  • 195,001
  • 40
  • 254
  • 396
  • Lundin, wouldn't just `int` default to the machine's native size? (Assuming register width and bus width are the same.) Wouldn't that then always be the fastest? – Paul Ogilvie Jan 26 '18 at 10:24
  • @PaulOgilvie Sadly, no. The native size of 8 bit computers is 8, but they use 16 bit `int`, since it does not make sense to have `char`, `short` and `int` all meaning 8 bits. And the C standard doesn't even allow `int` to be that small. Similarly on 64 bit computers, it doesn't make sense to have `int`, `long` and `long long` all meaning 64 bits. [See this](https://stackoverflow.com/a/48405733/584518). – Lundin Jan 26 '18 at 10:46
  • I suppose, though, you have a good chance of getting these via the `[...]_fast_t` types - provided they are not larger than native register size. – Aconcagua Jan 26 '18 at 11:42
2

I know that an integer datatype takes 2 or 4 bytes of memory

Do you? All the C standard states is that an int must be capable of storing a number in the inclusive range -32767 to +32767, and must be no smaller than a short or a char.

An exotic system might even have unused padding bits at the end of an int. Over the coming years, we may well see the "normal" int being 64 bit.

If you want to minimise wasted space then use a signed char type. That must be able to represent at least the range -127 to +127. And sizeof(char) is 1 by the standard. And the number of bits used is given by CHAR_BIT, which is normally 8.

Finally, note that minimising space may well have little bearing on execution speed, particularly in C, where int is normally the CPU's native type and narrower types are widened to int anyway in the majority of expressions.
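
A small sketch of those last two points (the printed sizes are platform-dependent): even when the operands are signed char, the arithmetic itself is done at int width because of integer promotion:

#include <limits.h>
#include <stdio.h>

int main(void)
{
  signed char x = 100, y = 100;

  printf("CHAR_BIT            : %d\n", CHAR_BIT);
  printf("sizeof(signed char) : %zu\n", sizeof x);
  /* x and y are promoted to int before the addition, so the result
     has type int and the value 200 rather than overflowing a signed char */
  printf("sizeof(x + y)       : %zu\n", sizeof(x + y));
  printf("x + y               : %d\n", x + y);
  return 0;
}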

Bathsheba
  • 231,907
  • 34
  • 361
  • 483
  • The `long long` type was introduced in C and C++ just for the sake of not "polluting" `int` even more, it is already a horrible type to use as it is. Therefore it seems highly unlikely that `int` will become 64 bits, unless of course the compiler implementation was written by morons (which isn't forbidden by the standard). If we ever take the leap to 128 bit computers, I suppose there will be a `long long long` type. – Lundin Jan 26 '18 at 09:48
  • @lundin "compiler implementation was written by morons (which isn't forbidden by the standard)." - I love that. – Bathsheba Jan 26 '18 at 09:48
  • We seem to already have some compilers that made `long` 64 bits. Those were written by morons - ISO compliant morons. – Lundin Jan 26 '18 at 09:50
  • @Lundin I wouldn't really agree on that about long having 64 bits. It is a problem *now*, concerning portability (over systems and with old code), and with some functions (e. g. `htonl`). On the long term, I'd rather consider it a good step: Why should we have two different types (int and long) both exactly the same width? [...] – Aconcagua Jan 26 '18 at 09:57
  • [...] And with upcoming 128 bits: long long is predestined for then. Maybe would even have been a good idea to move long long to 128 bits right when long got 64. If doing later, we'll have the same problems later *again*, by doing both the same time, we have them only *once*... And best introducing (u)int128_t at the same time... Unfortunately, this chance is gone already... – Aconcagua Jan 26 '18 at 09:57
  • @Aconcagua Your reasoning falls flat on `int` being 16 bits on a whole lot of systems produced even today. So `int` does not necessarily have the same size of `long`. The C programming equivalent to the saying "how long is a rope" is "how long is a `long`". This is why all professional programmers already gave up on `int` and `long` some 20+ years ago. Everyone who needs portable code with deterministic behavior (read "_bug-free code_") uses `stdint.h`. – Lundin Jan 26 '18 at 10:03
  • @Lundin No, it doesn't. I was only talking about those systems where they *did* take this step, and on those, int *has* 32 bits. On the systems where int has 16 bits, long isn't 64 either... True about stdint, though; it's probably the holy grail in this matter anyway (that's why I came up with uint128_t immediately...). Upvoted your answer for that already (and especially for [...]_fast_t!). – Aconcagua Jan 26 '18 at 10:09
  • @Aconcagua My point is that as long as there exists different systems with different integer widths at the same time, you can't set an industry de facto standard by using the native types of C. There's still people using 8 and 16 bit microcontrollers nowadays, because they got that in a packet of corn flakes (Arduino). The computer industry has not even managed to phase out 8 bitters, even though 16 bitters were invented somewhere in the 1960s or so. – Lundin Jan 26 '18 at 10:22
  • 1
    @Lundin I can't help it; your last comment even makes me feel confirmed in *my* point... I totally agree on this, the only thing I disagree with is calling the ones who made long 64 bits wide morons - because on the concrete systems where this applied, this step might have been just appropriate... – Aconcagua Jan 26 '18 at 10:29
  • @Aconcagua There exist systems that will never execute any legacy C code? I very much doubt that. It is quite possible that old code that assumes `long` is 32 bit will break horribly if `long` is changed to 64 bits. – Lundin Jan 26 '18 at 10:41
  • @Lundin And that's a reason to stop the world turning? I thought I acknowledged the problems already ("now" vs. "on the long term"). Quite a lot of legacy code breaks, too, if e. g. MS decides to drop old functions from WinAPI. Removal of deprecated features from the C or C++ standard might break old, existing code as well... – Aconcagua Jan 26 '18 at 11:39
  • @Aconcagua There's a huge difference between "old code doesn't compile any longer" and "old code compiles just fine but now it contains dormant, severe bugs". – Lundin Jan 26 '18 at 12:03
  • Hm, admitted, my examples are inappropriate. But then take this one: porting some code from one platform, where int has 16 bits, to another one, where int has 32 bits, might provoke exactly the same problems. Compiling for x64 is compiling for another platform than compiling for x86. So why should we even expect int to be the same size on both platforms? It happens to be, but it does not need to be. Same for long now... (apart from the "happens to be" part, of course). – Aconcagua Jan 26 '18 at 12:10
1

To determine how much space is wasted, if at all, you need to consider the range of values that you want to store in your int variable, not just the current value.

If your int is 32-bit in size, and you want to store positive and negative values in it in the range between -2,000,000,000 and 2,000,000,000, then you need all 32 bits, so none of the bits in your int are wasted. If, on the other hand, the range is from -30,000 to 30,000, then you could have used a 16-bit data type, so two bytes are wasted.
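
A sketch of that idea with the fixed-width types from stdint.h (the variable names are made up for illustration; whether the narrower type really saves memory in a given program also depends on alignment and padding):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  int32_t big_counter   = -2000000000; /* needs the full 32-bit range       */
  int16_t small_counter = -30000;      /* -30,000 to 30,000 fits in 16 bits */

  printf("int32_t : %zu bytes\n", sizeof big_counter);
  printf("int16_t : %zu bytes\n", sizeof small_counter);
  return 0;
}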

Note that sometimes "wasting" a few bytes comes with an improvement in speed, because a larger size happens to be the "native" size for the CPU's registers. In this case a "waste" becomes a "trade-off", because you get extra speed for using additional memory space.

Sergey Kalinichenko
  • 714,442
  • 84
  • 1,110
  • 1,523
0

Practically yes, since the value you want to store could be represented with less memory.

I mean, if you just wanted to represent binary values, 0 and 1, then one bit would suffice. Anything that uses more than one bit to represent these values consumes extra memory.

That's why some people store small values in chars.
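
For instance (a toy example; the array size is arbitrary), a table of small flag values stored as unsigned char takes a quarter of the space of the same table stored as int on a typical platform with 32-bit int:

#include <stdio.h>

int main(void)
{
  int           flags_as_int[100]  = {0}; /* typically 400 bytes */
  unsigned char flags_as_char[100] = {0}; /* exactly 100 bytes   */

  printf("as int  : %zu bytes\n", sizeof flags_as_int);
  printf("as char : %zu bytes\n", sizeof flags_as_char);
  return 0;
}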

gsamaras
  • 71,951
  • 46
  • 188
  • 305
0

"wasted" is the wrong word - all binary-digits of a type are significant in its value. However for values of a limited range of possible values, you can choose to use a smaller type. For example char is an integer type too, and typically (though not universally) 8 bit.

If you want to be explicit about storage size requirements, use the stdint.h types such as uint8_t, int8_t, uint16_t, int16_t etc.

That said, on many platforms there is often limited benefit in using the smallest possible type, since processor data alignment and register storage requirements may "waste" space in any case due to architectural restrictions or performance considerations.

On the other hand, if you are writing a file record or implementing a communications packet for example, where alignment may not be an architectural issue, then using the smaller data type may bring significant savings in space and I/O performance.

Further, you could use bitfields to specify the minimum number of bits necessary to represent a value. But what you save in storage may be offset by the additional code generated to access the bitfields, and alignment and packing remain compiler/architecture dependent, so it is not a given that there will be any saving whatsoever.
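
A sketch of such a record (the field names are invented, and the final layout and padding remain compiler/architecture dependent): fixed-width types make the intended sizes explicit, and a bitfield packs two small values into one byte:

#include <stdint.h>
#include <stdio.h>

struct record {
  uint16_t     id;         /* always 16 bits                  */
  unsigned int kind  : 4;  /* 0..15, packed into a bitfield   */
  unsigned int level : 4;  /* 0..15, shares storage with kind */
  int16_t      value;      /* always 16 bits                  */
};

int main(void)
{
  struct record r = { 42, 3, 7, -100 };

  /* sizeof may still be larger than the sum of the field widths
     because of alignment and padding; nothing here is guaranteed */
  printf("sizeof(struct record) = %zu\n", sizeof r);
  printf("id=%u kind=%u level=%u value=%d\n",
         (unsigned)r.id, (unsigned)r.kind, (unsigned)r.level, r.value);
  return 0;
}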

Clifford
  • 88,407
  • 13
  • 85
  • 165
0

Usually char, short, int, long, long long, float and double (with their corresponding unsigned types) have a specific number of bytes, as explained in the following [link][1].

For example, the compiler may use 2 bytes for a computation involving a char (which is usually 1 byte). The ARM architecture, for instance, has specific assembly instructions for manipulating 16-bit memory locations, so the compiler can choose to use 2 bytes as a trade-off between speed and space. However, the programmer need not be concerned with making these conversions, because the compiler makes them. In such cases the extra bytes are not used by your code.

LucaG
  • 74
  • 9