-3

In C, integer and short integer variables seem identical: both range from -32768 to 32767, and both require the same number of bytes, namely 2.

So why are two different types necessary?

Telemachus
  • 19,459
  • 7
  • 57
  • 79
FrOgY
  • 151
  • 5

5 Answers

6

Basic integer types in the C language do not have strictly defined ranges. They only have minimum range requirements specified by the language standard. That means that your assertion about int and short having the same range is generally incorrect.

Even though the minimum range requirements for int and short are the same, in a typical modern implementation the range of int is usually greater than the range of short.
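
You can see the actual ranges your implementation uses via the constants in `<limits.h>` (the exact values printed are, of course, implementation-specific):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only guarantees that short and int each cover at
       least -32767..32767; a typical modern implementation gives
       int a much wider range than short. */
    printf("short: %d to %d\n", SHRT_MIN, SHRT_MAX);
    printf("int:   %d to %d\n", INT_MIN, INT_MAX);
    return 0;
}
```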

AnT stands with Russia
  • 312,472
  • 42
  • 525
  • 765
2

As far as I remember, the standard only guarantees `sizeof(short) <= sizeof(int) <= sizeof(long)`. So short and int can be the same size, but don't have to be. 32-bit compilers usually have a 2-byte short and a 4-byte int.
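
A quick way to check what a particular implementation does (the values printed vary by compiler and platform):

```c
#include <stdio.h>

int main(void)
{
    /* A typical 32-bit compiler prints 2, 4, 4; a typical 64-bit
       Linux compiler prints 2, 4, 8. */
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    return 0;
}
```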

Tomek
  • 4,554
  • 1
  • 19
  • 19
  • `long` is at least 32 bits, whereas the others are at least 16. – chris Jul 09 '13 at 22:21
  • This would be a better answer if you quoted the spec. I'm pretty sure that the spec guarantees *minimum* sizes for each data type, not that one size is greater than or equal to another. – Robert Harvey Jul 09 '13 at 22:25
  • Pedantically speaking, the `sizeof` requirement, if I remember correctly, is actually from C++. C does not have this requirement. C requirements are expressed in terms of ranges, not in terms of object sizes. – AnT stands with Russia Jul 09 '13 at 22:29
0

The guaranteed minimum ranges of int and short are the same. However, an implementation is free to define short with a smaller range than int (as long as it still meets the minimum), which means that it may be expected to take the same or smaller storage space than int¹. The standard says of int that:

A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment.

Taken together, this means that (for values that fall into the range -32767 to 32767) portable code should prefer int in almost all cases. The exception would be where a very large number of values are being stored, such that the potentially smaller storage space occupied by short is a consideration.
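
To illustrate that exception (the element count here is an arbitrary example), the difference only shows up in bulk storage; on an implementation with a 2-byte short and a 4-byte int this prints 2000000 versus 4000000:

```c
#include <stdio.h>

#define COUNT 1000000   /* arbitrary example of "a very large number" */

static short many_shorts[COUNT];
static int   many_ints[COUNT];

int main(void)
{
    printf("short array: %zu bytes\n", sizeof many_shorts);
    printf("int array:   %zu bytes\n", sizeof many_ints);
    return 0;
}
```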


1. Of course, a pathological implementation is free to define a short that has a larger size in bytes than int, as long as it still has equal or lesser range - there is no good reason to do so, however.
caf
  • 233,326
  • 40
  • 323
  • 462
0

The C++ standard (and the C standard, which has a very similar paragraph) says the following; the quote is from the n3337 version of the C++11 draft specification:

Section 3.9.1, point 2:

There are five standard signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.

Different architectures have different-sized "natural" integers, so a 16-bit architecture will naturally calculate a 16-bit value, whereas a 32- or 64-bit architecture will use either 32- or 64-bit ints. It's a choice for the compiler producer (or the definer of the ABI for a particular architecture, which tends to be a decision formed by a combination of the OS and the "main" compiler producer for that architecture).

In modern C and C++, there are types along the lines of int32_t that are guaranteed to be exactly 32 bits. This helps portability. If these types aren't sufficient (or the project is using a not-so-modern compiler), it is a good idea NOT to use int in a data structure or type that needs a particular precision/size, but to define a uint32 or int32 or something similar that can be used in all places where the size matters.
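
For example, with any reasonably modern compiler, the fixed-width and "fast" types from `<stdint.h>` make the intent explicit (the variable names here are just for illustration):

```c
#include <inttypes.h>  /* PRId32 format macro */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t      exact = 123456; /* exactly 32 bits on any conforming platform */
    int_fast16_t fast  = 42;     /* at least 16 bits, whatever is fastest */

    printf("exact = %" PRId32 " (%zu bytes)\n", exact, sizeof exact);
    printf("fast occupies %zu bytes here\n", sizeof fast);
    return 0;
}
```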

In a lot of code, the size of a variable isn't critical, because the values stay within such a range that a few thousand is way more than you ever need. For example, the number of characters in a filename is defined by the OS, and I'm not aware of any OS where a filename/path is more than 4K characters, so a 16-, 32- or 64-bit value that can go to at least 32K would be perfectly fine for counting that; it doesn't really matter what size it is. Here we SHOULD use int and not try to use a specific size. In any given compiler, int should be an "efficient" type, so it should give good performance; some architectures will run slower if you use short, and 16-bit architectures will certainly run slower using long.

Mats Petersson
  • 126,704
  • 14
  • 140
  • 227
-1

They are both identical on a 16-bit IBM-compatible PC. However, it is not certain that they will be identical on other hardware as well.

A VAX-type system (VAX stands for Virtual Address eXtension) treats these two types differently: it occupies 2 bytes for a short integer and 4 bytes for an integer.

So this is the reason we have two different types, even if on some platforms they happen to be identical.

For general-purpose work on desktops and laptops, we use int.

FrOgY
  • 151
  • 5
  • 1
    Kinda kills platform independence, doesn't it? – Robert Harvey Jul 09 '13 at 22:18
  • Why are you answering your own question, and discussing systems that are 20 years old? – Mats Petersson Jul 09 '13 at 22:19
  • just for knowledge sharing – FrOgY Jul 09 '13 at 22:20
  • @MatsPetersson: http://meta.stackexchange.com/questions/17463/can-i-answer-my-own-questions-even-if-i-knew-the-answer-before-asking. Just treat the question like you would any other SO question. – Robert Harvey Jul 09 '13 at 22:20
  • @rodrigo, I find that these don't get a lot of votes unless the shared knowledge deserves way more rep than they get, even if they end up with dozens of votes. – chris Jul 09 '13 at 22:23
  • 2
    @ChintanfRoGyGurjar You'll find you get more reputation if the knowledge you are sharing is correct. – Philip Kendall Jul 09 '13 at 22:24
  • It's a nice attempt at a self-answered question, but I'm not sure I agree with your conclusions. The C standard should be pretty clear about this. – Robert Harvey Jul 09 '13 at 22:25
  • @RobertHarvey: Not really - you write platform-independent code by relying only on the minimum guaranteed ranges. A consequence of this is that there's very little reason to use `short` at all in such code - the only reason you would do so is if you were storing a large number of values, such that the potentially smaller storage size of `short` was significant. – caf Jul 09 '13 at 22:28