
Why is the 'typical bit width' of unsigned and signed short int data types classed as 'range'? Does this mean they are likely to be any number of bytes? If so why when the 'typical range' is predictable (0 to 65,535 & -32768 to 32767) as with other data types?

newprogrammer
    The *typical* size of a `short` (signed or unsigned) is usually `2` bytes on modern PC-type systems. That is, 16 bits. – Some programmer dude Oct 28 '19 at 13:30
    It is architecture dependent. `int` should not be smaller than a `short` and should be at least 16 bits, but on 32-bit machines it is often 32 bits, and it could just as well be 64 bits. Meanwhile, a `short int` should be at least 16 bits and not smaller than a `char`. A `char` should be at least 8 bits... – JHBonarius Oct 28 '19 at 13:31
    See table __Properties__ on about the 2nd page: https://en.cppreference.com/w/cpp/language/types – Richard Critten Oct 28 '19 at 13:32
  • Can you be more specific about what you are asking? Are you quoting a document when you put "range" in quotes? Why exactly are you surprised that the typical range is "predictable"? Note that the reason for not exactly specifying integral sizes (and hence value ranges) like e.g. Java does it is performance: An `int` is the "natural" size, for example a register size, on a given platform (and C runs on more platforms than just PCs). – Peter - Reinstate Monica Oct 28 '19 at 13:34
    That's why I like to use `uint16_t` and `int16_t` types. It's quite obvious how many bits are in the variable. – Swedgin Oct 28 '19 at 13:38
    Imagine you have a machine that has a 16 bit data type but 2 of those bits are used for some error checking purpose. Would you say that type is okay to be used as a `short int` if the standard said the size has to be 16 bits? – NathanOliver Oct 28 '19 at 13:38

2 Answers


It's both sensible and intuitive to describe the possible values of an integer in terms of its numerical range.

I realise that it's tempting to focus on implementation details, like "how many bits there are" or "how many bytes it takes up", but we're not in the 1970s any more. We're not creating machine instructions on punchcards. C++ and C are abstractions. Think in terms of semantics and in terms of behaviours and you'll find your programming life much easier.

The author of the information you're looking at is following that rule.

Lightness Races in Orbit

Why is the 'typical bit width' of unsigned and signed short int data types classed as 'range'?

In math, "range" is (depending on context) synonymous with "interval". An interval is a set of numbers lying between two endpoints (a minimum and a maximum value). The set of values of any integer type is such an interval, and as such may be referred to as a range.

The minimum range that a signed `short` must cover, as specified by the C11 standard, is [−32,767, +32,767]; an unsigned `short` must cover at least [0, 65,535].

Does this mean they are likely to be any number of bytes?

That does not follow from the word "range", but the number of bytes is indeed implementation-defined. At least 16 bits are required to represent the minimum range, which takes one or two bytes depending on the size of a byte (a byte is at least 8 bits wide).

What number of bytes is "likely" depends on what system one is likely to use.

If so why

Because that allows the language to be usable on a wide variety of CPU architectures, which have different byte sizes, different representations for signed integers, and different instruction sets supporting different register widths.

eerorika