
According to the Wikipedia article on C_data_types, there's a part that mentions how all of the integer types can be 64-bit:

The actual size of the integer types varies by implementation. The standard only requires size relations between the data types and minimum sizes for each data type:

The relation requirements are that the long long is not smaller than long, which is not smaller than int, which is not smaller than short. As char's size is always the minimum supported data type, no other data types (except bit-fields) can be smaller.

The minimum size for char is 8 bits, the minimum size for short and int is 16 bits, for long it is 32 bits and long long must contain at least 64 bits.

The type int should be the integer type that the target processor is most efficiently working with. This allows great flexibility: for example, all types can be 64-bit. However, several different integer width schemes (data models) are popular. Because the data model defines how different programs communicate, a uniform data model is used within a given operating system application interface.[8]

Yet, I do observe clear data type sizes when I code in C on most machines. Is the article just saying that most operating systems enforce a uniform data model, which fixes the sizes of the data types so that programs and machines can communicate easily?
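
For context, here is the kind of consistency I mean: a small check like the following (a minimal sketch; the exact numbers depend on the compiler and target) prints the same sizes every time on the machines I use:

```c
#include <stdio.h>

int main(void)
{
    /* sizeof reports a size in bytes (units of char); multiply by
       CHAR_BIT from <limits.h> to get bits if needed. */
    printf("char:      %zu\n", sizeof(char));
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    return 0;
}
```

On a typical x86-64 Linux build this prints 1, 2, 4, 8, 8.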

rapidDev
  • That's because the interesting machines with 36-bit and 60-bit words are no longer in wide-spread use. Everything is boringly 8, 16, 32, 64, … bits. Time was when machines used 16 bits for an `int`; very few still do that (outside the low end of the embedded space, at any rate). There weren't any 64-bit types in many machines, either. – Jonathan Leffler Oct 11 '18 at 04:59
  • The user community, in a sense, enforces a "uniform data model" that is more restrictive than the spec allows. Example: when the range of `char/unsigned char/signed char` is not a smaller range than that of `int/unsigned` (which C allows), all hell breaks loose with `fgetc()`. C allows `INT_MAX == UINT_MAX`, yet that would break lots of code that _assumes_ `INT_MAX == UINT_MAX/2`. – chux - Reinstate Monica Oct 11 '18 at 06:10

1 Answer


Yet, I do observe clear data type sizes when I code in C on most machines.

The widths of the types are fixed in any one particular C implementation. The Wikipedia article is telling you they may be different in different C implementations.

For this purpose, a C implementation is the one provided by a particular compiler and its associated tools (linker, standard C library) and the settings used with it. (Some compilers have switches for selecting different widths for some types. Each selection of such settings is technically a different C implementation.)
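
As a rough illustration (a minimal sketch, assuming a hosted implementation that provides `<limits.h>` and `<stdint.h>`), the same source reports whatever widths the implementation that built it chose, and the fixed-width types are the usual way to take that choice out of the picture when an exact width matters:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* These numbers are a property of the implementation (compiler,
       settings, target), not of the C language itself. */
    printf("int is %d bits, long is %d bits\n",
           (int)(sizeof(int) * CHAR_BIT),
           (int)(sizeof(long) * CHAR_BIT));

    /* int32_t and int64_t are exactly 32 and 64 bits wide on any
       implementation that provides them. */
    int32_t a = 0;
    int64_t b = 0;
    printf("int32_t is %d bits, int64_t is %d bits\n",
           (int)(sizeof a * CHAR_BIT),
           (int)(sizeof b * CHAR_BIT));
    return 0;
}
```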

There is no uniform data model. There are very common models. Operating systems cannot enforce type widths (in part because computers are practically universal Turing machines), although it can be a nuisance to communicate with the operating system if types are mismatched.
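
As a sketch (the model names are common conventions, not something the C standard defines), a program can infer which of the popular models it was built under from the sizes of `int`, `long`, and pointers:

```c
#include <stdio.h>

int main(void)
{
    /* Classify by the sizes of int, long, and pointers; these
       combinations correspond to the widespread data models. */
    if (sizeof(int) == 4 && sizeof(long) == 8 && sizeof(void *) == 8)
        puts("LP64 (typical of 64-bit Unix-like systems)");
    else if (sizeof(int) == 4 && sizeof(long) == 4 && sizeof(void *) == 8)
        puts("LLP64 (typical of 64-bit Windows)");
    else if (sizeof(int) == 4 && sizeof(long) == 4 && sizeof(void *) == 4)
        puts("ILP32 (typical of 32-bit systems)");
    else
        puts("some other data model");
    return 0;
}
```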

Eric Postpischil