11

Wouldn't it have made more sense to make long 64-bit and reserve long long until 128-bit numbers become a reality?

Ciro Santilli OurBigBook.com
sj755
  • 1
    Two things: firstly, long long ain't necessarily 64 bits. Second, isn't suggesting it be 128 bits wide similarly narrow-minded - we should be preparing for 1024 bit hardware to become commonplace, right? – Mac Sep 02 '11 at 05:24
  • 1
    Actually "C compilers" do _not_ specify that `long` is 32 bit, nor that `int` is 32 bit, nor that `long long` is 64 bit. This all depends very much on the compiler... So your question is based on a false premise. – Nemo Sep 02 '11 at 05:25
  • Wouldn't it make more sense to give standard types fixed sizes (int32, int64, etc.) from the very beginning, and save us from a whole class of portability issues? Like it was done in C#, for example. – hamstergene Sep 02 '11 at 05:27
  • 2
    They finally did in C99: http://en.wikipedia.org/wiki/Stdint.h – Mysticial Sep 02 '11 at 05:31
  • @Mac I doubt we'll ever get to 1024 bits; also, we are preparing for 128-bit. Clearly you've never heard of quad-precision floating-point numbers. – sj755 Sep 02 '11 at 05:38
  • @Nemo I'm talking about regular compilers like GCC or Visual C. – sj755 Sep 02 '11 at 05:40
  • @Eugene True, but most people just use int, long, and long long. – sj755 Sep 02 '11 at 05:41
  • I'd say we'll get to 1024 bit types in the form of SIMD registers. We're at 256-bits right now with AVX. Intel has plans to go up to 1024 bits. But as for basic integers, that might take a while... – Mysticial Sep 02 '11 at 05:48
  • @seljuq70: of course I'm not suggesting that 1024 bit hardware is going to happen any time soon, or that 128 bit _isn't_. The point is that why skip the _current_ 64 bit hardware in favour of _future_ 128 bit hardware? – Mac Sep 02 '11 at 05:51
  • @seljuq70 "Most people" are _not_ using those types, every professional programmer I know of either uses stdint.h from C99 or their own typedef:ed equivalents. – Lundin Sep 02 '11 at 06:41
  • 2
    @seljuq70: `long long` can't be "reserved", since the C99 standard guarantees its existence. On a 16-bit system with a 16-bit `int`, 32-bit `long` and 64-bit `long long` they'd all be different, but those days are gone as far as desktop machines are concerned. We're not going to stick with 16-bit `int` just so that we don't feel there's a redundant type in the middle somewhere. – Steve Jessop Sep 02 '11 at 08:52
  • 1
    @Eugene - For another discussion on why not everything is fixed by the standard, see this question [Exotic-architectures-the-standard-committee-cares-about](http://stackoverflow.com/questions/6971886/exotic-architectures-the-standard-committee-cares-about) – Bo Persson Sep 02 '11 at 17:08
  • [What does the C++ standard state the size of int, long type to be?](http://stackoverflow.com/q/589575/995714) – phuclv Jul 01 '16 at 06:00

7 Answers

12

Yes, it would make sense, but Microsoft had their own reasons for defining "long" as 32 bits.

As far as I know, of all the mainstream 64-bit systems right now, Windows is the only OS where "long" is 32 bits (the LLP64 model). On 64-bit Unix and Linux it's 64 bits (LP64).

All compilers for Windows keep "long" at 32 bits to maintain compatibility with Microsoft's ABI.

For this reason, I avoid using "int" and "long". Occasionally I'll use "int" for error codes and booleans (in C), but I never use them for any code that depends on the size of the type.
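
For example, a minimal sketch (assuming a hosted C99-or-later implementation that provides the exact-width types; the variable names are just illustrative):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* "long" is 32 bits on 64-bit Windows (LLP64) but 64 bits on
       64-bit Linux/Unix (LP64), so its width is not portable. */
    printf("sizeof(int)       = %zu\n", sizeof(int));
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));

    /* The exact-width types behave the same on every platform that
       provides them. */
    int32_t flags = 0;   /* exactly 32 bits */
    int64_t offset = 0;  /* exactly 64 bits */
    (void)flags;
    (void)offset;
    return 0;
}
```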

Mysticial
  • 2
    I use `long` in cases where 32 bits is big enough, and I don't want `int_least32_t` or my own typedef all over my code. It's probably best to make the dependency obvious and explicit, and if it's in a struct you'd probably use `int32_t` to avoid bloating it where `long` is bigger, but there does come a point of "can't be bothered with this". – Steve Jessop Sep 02 '11 at 08:58
  • 2
    Many embedded devices (billions per year in 2015) use 32-bit `long`. Hardly "all the mainstream systems ... it's 64-bit". – chux - Reinstate Monica May 09 '16 at 19:48
5

The C standard does NOT specify the bit width of the primitive data types, only their minimum widths. So compilers are free to choose wider representations. When deciding the width of each primitive data type, the compiler designer has to consider several factors, including the computer architecture.

Here is a reference: http://en.wikipedia.org/wiki/C_syntax#Primitive_data_types
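
For example, a sketch (not from the answer) using C11's `_Static_assert`: since the widths are implementation-defined, code that depends on a particular choice can state that dependency explicitly. The values below match an LP64 platform such as 64-bit Linux and would deliberately fail to compile on LLP64 Windows:

```c
#include <limits.h>

/* Document (and enforce) the widths this particular code assumes.
   The build breaks on platforms that chose differently, instead of
   silently misbehaving. */
_Static_assert(CHAR_BIT == 8, "assumes 8-bit bytes");
_Static_assert(sizeof(int)  * CHAR_BIT == 32, "assumes 32-bit int");
_Static_assert(sizeof(long) * CHAR_BIT == 64, "assumes 64-bit long (LP64)");
```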

Jason Kuang
2

For historical reasons. For a long time (pun intended), "int" meant 16 bits; hence "long" was 32 bits. Of course, times changed. Hence "long long" :)

PS:

GCC (and others) currently support 128-bit integers as "__int128" and "unsigned __int128".
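
A minimal sketch, assuming a 64-bit GCC or Clang target where the extension is available:

```c
#include <stdio.h>

int main(void) {
    /* __int128 is a compiler extension, not part of standard C. */
    unsigned __int128 x = (unsigned __int128)1 << 100;  /* 2^100 */

    /* printf has no conversion specifier for 128-bit integers, so
       print the value as two 64-bit halves. */
    printf("high = %llu, low = %llu\n",
           (unsigned long long)(x >> 64),
           (unsigned long long)x);
    return 0;
}
```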

PPS:

Here's a discussion of why the folks at GCC made the decisions they did:

http://www.x86-64.org/pipermail/discuss/2005-August/006412.html

paulsm4
2

For the history, including why UNIX systems generally converged on LP64, why Windows did not (a big code base that assumed 16-bit "int" and 32-bit "long"), and the various arguments, see "The Long Road to 64 Bits - Double, double, toil and trouble" (Shakespeare, Macbeth): https://queue.acm.org/detail.cfm?id=1165766 (ACM Queue, 2006) or https://dl.acm.org/doi/pdf/10.1145/1435417.1435431 (CACM, 2009).

Note: I helped design the 64/32-bit MIPS R4000, suggested the idea that led to <inttypes.h>, and wrote the long long motivation section for C99.

0

Ever since the days of the first C compiler for a general-purpose reprogrammable microcomputer, it has often been necessary for code to make use of types that held exactly 8, 16, or 32 bits, but until 1999 the Standard didn't explicitly provide any way for programs to specify that. On the other hand, nearly all compilers for 8-bit, 16-bit, and 32-bit microcomputers define "char" as 8 bits, "short" as 16 bits, and "long" as 32 bits. The only difference among them is whether "int" is 16 bits or 32.

While a 32-bit or larger CPU could use "int" as a 32-bit type, leaving "long" available as a 64-bit type, there is a substantial corpus of code which expects that "long" will be 32 bits. While the C Standard added "fixed-sized" types in 1999, there are other places in the Standard which still use "int" and "long", such as "printf". While C99 added macros to supply the proper format specifiers for fixed-sized integer types, there is a substantial corpus of code which expects that "%ld" is a valid format specifier for int32_t, since it will work on just about any 8-bit, 16-bit, or 32-bit platform.
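
As an illustration, a sketch using only the standard C99 facilities mentioned above (the values are arbitrary):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t n = -123456789;
    int64_t big = INT64_C(9000000000);

    /* PRId32/PRId64 expand to the right specifier for this platform's
       int32_t/int64_t ("%d", "%ld", or "%lld"), so the code does not
       have to guess which basic type the implementation chose. */
    printf("n   = %" PRId32 "\n", n);
    printf("big = %" PRId64 "\n", big);
    return 0;
}
```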

Whether it makes more sense for "long" to be 32 bits, out of respect for an existing code base going back decades, or 64 bits, so as to avoid the more verbose "long long" or "int64_t" for 64-bit types, is probably a judgment call. Given that new code should probably favor specified-size types when practical, I'm not sure I see a compelling advantage to making "long" 64 bits unless "int" is also 64 bits (which would create even bigger problems with existing code).

supercat
-2

C99 N1256 standard draft

The sizes of long and long long are implementation defined; all we know are the following (see the sketch after the quotes below):

  • minimum size guarantees
  • relative sizes between the types

5.2.4.2.1 Sizes of integer types <limits.h> gives the minimum sizes:

1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown [...]

  • UCHAR_MAX 255 // 2^8 − 1
  • USHRT_MAX 65535 // 2^16 − 1
  • UINT_MAX 65535 // 2^16 − 1
  • ULONG_MAX 4294967295 // 2^32 − 1
  • ULLONG_MAX 18446744073709551615 // 2^64 − 1

6.2.5 Types then says:

8 For any two integer types with the same signedness and different integer conversion rank (see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a subrange of the values of the other type.

and 6.3.1.1 Boolean, characters, and integers determines the relative conversion ranks:

1 Every integer type has an integer conversion rank defined as follows:

  • The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
  • The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any.
  • For all integer types T1, T2, and T3, if T1 has greater rank than T2 and T2 has greater rank than T3, then T1 has greater rank than T3
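
Here is a sketch (using C11's `_Static_assert`) of what the quoted clauses guarantee on every conforming implementation; note that nothing below pins down the exact width of long or long long:

```c
#include <limits.h>

/* 5.2.4.2.1: minimum magnitudes (these always hold). */
_Static_assert(UCHAR_MAX  >= 255,                     "unsigned char: >= 8 bits");
_Static_assert(USHRT_MAX  >= 65535,                   "unsigned short: >= 16 bits");
_Static_assert(UINT_MAX   >= 65535,                   "unsigned int: >= 16 bits");
_Static_assert(ULONG_MAX  >= 4294967295UL,            "unsigned long: >= 32 bits");
_Static_assert(ULLONG_MAX >= 18446744073709551615ULL, "unsigned long long: >= 64 bits");

/* 6.2.5p8 with the rank order from 6.3.1.1: each higher-ranked type's
   range contains the lower-ranked type's range. */
_Static_assert(INT_MAX <= LONG_MAX,   "long covers int");
_Static_assert(LONG_MAX <= LLONG_MAX, "long long covers long");
```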
Ciro Santilli OurBigBook.com