6

In C++, is there any benefit to using long over int?

It seems that long is the default word size for x86 and x86_64 architectures (32 bits on x86 and 64 bits on x86_64, while int is 32 bits on both), which should (theoretically) be faster when doing arithmetic.

The C++ standard guarantees that sizeof(int) <= sizeof(long), yet long seems to match the machine word size on both 32-bit and 64-bit systems, so should long be preferred over int where possible when writing code meant to be portable across both architectures?
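A quick way to check what a particular compiler actually uses (the sizes are implementation-defined, so the output varies by platform):

```cpp
#include <iostream>

int main() {
    // Both sizes are implementation-defined; this only reports what the
    // current compiler/ABI happens to use (e.g. long is 8 bytes on
    // Linux x86_64 but 4 bytes on 64-bit Windows).
    std::cout << "sizeof(int):  " << sizeof(int)  << '\n';
    std::cout << "sizeof(long): " << sizeof(long) << '\n';
}
```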

Siddiqui
  • On Windows, `long` is 32 bits. On Linux, `long` is 64 bits. That breaks a lot of applications. – Mysticial Jul 27 '12 at 06:36
  • Related: [What is the difference between an int and a long in C++?](http://stackoverflow.com/q/271076/11343) – CharlesB Jul 27 '12 at 06:47

3 Answers

5

long is guaranteed to be at least 32 bits whereas int is only guaranteed to be at least 16 bits. When writing a fully portable program you can use long where the guaranteed range of an int is not sufficient for your needs.
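For example, a count that can exceed 32767 already needs long to be strictly portable; a minimal sketch assuming only the standard's minimum ranges:

```cpp
#include <climits>
#include <iostream>

int main() {
    // INT_MAX is only required to be at least 32767, so a value such as
    // 100000 could overflow int on a strictly minimal (16-bit int) platform.
    // LONG_MAX is required to be at least 2147483647, so long is safe here.
    long population = 100000L;
    std::cout << "INT_MAX on this platform: " << INT_MAX << '\n';
    std::cout << "population: " << population << '\n';
}
```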

In practice, though, many people make the implicit assumption that int is larger than the standard guarantees, because they only target platforms where that holds. In these situations it usually doesn't matter much.

int is supposed to be the "natural" word size for the system; in theory operations on long might be more expensive, but on many architectures they are not, even where long is actually wider than int.

CB Bailey
4

If you need integer types that will remain the same size across different platforms, you want the types in <stdint.h>.

For instance, if you absolutely need a 32-bit unsigned integer, you want uint32_t. If you absolutely need a 64-bit signed integer, you want int64_t.
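A minimal sketch of what that looks like (in C++ the same types are also available via <cstdint>; the variable names are just illustrative):

```cpp
#include <cstdint>   // or <stdint.h>, which also works in C++
#include <iostream>

int main() {
    std::uint32_t crc      = 0xDEADBEEFu;    // exactly 32 bits, unsigned, on every platform
    std::int64_t  file_pos = 6000000000LL;   // exactly 64 bits, signed, even where long is 32 bits
    std::cout << crc << ' ' << file_pos << '\n';
}
```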

atomicinf
  • Agreed. But please use the `xintX_t` types as sparingly as possible. These can impact performance severely if the code is ported to a less forgiving CPU architecture. – wallyk Jul 27 '12 at 06:49
  • @wallyk: Of course code being buggy because assumptions about the size of the types no longer hold hurts even more, so I would rather risk the performance penalty than the buggy code (of course the code shouldn't make more assumptions about the size of a type than what's guaranteed by the standard, but it still happens often enough) – Grizzly Jul 27 '12 at 10:41
1

What is faster and what is not is becoming harder to predict every day. The reason is that processors are no longer "simple", and with all the complex dynamics and algorithms behind them, the final speed may follow rules that are totally counter-intuitive.

The only way out is to measure and decide. Also note that what is faster depends on the little details, and even for compatible CPUs what is an optimization for one can be a pessimization for the other. For very critical parts, some software tries different approaches and checks their timings at run time during program initialization.
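A rough sketch of that "measure and decide" approach using <chrono>; the workload and sizes here are placeholders, not a rigorous benchmark:

```cpp
#include <chrono>
#include <iostream>
#include <vector>

// Time the same summation with two integer types and compare the results
// on the machine you actually care about.
template <typename T>
long long sum_microseconds(const std::vector<T>& v) {
    auto start = std::chrono::steady_clock::now();
    T total = 0;
    for (T x : v) total += x;                  // the work being measured
    auto stop = std::chrono::steady_clock::now();
    volatile T sink = total;                   // keep the sum from being optimized away
    (void)sink;
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main() {
    std::vector<int>  as_int(10000000, 1);
    std::vector<long> as_long(10000000, 1);
    std::cout << "int:  " << sum_microseconds(as_int)  << " us\n";
    std::cout << "long: " << sum_microseconds(as_long) << " us\n";
}
```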

That said, as a general rule the fastest integer you can have is int. You should use other integer types only if you need them specifically (e.g. if long is larger and you need the extra range, or if short is smaller but sufficient and you need to save memory).

Even better, if you need a specific size, use a fixed-width standard type or add a typedef instead of just sprinkling long around where you need it. This way it will be easier to support different compilers and architectures, and the intent will be clearer for whoever reads the code in the future.
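For example (the alias names here are hypothetical, only to illustrate the idea):

```cpp
#include <cstdint>
#include <iostream>

// One place decides the representation; call sites state intent, not a raw type.
typedef std::int64_t  file_offset_t;   // needs the full 64 bits even on 32-bit targets
typedef std::uint32_t pixel_count_t;   // exactly 32 bits everywhere

file_offset_t advance(file_offset_t current, file_offset_t delta) {
    return current + delta;
}

int main() {
    std::cout << advance(4000000000LL, 1000) << '\n';
}
```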

6502