Many style guides, such as Google's, recommend using int
as the default integer, for instance when indexing arrays. But on the 64-bit platforms that are now the norm, an int
is usually only 32 bits, which is not the natural width of the platform. As a consequence, I see no reason, apart from its simple name, to keep that choice. We can clearly see this when compiling the following code:
double get(const double* p, int k) {
    return p[k];
}
which gets compiled into
movslq %esi, %rsi
vmovsd (%rdi,%rsi,8), %xmm0
ret
where the first instruction sign-extends the 32-bit integer into a 64-bit integer.
If the code is transformed into
#include <cstddef>

double get(const double* p, std::ptrdiff_t k) {
    return p[k];
}
the generated assembly is now
vmovsd (%rdi,%rsi,8), %xmm0
ret
which clearly shows that the CPU feels more at home with std::ptrdiff_t
than with an int
. Many C++ users have moved to std::size_t
, but I don't want to use unsigned integers unless I really need modulo 2^n
behaviour.
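To illustrate the modulo behaviour I want to avoid, here is a classic pitfall (a hypothetical sketch, not code from my codebase):

#include <cstddef>

// Hypothetical example: iterating backwards over an array.
// With an unsigned index, i >= 0 is always true and --i wraps around to
// SIZE_MAX when i reaches 0, so the loop never terminates.
void scale_unsigned(double* p, std::size_t n) {
    for (std::size_t i = n - 1; i >= 0; --i) {
        p[i] *= 2.0;
    }
}

// With a signed index, the loop terminates as expected.
void scale_signed(double* p, std::ptrdiff_t n) {
    for (std::ptrdiff_t i = n - 1; i >= 0; --i) {
        p[i] *= 2.0;
    }
}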
In most cases, using int
does not hurt performance, since the undefined behaviour of signed integer overflow allows the compiler to internally promote an int
to a std::ptrdiff_t
in loops that deal with indices; but we clearly see from the above that the compiler does not feel at home with int
. Also, using std::ptrdiff_t
on a 64-bit platform would make overflows less likely to happen, as I see more and more people getting trapped by int
overflows when they have to deal with integers larger than 2^31 - 1
, which are becoming really common these days.
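For instance, here is a hypothetical sketch (the function name is mine) of the kind of code where this bites:

#include <cstddef>

// Hypothetical sketch: summing the lengths of many chunks.
// With an int accumulator, a total above 2^31 - 1 overflows (undefined
// behaviour); a std::ptrdiff_t accumulator is fine on a 64-bit platform.
std::ptrdiff_t total_length(const int* chunk_length, std::ptrdiff_t n) {
    std::ptrdiff_t total = 0;  // an int here would be a bug waiting to happen
    for (std::ptrdiff_t i = 0; i < n; ++i) {
        total += chunk_length[i];
    }
    return total;
}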
From what I have seen, the only thing that makes int
stand apart seems to be the fact that literals such as 5
are int
, but I don't see where it might cause any problem if we move to std::ptrdiff_t
as a default integer.
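For example, mixing an int literal with a std::ptrdiff_t index just triggers the usual arithmetic conversions, so as far as I can tell nothing surprising happens (a minimal sketch):

#include <cstddef>

double get_neighbour(const double* p, std::ptrdiff_t k) {
    // The literal 1 is an int, but it is converted to std::ptrdiff_t
    // before the addition; the index stays a 64-bit signed integer.
    return p[k + 1];
}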
I am on the verge of making std::ptrdiff_t
the de facto standard integer for all the code written in my small company. Is there a reason why it could be a bad choice?
PS: I agree that the name std::ptrdiff_t
is ugly, which is why I have typedef'ed it to il::int_t
, which looks a bit better.
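Something along these lines (a minimal sketch; il is just my company namespace):

#include <cstddef>

namespace il {
using int_t = std::ptrdiff_t;  // nicer name for the default signed integer
}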
PS: As I know that many people will recommend that I use std::size_t
as a default integer, I really want to make it clear that I don't want to use an unsigned integer as my default integer. The use of std::size_t
as a default integer in the STL was a mistake, as acknowledged by Bjarne Stroustrup and the standard committee in the video "Interactive Panel: Ask Us Anything" at times 42:38 and 1:02:50.
PS: In terms of performance, on any 64-bit platform that I know of, +
, -
and *
are compiled the same way for both int
and std::ptrdiff_t
, so there is no difference in speed. If you divide by a compile-time constant, the speed is also the same. It's only when you compute a/b
and know nothing about b
that using a 32-bit integer on a 64-bit platform gives a slight performance advantage, but this case is so rare that I don't see it as a reason to move away from std::ptrdiff_t
. When we deal with vectorized code, there is a clear difference, and the smaller the type, the better, but that's a different story, and there would still be no reason to stick with int
. In those cases, I would recommend using the fixed-size integer types of C++.
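As a minimal sketch of what I mean (hypothetical function, not code from my codebase), a kernel where 32-bit elements let the compiler pack twice as many lanes per SIMD register as 64-bit ones:

#include <cstddef>
#include <cstdint>

// Hypothetical sketch: with std::int32_t elements, a vectorizing compiler can
// process twice as many elements per SIMD register as with 64-bit integers.
void add(std::int32_t* a, const std::int32_t* b, std::ptrdiff_t n) {
    for (std::ptrdiff_t i = 0; i < n; ++i) {
        a[i] += b[i];
    }
}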