The standard library `vector::size()` gives a `size_t`, an unsigned number. In one of the CppCon talks I have heard somebody (was it Chandler Carruth?) say that this is unfortunate and that it should rather use signed integers.
The background is that overflow is not defined for signed integers, therefore the compiler has much more leeway. In one talk, Carruth showed how a `uint8_t` as a `for` loop index in bzip2 generates many more machine instructions on x86 than an `int8_t`, because the compiler has to explicitly emulate the defined wraparound with masks and shifts.
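To make the mechanism concrete, here is a minimal sketch of my own (not the actual bzip2 code) contrasting an unsigned 8-bit index, whose wraparound semantics the compiler must preserve, with a plain `int` index, where overflow is undefined behavior:

```cpp
#include <cstdint>

// Illustrative sketch, not the bzip2 source: the uint8_t index wraps
// to 0 at 256 by definition, so the compiler must in general keep the
// index reduced modulo 256 (masking/zero-extension on x86).
long sum_u8(const long* data, uint8_t n) {
    long total = 0;
    for (uint8_t i = 0; i < n; i++)
        total += data[i];
    return total;
}

// With a signed index, overflow is undefined behavior, so the compiler
// may assume i never wraps and keep it in a full-width register.
long sum_int(const long* data, int n) {
    long total = 0;
    for (int i = 0; i < n; i++)
        total += data[i];
    return total;
}
```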
In the code that I now work on, there are certain sizes which are strictly positive. These are represented as `size_t`. That seems sensible because it signals that they cannot be negative. On the other hand, we have no need for the defined modular arithmetic, so as long as a signed integer is large enough (the sizes only go to around 200), an unsigned integer has the wrong interface for the arithmetic that we want.
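To illustrate the "wrong interface": subtracting two small unsigned sizes silently wraps around instead of going negative. A minimal example:

```cpp
#include <cstddef>
#include <iostream>

int main() {
    std::size_t a = 3, b = 5;
    // Unsigned subtraction is modular: on a 64-bit platform this
    // prints 18446744073709551614 rather than -2.
    std::cout << a - b << '\n';
    return 0;
}
```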
At some point in the code there are loops from 0 to this size. And then the loop indices are subtracted and the absolute value is taken.
When I compiled it with my more modern GCC 7, it could not resolve the proper overload of `std::abs`: apparently the unsigned result of `size_t - size_t` converts equally well to several of the overloads, so the call is ambiguous. I have changed the code to use `int` for the loop indices:
```cpp
for (int t1 = 0; t1 < Lt; t1++) {
    for (int t2 = 0; t2 < Lt; t2++) {
```
Now the `abs(t1 - t2)` works just fine. But the comparison `t1 < Lt` gives a warning because it is a comparison between signed and unsigned numbers.
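For reference, a self-contained snippet that reproduces both diagnostics when compiled with `-Wall` (the container and names are stand-ins for my actual code):

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

int main() {
    std::vector<double> v(10);
    std::size_t Lt = v.size();

    // With size_t operands, GCC 7 rejects the call:
    // std::size_t u1 = 3, u2 = 5;
    // std::abs(u1 - u2);  // error: call of overloaded 'abs' is ambiguous

    for (int t1 = 0; t1 < Lt; t1++) {     // warning: comparison between
        for (int t2 = 0; t2 < Lt; t2++) { // signed and unsigned integers
            (void)std::abs(t1 - t2);      // fine: resolves to abs(int)
        }
    }
    return 0;
}
```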
What is the right approach?
- Use unsigned integers for everything that is non-negative and then use `static_cast<int>()` whenever I need to do a subtraction.
- Use signed integers for the loop indices but unsigned integers for the sizes of the containers. Then use `static_cast<int>` in the comparisons.
- Just use signed integers everywhere. When other libraries return unsigned integers, use `static_cast<int>` there in order to satisfy the warnings. (See the sketch after this list.)
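For concreteness, here is what the third option would look like for the loops above; the function name and the surrounding setup are mine, and it assumes (as holds here, with sizes around 200) that the size comfortably fits into an `int`:

```cpp
#include <cstdlib>
#include <vector>

// Sketch of the third option: convert the unsigned size once at the
// boundary, then work with signed integers throughout.
void pairwise_distances(const std::vector<double>& v) {
    int Lt = static_cast<int>(v.size());  // assumed to fit into an int
    for (int t1 = 0; t1 < Lt; t1++) {
        for (int t2 = 0; t2 < Lt; t2++) {
            int d = std::abs(t1 - t2);  // unambiguous: abs(int)
            (void)d;                    // ... use d ...
        }
    }
}
```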