The generated code is likely to be identical, especially for such a simple case. With more complex math, signed types (if you can safely use them) are somewhat more optimizable because the compiler is allowed to assume they never overflow. With signed types, you also won't get unpleasant surprises if you decide to compare against a negative index.
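For illustration, here is a minimal sketch of where the no-overflow assumption can matter (the function names are mine; whether a given compiler actually exploits this varies):

```cpp
#include <cstddef>

// Signed index: the compiler may assume 2 * i never overflows
// (signed overflow is undefined behaviour), which can simplify
// the index arithmetic and bounds reasoning.
long sum_even_signed(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i)
        s += a[2 * i];
    return s;
}

// Unsigned index: 2 * i may legally wrap around, so the compiler
// must preserve modular semantics and loses that assumption.
long sum_even_unsigned(const int *a, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[2 * i];
    return s;
}
```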
So, to sum it up:
|          | very large arrays | negative-comparison safe | maybe faster |
|----------|-------------------|--------------------------|--------------|
| `int`    | no                | yes                      | yes          |
| `size_t` | yes               | no                       | no           |
it looks like `int` could be preferable for ranges that are definitely small (< 2^15, or perhaps < 2^31 if all your target architectures guarantee that), unless you can think of another criterion where `size_t` wins.
The advantage of `size_t` is that it will definitely work for any array, no matter the size, as long as you aren't comparing against negative indices. (This may be much more important than negative-comparison safety and the potential speed gains from undefined overflow.)
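To make the negative-indices caveat concrete, here is a short sketch of the pitfall (the variable names are mine):

```cpp
#include <cstddef>
#include <iostream>

int main() {
    std::size_t i = 0;
    int lowest_valid = -1;  // hypothetical lower bound

    // The usual arithmetic conversions turn -1 into SIZE_MAX here
    // (on typical platforms where size_t outranks int), so the test
    // is false even though 0 > -1 mathematically.
    if (i > lowest_valid)
        std::cout << "taken\n";
    else
        std::cout << "not taken: -1 was converted to SIZE_MAX\n";
}
```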
`ssize_t` (the signed counterpart of `size_t`, provided by POSIX) combines the best of both, unless you need every last bit of `size_t` (which you definitely don't on a 64-bit machine).
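For example, a downward loop is natural with a signed index but broken with `size_t` (a sketch; the function name is mine):

```cpp
// ssize_t comes from <sys/types.h> on POSIX systems; std::ptrdiff_t
// in <cstddef> is the portable C++ alternative if POSIX isn't available.
#include <sys/types.h>

void fill_descending(int *a, ssize_t n) {
    // With a signed index the loop terminates naturally: i reaches -1
    // and the condition fails. With size_t, `i >= 0` is always true
    // and decrementing past 0 wraps around to SIZE_MAX.
    for (ssize_t i = n - 1; i >= 0; --i)
        a[i] = static_cast<int>(i);
}
```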