What is fast and what is not is becoming harder to predict every day. The reason is that processors are no longer "simple", and with all the complex dynamics and algorithms behind them, the final speed can follow rules that are totally counter-intuitive.
The only way out is to measure and decide. Also note that what is faster depends on the small details, and even between compatible CPUs, what is an optimization for one can be a pessimization for another. For very critical parts, some software tries different approaches at run time during program initialization and keeps whichever times best.
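As a rough sketch of that last idea (the two candidate routines and the init function here are hypothetical, only the shape of the approach matters), the program can time the alternatives once at startup and route all later calls through a function pointer:

```c
#include <stddef.h>
#include <time.h>

/* Two hypothetical, interchangeable implementations of the same operation. */
static long sum_forward(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

static long sum_backward(const int *a, size_t n) {
    long s = 0;
    for (size_t i = n; i-- > 0; )
        s += a[i];
    return s;
}

/* The pointer the rest of the program calls; picked once at initialization. */
static long (*sum_impl)(const int *, size_t) = sum_forward;

static double seconds_taken(long (*f)(const int *, size_t),
                            const int *a, size_t n) {
    clock_t t0 = clock();
    volatile long sink = f(a, n);   /* volatile so the call is not optimized away */
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

/* Call once during program initialization with a representative sample. */
void pick_fastest_sum(const int *sample, size_t n) {
    if (seconds_taken(sum_backward, sample, n) <
        seconds_taken(sum_forward, sample, n))
        sum_impl = sum_backward;
}
```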
That said, as a general rule the fastest integer you can have is `int`. You should use other integer types only if you specifically need them (e.g. if `long` is larger and you need the extra range, or if `short` is smaller but sufficient and you need to save memory).
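For example (the variable names are made up, only the type choices matter):

```c
/* An ordinary counter or index: plain int is the natural, fast default. */
int count_negatives(const int *a, int n) {
    int found = 0;
    for (int i = 0; i < n; i++)
        if (a[i] < 0)
            found++;
    return found;
}

/* Needs at least 32 bits of range: the standard only guarantees 16 bits
   for int but 32 bits for long, so long is the safe choice here. */
long total_bytes = 0L;

/* A million values that all fit in 16 bits: short saves a lot of memory
   compared to int on typical platforms. */
short samples[1000000];
```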
Even better, if you need a specific size, use a fixed-width standard type or add a `typedef` instead of just sprinkling `long` around where you need it. That way it will be easier to support different compilers and architectures, and the intent will also be clearer to whoever reads the code in the future.