Take a C++ integral variable i, and suppose that you're multiplying its value by 2. If i is signed, I believe that the operation is somewhat equivalent, at least mathematically, to:
i = i << 1;
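(As a quick sanity check, and bearing in mind that left-shifting a negative signed value is undefined behaviour before C++20, I only mean this for non-negative values:)

#include <cassert>

int main() {
    int i = 21;
    assert(i * 2 == (i << 1)); // 42 either way, for non-negative i
}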
But if i's type is unsigned, then since unsigned arithmetic does not overflow but is instead performed modulo 2^N (one more than the type's maximum value), presumably the operation is something like this:
i = (i << 1) & (decltype(i))-1;
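For instance, here is a minimal demonstration of the wrap-around I mean (using std::uint32_t so the modulus is unambiguously 2^32):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t u = 0x80000000u; // 2^31
    u *= 2;                        // 2^32 mod 2^32 == 0: the value wraps
    std::cout << u << '\n';        // prints 0
}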
Now, I figure that the actual machine instructions will probably be more concise than a sequence of shifts and masks. But does a modern CPU, say an x86, have a specific instruction for unsigned/modulo arithmetic? Or does arithmetic on unsigned values tend to cost an additional instruction compared to arithmetic on signed values?
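For what it's worth, this is the kind of minimal pair I would feed to a compiler (e.g. on Compiler Explorer) to find out; the assembly in the comments is only what I'd guess a typical optimising x86-64 compiler emits, not guaranteed output:

int      twice_signed(int x)        { return x * 2; } // e.g. lea eax, [rdi+rdi]
unsigned twice_unsigned(unsigned x) { return x * 2; } // e.g. lea eax, [rdi+rdi], identical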
(Yes, it would be ridiculous to care about this whilst programming; I'm interested out of pure curiosity.)