Whenever I need to average two numbers for an algorithm like binary search, I always do something like this:
int mid = low + ((high - low) / 2);
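To see why I do it this way, here's a minimal runnable comparison (the values are just ones I made up so that low + high exceeds Integer.MAX_VALUE):

    public class MidpointDemo {
        public static void main(String[] args) {
            int low = 2_000_000_000;   // made-up values: their true sum doesn't fit in an int
            int high = 2_100_000_000;
            System.out.println(low + ((high - low) / 2)); // 2050000000 (correct)
            System.out.println((low + high) / 2);         // -97483648 (wrapped sum, wrong)
        }
    }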
I recently saw another way to do it in this post, but I don't understand it. It says you can do this in Java:
int mid = (low + high) >>> 1;
or this in C++:
int mid = ((unsigned int)low + (unsigned int)high) >> 1;
The C++ version essentially makes both operands unsigned, so the addition happens in unsigned arithmetic (which, unlike signed overflow, is well defined in C++) and the shift is a logical shift instead of an arithmetic (sign-extending) shift. I understand what both of these pieces of code are doing, but how does this solve the overflow issue? I thought the whole issue was that the intermediate value high + low could overflow?
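To make the concern concrete, this is the intermediate overflow I mean (same made-up values as before):

    public class OverflowDemo {
        public static void main(String[] args) {
            int low = 2_000_000_000;   // made-up values whose true sum is 4_100_000_000
            int high = 2_100_000_000;
            int sum = low + high;      // 4_100_000_000 doesn't fit in a signed 32-bit int
            System.out.println(sum);   // prints -194967296: the sum wrapped negative
        }
    }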
Edit:
Oh, duh. None of the answers answered my question exactly, but it was @John Zeringue's answer that made it click. I'll try to explain here.
The issue with (high + low)/2 in Java isn't exactly that high + low overflows (it does wrap around, since both operands are signed ints, but all the bits of the true sum are still there; no information is lost). The real issue is the division. The division operates on a signed value, so the wrapped sum is interpreted as negative and the result comes out negative and wrong. The unsigned shift >>> 1 also divides by two, but it considers the bits instead of the sign, effectively treating the sum as unsigned, so the correct average comes out.
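A quick Java check bears this out (again with made-up values that force the wrap):

    public class ShiftVsDivide {
        public static void main(String[] args) {
            int low = 2_000_000_000;
            int high = 2_100_000_000;
            int sum = low + high;          // wraps to -194967296, but the 32 bits
                                           // still encode the true sum 4_100_000_000
            System.out.println(sum / 2);   // -97483648: signed division sees a negative
            System.out.println(sum >>> 1); // 2050000000: unsigned shift divides the raw bits
            System.out.println((low + high) >>> 1); // same result in one step
        }
    }

This works because two non-negative ints are each at most 2^31 - 1, so their true sum is at most 2^32 - 2 and always fits in 32 bits when read as unsigned.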