
How much will it affect the performance if I use:

n>>1 instead of n/2
n&1 instead of n%2!=0
n<<3 instead of n*8
n++ instead of n+=1
and so on...

and if it does increase performance, please explain why.

  • A good optimizing compiler will recognize these code snippets and will generate the same code. A question like yours may have had relevance decades ago, but not in this day and age. – PaulMcKenzie Jul 26 '20 at 05:10
  • Probably no difference. The compiler will optimize the crap out of whatever you write, and swap it for the fastest version it can conceive of, and if you don't optimize, why the smurf are you concerned about speed? – user4581301 Jul 26 '20 at 05:10
  • Does this answer your question? [C++ style vs. performance?](https://stackoverflow.com/questions/4171858/c-style-vs-performance) – Andreas is moving to Codidact Jul 26 '20 at 05:13
  • You could write a small program that executes these millions of times and time them (see the timing sketch after these comments) – Paul Baxter Jul 26 '20 at 05:14
  • You could write a small program and look at the object code that [your compiler](https://godbolt.org/) generates. – john Jul 26 '20 at 05:16
  • This question is impossible to answer without knowing what type `n` is. For unsigned integer types, it's trivial for the compiler to transform one into the other. For signed integer types or for types where the operators are overloaded, the operations might not be equivalent. – jamesdlin Jul 26 '20 at 05:21
  • There is a concept known as premature optimisation. This question demonstrates that concept in spades. – Peter Jul 26 '20 at 05:38
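
Picking up the benchmarking suggestion from the comments: below is a minimal timing sketch, assuming a C++14 compiler. The loop count ITERATIONS and the volatile sink variable are illustrative names, not from the question, and with optimizations enabled the compiler will usually lower both loops to the same instructions, so expect near-identical timings.

#include <chrono>
#include <cstdint>
#include <iostream>

volatile std::uint32_t sink; // volatile keeps the stores from being optimized away

int main() {
    constexpr std::uint32_t ITERATIONS = 100'000'000;

    auto t0 = std::chrono::steady_clock::now();
    for (std::uint32_t n = 0; n < ITERATIONS; ++n)
        sink = n / 2;   // division form
    auto t1 = std::chrono::steady_clock::now();
    for (std::uint32_t n = 0; n < ITERATIONS; ++n)
        sink = n >> 1;  // shift form
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "n / 2 : " << ms(t1 - t0).count() << " ms\n";
    std::cout << "n >> 1: " << ms(t2 - t1).count() << " ms\n";
}

Compiling once without optimizations and once with -O2 shows how much of the difference, if any, survives optimization.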

2 Answers


Any half-decent compiler will optimize each pair of versions into the same thing. For example, GCC compiles this:

unsigned int half1(unsigned int n) { return n / 2; }
unsigned int half2(unsigned int n) { return n >> 1; }
bool parity1(int n) { return n % 2; }
bool parity2(int n) { return n & 1; }
int mult1(int n) { return n * 8; }
int mult2(int n) { return n << 3; }
void inc1(int& n) { n += 1; }
void inc2(int& n) { n++; }

to

half1(unsigned int):
        mov     eax, edi
        shr     eax
        ret
half2(unsigned int):
        mov     eax, edi
        shr     eax
        ret
parity1(int):
        mov     eax, edi
        and     eax, 1
        ret
parity2(int):
        mov     eax, edi
        and     eax, 1
        ret
mult1(int):
        lea     eax, [0+rdi*8]
        ret
mult2(int):
        lea     eax, [0+rdi*8]
        ret
inc1(int&):
        add     DWORD PTR [rdi], 1
        ret
inc2(int&):
        add     DWORD PTR [rdi], 1
        ret

One small caveat: in the first example, if n could be negative (that is, if it were signed and the compiler couldn't prove it nonnegative), the division and the bit shift are not equivalent, and the division needs a few extra instructions. Other than that, compilers are smart and will optimize operations with constant operands, so use whichever version makes more sense logically and is more readable.
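
To make that caveat concrete, here is a small sketch (my own example, not from the answer) of why / and >> disagree for negative signed values: integer division truncates toward zero, while an arithmetic right shift rounds toward negative infinity, which is exactly what the extra instructions compensate for.

#include <iostream>

int main() {
    int n = -3;
    std::cout << (n / 2)  << '\n'; // -1: division truncates toward zero
    std::cout << (n >> 1) << '\n'; // -2 on typical targets: arithmetic shift rounds toward -infinity
                                   // (right-shifting a negative value is implementation-defined before C++20)
}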

eesiraed

Strictly speaking, in most cases, yes.

This is because bit manipulation is a simpler operation for the CPU to perform: the shift-and-mask circuitry in the ALU is much simpler than a divider or multiplier and needs fewer discrete steps (clock cycles) to complete.

As others have mentioned, any compiler worth a damn will automatically detect arithmetic operations on constant operands that have bitwise analogs (like those in your examples) and will convert them to the appropriate bitwise operations under the hood.

Keep in mind that if the relevant operand (the divisor or multiplier) is a runtime value rather than a compile-time constant, such optimizations generally cannot occur, as sketched below.
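
As a concrete sketch of that last point (my own example, not part of the original answer): when the divisor is a compile-time constant the compiler can strength-reduce the division into a shift, but when the divisor is only known at run time it generally has to emit a real divide instruction.

#include <cstdint>

// Divisor is a compile-time constant: compilers typically lower this to a single shift.
std::uint32_t div_by_constant(std::uint32_t n) {
    return n / 8;
}

// Divisor is only known at run time: the compiler generally has to emit an actual
// division instruction (e.g. div on x86), which is considerably slower than a shift.
std::uint32_t div_by_runtime(std::uint32_t n, std::uint32_t d) {
    return n / d;
}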

Qix - MONICA WAS MISTREATED