-4

I have two variables, a and b, and I have to write an if condition on them:

This is the first approach:

if (a > 0 || b > 0) {
    // do something
}

This is the second approach:

if ((a + b) > 0) {
    // do something
}

Update: Consider that a and b are unsigned. Which will take less execution time: the logical OR (||) or the arithmetic (+) operator?

This condition will iterate around one million times.
Any help on this will be appreciated.

user207421
  • 305,947
  • 44
  • 307
  • 483
Sanjay
  • 1,078
  • 11
  • 15

6 Answers

9

Your second condition is wrong. If a=1, b=-1000, it will evaluate to false, whereas your first condition will evaluate to true. In general you shouldn't worry about speed in these kinds of tests; the compiler optimizes the condition a lot, so a logical OR is very fast. People generally make bigger mistakes than failing to optimize such conditions, so don't try to optimize unless you really know what is going on. The compiler usually does a much better job than any of us.

In principle, the first expression needs 2 CMPs and one OR, whereas the second needs only one CMP and one ADD, so the second should be faster (even though the compiler can short-circuit the first case, that cannot happen 100% of the time). However, in your case the expressions are not equivalent (well, they are for positive numbers...).

vsoftco
  • 55,410
  • 12
  • 139
  • 252
3

I decided to check this for the C language, but identical arguments apply to C++, and similar arguments apply to Java (except that Java defines signed overflow to wrap, whereas in C and C++ it is undefined behavior). The following code was tested (for C++, replace _Bool with bool).

_Bool approach1(int a, int b) {
    return a > 0 || b > 0;
}
_Bool approach2(int a, int b) {
    return (a + b) > 0;
}

And this was the resulting disassembly.

    .file   "faster.c"
    .text
    .p2align 4,,15
    .globl  approach1
    .type   approach1, @function
approach1:
.LFB0:
    .cfi_startproc
    testl   %edi, %edi
    setg    %al
    testl   %esi, %esi
    setg    %dl
    orl %edx, %eax
    ret
    .cfi_endproc
.LFE0:
    .size   approach1, .-approach1
    .p2align 4,,15
    .globl  approach2
    .type   approach2, @function
approach2:
.LFB1:
    .cfi_startproc
    addl    %esi, %edi
    testl   %edi, %edi
    setg    %al
    ret
    .cfi_endproc
.LFE1:
    .size   approach2, .-approach2
    .ident  "GCC: (SUSE Linux) 4.8.1 20130909 [gcc-4_8-branch revision 202388]"
    .section    .note.GNU-stack,"",@progbits

Those two are quite different, even considering how clever compilers are these days. Why is that? Well, the reason is quite simple: they aren't identical. If a is -42 and b is 2, the first approach returns true, and the second returns false.

Surely, you may think, a and b should be unsigned. Here is the same code compiled with unsigned arguments.

    .file   "faster.c"
    .text
    .p2align 4,,15
    .globl  approach1
    .type   approach1, @function
approach1:
.LFB0:
    .cfi_startproc
    orl %esi, %edi
    setne   %al
    ret
    .cfi_endproc
.LFE0:
    .size   approach1, .-approach1
    .p2align 4,,15
    .globl  approach2
    .type   approach2, @function
approach2:
.LFB1:
    .cfi_startproc
    addl    %esi, %edi
    testl   %edi, %edi
    setne   %al
    ret
    .cfi_endproc
.LFE1:
    .size   approach2, .-approach2
    .ident  "GCC: (SUSE Linux) 4.8.1 20130909 [gcc-4_8-branch revision 202388]"
    .section    .note.GNU-stack,"",@progbits

It's quite easy to notice that approach1 is better here, because it avoids a pointless addition, which in this case is in fact wrong. The compiler even optimizes the test to (a | b) != 0, which is a correct optimization.

In C, unsigned overflow is defined (it wraps around), so the compiler has to handle the case where the integers are very large (try UINT_MAX and 1 for approach2). Even assuming you know the numbers won't overflow, it's easy to see that approach1 is faster, because it simply tests whether both variables are 0.

Trust your compiler; it will optimize better than you, and without the small bugs you could accidentally write. Write code instead of asking yourself whether i++ or ++i is faster, or whether x >> 1 or x / 2 is faster (by the way, x >> 1 doesn't do the same thing as x / 2 for signed numbers, because of rounding behavior).

If you want to optimize something, optimize the algorithms you use. Instead of a worst-case O(N⁴) sorting algorithm, use a worst-case O(N log N) algorithm. This will actually make the program faster, especially if you sort reasonably big arrays.

Konrad Borowski
  • 11,584
  • 3
  • 57
  • 71
1

The real answer for this is always to do both and actually test which one runs faster. That's the only way to know for sure.

I would guess the second one would run faster, because an add is a quick operation, whereas a mispredicted branch causes pipeline flushes and all sorts of nasty things. It would be data-dependent, though. But the two aren't exactly the same: if a or b is allowed to be negative, or big enough to overflow, it isn't the same test.

Gabe Sechan
  • 90,003
  • 9
  • 87
  • 127
1

Well, I wrote some quick code and disassembled:

public boolean method1(final int a, final int b) {
    if (a > 0 || b > 0) {
        return true;
    }
    return false;
}

public boolean method2(final int a, final int b) {
    if ((a + b) > 0) {
        return true;
    }
    return false;
}

These produce:

public boolean method1(int, int);
  Code:
     0: iload_1       
     1: ifgt          8
     4: iload_2       
     5: ifle          10
     8: iconst_1      
     9: ireturn       
    10: iconst_0      
    11: ireturn       

public boolean method2(int, int);
  Code:
     0: iload_1       
     1: iload_2       
     2: iadd          
     3: ifle          8
     6: iconst_1      
     7: ireturn       
     8: iconst_0      
     9: ireturn       

So as you can see, they're pretty similar; the only difference is performing the a > 0 test versus a + b. It looks like the || got optimized away. What the JIT compiler optimizes these to, I have no clue.

If you wanted to get really picky:

Option 1: always 1 load and 1 comparison; possibly 2 loads and 2 comparisons

Option 2: always 2 loads, 1 addition, and 1 comparison

So really, which one performs better depends on what your data looks like and whether there is a pattern the branch predictor can use. If so, I could imagine the first method running faster because the processor basically "skips" the checks, and in the best case only has to perform half the operations the second option will. To be honest, though, this really seems like premature optimization, and I'm willing to bet that you're much more likely to get more improvement elsewhere in your code. I don't find basic operations to be bottlenecks most of the time.

awksp
  • 11,764
  • 4
  • 37
  • 44
0

If a and b have the potential to be negative numbers, the two choices are not equivalent, as has been pointed out by the answer by @vsoftco.

If both a and b are guaranteed to be non-negative integers, I would use

if ( (a|b) > 0 )

instead of

if ( (a+b) > 0 )

I think bitwise | is faster than integer addition.

Update: Use bitwise `|` instead of `&`.

R Sahu
  • 204,454
  • 14
  • 159
  • 270
  • With bitwise `|`, you actually have an if statement equivalent to `if(a >= 0 && b >= 0 && !(a == 0 && b == 0))`. – Justin May 18 '14 at 04:59
  • @Quincunx, the assembly code generated for `+` by gcc 4.7.3 on a Linux machine is just one line of assembly instruction - `add`. The assembly code for the bitwise `|` is also one line of assembly instruction - `or`. I have no idea why you think the bitwise `|` is as complex as you have suggested. – R Sahu May 18 '14 at 05:23
  • I'm not suggesting it is complex. I'm just saying that that if statement is equivalent to the one I posted. Think about it. If one of `a` or `b` is negative, the result of the bitwise `|` is negative, so it fails the comparison. However, one of `a` or `b` can be `0` and the if statement still accepts it. So the code works if both `a` and `b` are nonzero integers. I'm basically saying that `((a|b)>0)` is remarkably similar to `(a > 0 && b > 0)` – Justin May 18 '14 at 05:25
  • @Quincunx that's why I prefaced my answer with "If both `a` and `b` are guaranteed to be positive integers". – R Sahu May 18 '14 at 05:27
  • I'm saying that you can improve that range by saying, "If both `a` and `b` are nonzero integers". – Justin May 18 '14 at 05:29
  • Regarding "I think bitwise `|` is faster than integer addition.": http://stackoverflow.com/a/15668887/1896169 – Justin May 18 '14 at 05:31
0

Two things:

  1. (a|b) > 0 is strictly better than (a+b) > 0, so replace it.

  2. The above two only work correctly if the numbers are both unsigned.

user541686
  • 205,094
  • 128
  • 528
  • 886