
I've seen in a fair amount of Java code something like this:

int blah = ~someFunc() + 1;

instead of

int blah = -1 * someFunc();

Is there any real difference in the output here? Does javac treat these two cases differently? It seems the compiler should be able to compile both lines to the same bytecode.

Edit: This is not about how bit flips work; it's a question about why an implementer might choose one approach to the operation over the other.
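For background on why the two forms are interchangeable at all: in two's-complement arithmetic, `~x + 1 == -x` for every `int`, including the overflow case `Integer.MIN_VALUE`. A minimal sketch verifying this (the `someFunc` stub and its return value are stand-ins, not from the original post):

```java
public class NegationForms {
    static int someFunc() { return 42; }  // hypothetical stand-in for the original someFunc

    public static void main(String[] args) {
        int[] samples = { 0, 1, -7, someFunc(), Integer.MAX_VALUE, Integer.MIN_VALUE };
        boolean allAgree = true;
        for (int x : samples) {
            // Two's-complement identity: ~x + 1 == -x == -1 * x for every int,
            // including Integer.MIN_VALUE, whose negation overflows back to itself.
            allAgree &= (~x + 1 == -x) && (-1 * x == -x);
        }
        System.out.println("all forms agree: " + allAgree);
    }
}
```

So whichever form the programmer picks, the result is identical for all 32-bit inputs; the question is purely one of style and instruction cost.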

Edit2: I ran `javap -c` on a simple test; here's the JVM bytecode:

int one = -1 * someFunc();
0: iconst_m1     
1: invokestatic  #2                  // Method someFunc:()I
4: imul          
5: istore_1      

int two = ~someFunc() + 1;
6: invokestatic  #2                  // Method someFunc:()I
9: iconst_m1     
10: ixor          
11: iconst_1      
12: iadd          
13: istore_2    

So the bit-flip version uses two more JVM instructions (iconst_m1, ixor for the flip, then iconst_1, iadd for the plus one, versus iconst_m1, imul), but how that translates into machine cycles is probably very architecture-specific.
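For comparison, the plain unary minus mentioned in the comments compiles to a single arithmetic instruction. This listing is my own (the offsets assume it follows the two statements above), not from the original post, but it is what `javap -c` typically emits:

```
int three = -someFunc();
14: invokestatic  #2                  // Method someFunc:()I
17: ineg
18: istore_3
```

That makes `ineg` the shortest of the three forms at the bytecode level, though the JIT may well canonicalize all of them.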

joeblubaugh
  • The compiler/interpreter may not be smart enough to distinguish in all cases, but I think what you're seeing is just programmer's whim of choice on how they do it. In the grand scheme of things, `*` *might* be a little less efficient, but I doubt noticeably so. Maybe the programmer is showing off their bitwise operation knowledge? ;) – lurker Jun 24 '15 at 15:35
  • possible duplicate of [How does the bitwise complement (~) operator work?](http://stackoverflow.com/questions/791328/how-does-the-bitwise-complement-operator-work) – Raman Shrivastava Jun 24 '15 at 15:36
  • 2
    @RamanShrivastava I don't think it's a duplicate. The OP isn't asking *how* `~` is used to negate a value (how it works). The OP is wondering *why* it's being used instead of `-1 *`. – lurker Jun 24 '15 at 15:36
  • 4
    Or how about just `int blah = -someFunc();`? – Fred Larson Jun 24 '15 at 15:38
  • 9
    I would consider it to be a nano-optimization that detracts from readability. I'd rather see int blah = -someFunc(); – duffymo Jun 24 '15 at 15:38
  • It is just a bit faster :p, but when you are working with other developers it's better to keep your code simple and write the usual form. – yahya el fakir Jun 24 '15 at 15:38
  • 1
    My guess would be it's less efficient. It's two operations instead of one. A smart compiler would spot the multiply by -1 and use a single, simple instruction. Ones complement and add 1 is harder to spot, and is probably left as two operations. – markspace Jun 24 '15 at 15:40
  • 1
    I really doubt an explicit complement and increment is any faster than using the negation operator. It might be worse. – Fred Larson Jun 24 '15 at 15:40
  • Why are we guessing which one might be faster when we could just test? – Ron Thompson Jun 24 '15 at 15:47
  • Well, one reason we are speculating, @RonThompson, is because different compilers and different instruction sets could yield different results. So to try to address the *general* case, it's better to treat it as a thought problem. Testing would yield a single data point that might not be universally valid. – markspace Jun 24 '15 at 15:56
  • Of course, it's just *speculation* that the original programmer was seeking for *faster*. Who knows why they really chose the form they did for this operation. – lurker Jun 24 '15 at 16:34
  • 1
    I would expect the JIT to turn these all into the same thing, but javac to leave them alone. In any event, I would expect this to be completely unimportant. – Louis Wasserman Jun 24 '15 at 18:52

1 Answer


Speaking strictly of instruction costs, the flip-and-add version could indeed be faster on some processors. For example, Agner Fog's instruction tables (http://www.agner.org/optimize/instruction_tables.pdf) give roughly these latencies on many x86 cores:

IMUL r32 = 3 cycles
ADD = 1 cycle
NOT = 1 cycle

So NOT followed by ADD might save a cycle over the multiply. On the other hand, each function call carries its own overhead (moving arguments and return values between registers and the stack), which adds a cost that likely swamps the difference.
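As the comments note, measuring this properly is hard: a serious benchmark should use a harness like JMH to account for JIT warm-up. Purely as an illustration of the naive approach (the loop counts and class name are my own, and the timings it prints should not be taken as meaningful):

```java
public class NaiveNegBench {
    public static void main(String[] args) {
        int n = 10_000_000;
        long acc = 0;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) acc += -1 * i;   // multiply form
        long mul = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) acc += ~i + 1;   // flip-and-add form
        long flip = System.nanoTime() - t0;

        // acc is printed so the JIT cannot dead-code-eliminate the loops.
        System.out.println("acc=" + acc + " mul=" + mul + "ns flip=" + flip + "ns");
    }
}
```

In practice the JIT is likely to canonicalize both loop bodies to the same negation, so any measured difference here is mostly noise; that is precisely why the thread's "test it" suggestion yields only a single data point.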

marcelv3612
  • 2
    On Intel, a `neg` opcode only requires one op. And `add 1` is going to be two ops on many systems. So this could vary a lot depending on your machine and your compiler. The comment above about 'nano-optimization' is probably the best. It's dumb because it hides your true intent, and *at best* only saves you one op. – markspace Jun 24 '15 at 16:00