2

I was curious if anyone knows which of the following executes faster (I know this seems like a weird question, but I'm trying to shave as much time and as many resources as possible off my program).

int i=0;

i+=1; 

or

int i;

i=1;

I was also curious about which comparison is faster:

//given some integer i
// X is some constant
i < X+1

or

i<=X

For those of you who already posted answers, I'm sorry; I edited the question so the first section is correct. I meant for i to be initialized to 0. Again, sorry for the confusion.

seld
  • 49
  • 2
  • 12
    If you're really that concerned about optimization (and these sorts of things won't help you anyway), check the binaries generated by your compiler and profile the heck out of the application. – Carl Norum Mar 26 '10 at 17:49

6 Answers

16

The first operation probably has no meaning because, unless i is static, you've left i uninitialized.

You're misguided and focusing on the wrong things. Guessing isn't going to get you anywhere; bring out a profiler, profile your code, and find out with data which parts need to be optimized. Optimizations are design changes, whether they are different data structures or different algorithms.

You heavily underestimate compilers. Nothing you do with such tiny changes is going to make a difference; the compiler will turn both into whatever it decides is fastest. If you want an integer set to one, just write int i = 1; and live your life. If you want to compare whether an integer is less than or equal to X, then just say so: i <= X. Write clean, readable code. On a side note, your two comparisons are not the same when X is at its maximal value; you'll overflow when you add one.
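
A minimal sketch of that edge case, assuming X is a signed int constant; the function names here are just for illustration:

#include <climits>

const int X = INT_MAX;  // suppose the constant happens to be the maximum int

bool in_range_plus_one(int i) {
    return i < X + 1;   // X + 1 overflows a signed int: undefined behavior
}

bool in_range_le(int i) {
    return i <= X;      // always well defined (and true for every int i when X == INT_MAX)
}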

If you're really serious, again: pull out a profiler. Another thing to do is look at the generated assembly and see which instructions the compiler emits. If you don't know how to do that, chances are you're not in a position to need this kind of optimization. :/
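
For what it's worth, one way to do both of those things, assuming g++ on a Unix-like system (the file and program names are hypothetical):

# Dump the optimized assembly for one source file and read what was actually emitted
g++ -O2 -S -o loop.s loop.cpp

# Or build with profiling instrumentation, run the program, then see where the time went
g++ -O2 -pg -o prog prog.cpp
./prog
gprof prog gmon.out > profile.txt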

GManNickG
  • 494,350
  • 52
  • 494
  • 543
0

If I were you I'd be more worried about which one gives you the result you want, because the two do totally different things. In fact, as you've written them, only one has well-defined results.

Edward Strange
  • 40,307
  • 7
  • 73
  • 125
0

In the first example, you'd probably be better off with int i = 1;, though it's less about speed than correctness (e.g., if i has automatic storage, int i; i += 1; gives undefined behavior).
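
A small sketch of that distinction (my own illustration, not code from the question):

void example() {
    int a;       // automatic storage, never initialized
    a += 1;      // reads an indeterminate value: undefined behavior

    int b = 0;   // initialized first, then incremented: well defined, b is now 1
    b += 1;

    int c = 1;   // clearest of all: just start at the value you want
}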

On the second, if there's any difference, it probably favors the i <= X; (but there's a good chance there won't be any difference here either).

Jerry Coffin
  • 476,176
  • 80
  • 629
  • 1,111
  • In the second case, if there's any speed difference, get a better compiler. Nothing you'll be able to do will help as much as getting a compiler that any compiler writer in the past thirty years might not be ashamed of. – David Thornley Mar 26 '10 at 18:15
0

The first may be faster, since it invokes undefined behavior and the optimizer can basically treat it as if it doesn't exist. However, I suspect that that's not what you want.

Stephen Canon
  • 103,815
  • 19
  • 183
  • 269
  • I don't think it's undefined behavior, just that the value is unspecified. (Since integers don't have trap values.) I'm probably wrong though. – GManNickG Mar 26 '10 at 18:27
  • Integers can have trap values -- unsigned char cannot, and by extension char *probably* can't either, but other types can. (C99, §6.7.8/10: "If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate.", §3.17.2/1: "indeterminate value: either an unspecified value or a trap representation") – Jerry Coffin Mar 26 '10 at 18:55
0

I'm assuming that the variables you're talking about are local (not global).

If that's the case, the first thing you wrote (int i; ... i++; inside a function) is wrong. I wouldn't trust it. Ever.

If i is global then I might trust it, but I still wouldn't trust it unless I knew how the compiler and run time environment worked. Even then initializing it to 0 should be seen by the optimizer and made free.

If you're just using simple integer types then your compiler should take care of this kind of operation. The only difference you should ever be able to detect here is that the registers some of your variables are stored in may get swapped around, and this almost never matters. It'd just be a side effect of throwing data at the compiler's register allocator in a different order, and I don't even know why I brought it up. Oh yeah, I remember why I brought it up: because if you did happen to look at the assembly and see a difference in the generated code, you might not believe me.

With the comparison, if X is a variable then with absolutely no compiler optimization you might come out better with the second one.
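
For concreteness, here is what the two conditions look like in a loop when X is a variable; this is only an illustration, and with optimization enabled a decent compiler will usually emit the same code for both:

void count_up(int x) {
    // An unoptimized build may recompute x + 1 on every iteration here ...
    for (int i = 0; i < x + 1; ++i) { /* work */ }

    // ... while this form compares against x directly.
    for (int i = 0; i <= x; ++i) { /* work */ }
}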

I'm OCD about micro-optimization, and I can assure you that barring a dumb compiler this is not going to win you anything.

nategoose
  • 12,054
  • 27
  • 42
  • @nategoose thank you, I was mostly curious because I realized that the comparison would be inside a loop, and eventually I'd be looking at 2*10^8 objects (or more) to compare, one at a time. I'm looking into trying to multi-thread my program and using a better set of libraries for the operations I'm doing (one of them relies on the GCD (greatest common divisor) function, though the only way I can get more efficient there is if I use some of the mathematical research papers). I'm also looking into trying to figure out how to apply a shell sort; at the moment I'm using a simple C++ quicksort. – seld Mar 26 '10 at 22:26
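
For reference, a minimal iterative Euclidean-algorithm sketch of the kind of GCD function mentioned in the comment above (the name and types are just illustrative):

// gcd(a, b) for non-negative integers via repeated remainders
unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;
}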
0

There are people who are experts on performance. I don't think any of them would advise you to start by looking at such a low level. Rather, they would say: write your program cleanly, and then profile. I have had people beg me to guess what could be making their program slow, and I can guess about as well as anyone, which is to say, badly. Here's the method I use to find out what needs to be optimized.

Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135