
This was an interview question asked by a senior manager.

Which is faster?

while(1) {
    // Some code
}

or

while(2) {
    // Some code
}

I said that both have the same execution speed, since the expression inside the while must ultimately evaluate to true or false. Here, both evaluate to true, and there are no extra conditional instructions inside the while condition. So both will have the same speed of execution, and I would prefer while (1).

But the interviewer said confidently: "Check your basics. while(1) is faster than while(2)." (He was not testing my confidence.)

Is this true?

See also: Is "for(;;)" faster than "while (TRUE)"? If not, why do people use it?

Community
Nikole

  • A half-decent compiler will optimise both forms to nothing. –  Jul 20 '14 at 07:36
  • In an optimized build, every `while(n)` with n != 0, and every `for(;;)`, will be translated to an assembly endless loop with a label at the beginning and a goto at the end. Exactly the same code, the same performance. – Alex F Jul 20 '14 at 07:48
  • Not surprising: a stock optimized build produces `0x100000f90: jmp 0x100000f90` (address varies, obviously) for *both* snippets. The interviewer probably hedged on a register test vs. a simple flagged jump. Both the question and their supposition are lame. – WhozCraig Jul 20 '14 at 07:49
  • This question by the interviewer falls under the same auspices as http://dilbert.com/strips/comic/1995-11-17/ - you will meet someone who genuinely believes what they are saying, regardless of the quotient of stupidity in their statement. Simply choose from the following: a deep breath, swear, laugh, cry, or some combination of the above :) – GMasucci Jul 22 '14 at 08:13
  • @Mike W: one can wonder what a compiler ought to do: translate it to a halt statement, or consider that the loop exits after infinite time and optimize away the infinite delay? –  Jul 23 '14 at 07:24
  • @YvesDaoust Apparently, compilers are allowed to optimize it away completely (see [this c++ question](http://stackoverflow.com/q/3592557/509868) or [this discussion](https://groups.google.com/forum/#!topic/comp.std.c/vFfu3wMM0ZI) - search for "C1X" for a quote from the Standard), and some compilers actually do it (the latter link mentions which ones). – anatolyg Jul 23 '14 at 11:06
  • @this: so how do you explain that -1 is slower than 0? – d33tah Jul 23 '14 at 11:09
  • The question does not ask which iterates more quickly at runtime, or which compiles in a shorter time. It simply asks which is faster. Therefore, whichever one comes first is clearly the faster of the two; so while(1) must be faster. DUH – Jeff Wolski Jul 23 '14 at 20:49
  • Maybe they were looking for you to clarify whether they meant compile time vs. execution time. I'm not certain that the two have identical compile times, as I could see some compilers having an early-out for "while(1)" and "while(true)" and otherwise passing the expression on to a more generic const-expression evaluator. – MooseBoys Jul 25 '14 at 21:38
  • As @MooseBoys mentions, my first thought was also that he may have been talking about build time rather than run time. It would be nice if this could be investigated somehow. – Dennis Jaheruddin Jul 28 '14 at 15:55
  • @pts This question might be just to attract votes, but it's _precisely_ ___on___ _topic_ for Stack Overflow. – Donal Fellows Jul 30 '14 at 14:01
  • @Mike W: Nope, it would leave the loops intact. The compiler shouldn't change the meaning of the program; if the program doesn't terminate, it should stay that way. – Rok Kralj Sep 18 '14 at 08:57
  • This question was just plagiarized on Quora: https://www.quora.com/Programming-Which-is-faster-while-1-or-while-2 – Keith Thompson Apr 30 '19 at 23:28
  • This is an old question, but for any future interviewers out there: `while (42) { ... }` is definitely the slowest, under any optimization level, with any compiler. And actually, `return 42;` runs slower than returning any other value as well. After all... Actually, on a serious note, if I really did see `while (42)` I would know exactly what the author was thinking, and be able to move on right away too. – owacoder Jul 06 '19 at 01:09
  • You should always ask why when you encounter this sort of thing. There's a lot of misinformation out there, and interviewers are not immune. The interviewer is dead flat wrong in this case. Compiler writer speaking here. It is also a rather stupid interview question: what difference does it make in practice? Who is really going to write `while (2)`? Just an opportunity for the interviewer to show off his ignorance. – user207421 Sep 27 '19 at 07:00
  • @SOFe Perhaps if done frequently, one can use `#define a while(9){`. Pays for itself after a couple of uses. All you have to do is sacrifice readability. – General Grievance Mar 01 '21 at 14:47
  • `for(;;)` is definitely faster. Fewer characters to type, fewer to parse. :-) – Jens Mar 04 '22 at 22:50
  • @Jens: Fewer characters, but one more token! – Adrian McCarthy Mar 21 '22 at 18:34
  • For the record, ISO C and C++ differ on this. In C, an infinite loop with a constant expression as the control is well-defined behaviour. In C++ it's not; the loop needs a side effect or volatile access. So @anatolyg's C++ link about compilers removing infinite loops doesn't directly apply to this question, unless it mentions that C vs. C++ difference. – Peter Cordes Nov 09 '22 at 09:07
  • It doesn't matter whether you use while (1) or while (2). – Supergamer Jan 08 '23 at 15:37

23 Answers


Both loops are infinite, but we can see which one takes more instructions/resources per iteration.

Using gcc, I compiled the two following programs to assembly at varying levels of optimization:

int main(void) {
    while(1) {}
    return 0;
}

int main(void) {
    while(2) {}
    return 0;
}

Even with no optimizations (-O0), the generated assembly was identical for both programs. Therefore, there is no speed difference between the two loops.

For reference, here is the generated assembly (using gcc main.c -S -masm=intel with an optimization flag):

With -O0:

    .file   "main.c"
    .intel_syntax noprefix
    .def    __main; .scl    2;  .type   32; .endef
    .text
    .globl  main
    .def    main;   .scl    2;  .type   32; .endef
    .seh_proc   main
main:
    push    rbp
    .seh_pushreg    rbp
    mov rbp, rsp
    .seh_setframe   rbp, 0
    sub rsp, 32
    .seh_stackalloc 32
    .seh_endprologue
    call    __main
.L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

With -O1:

    .file   "main.c"
    .intel_syntax noprefix
    .def    __main; .scl    2;  .type   32; .endef
    .text
    .globl  main
    .def    main;   .scl    2;  .type   32; .endef
    .seh_proc   main
main:
    sub rsp, 40
    .seh_stackalloc 40
    .seh_endprologue
    call    __main
.L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

With -O2 and -O3 (same output):

    .file   "main.c"
    .intel_syntax noprefix
    .def    __main; .scl    2;  .type   32; .endef
    .section    .text.startup,"x"
    .p2align 4,,15
    .globl  main
    .def    main;   .scl    2;  .type   32; .endef
    .seh_proc   main
main:
    sub rsp, 40
    .seh_stackalloc 40
    .seh_endprologue
    call    __main
.L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

In fact, the assembly generated for the loop is identical for every level of optimization:

 .L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

The important bits being:

.L2:
    jmp .L2

I can't read assembly very well, but this is obviously an unconditional loop. The jmp instruction unconditionally sends the program back to the .L2 label, without even comparing a value against true, and of course immediately does so again, until the program is somehow terminated. This directly corresponds to the C/C++ code:

L2:
    goto L2;

Edit:

Interestingly enough, even with no optimizations, the following loops all produced the exact same output (unconditional jmp) in assembly:

while(42) {}

while(1==1) {}

while(2==2) {}

while(4<7) {}

while(3==3 && 4==4) {}

while(8-9 < 0) {}

while(4.3 * 3e4 >= 2 << 6) {}

while(-0.1 + 02) {}

And, to my amazement, even these:

#include <math.h>

while(sqrt(7)) {}

while(hypot(3,4)) {}

Things get a little more interesting with user-defined functions:

int x(void) {
    return 1;
}

while(x()) {}

#include <math.h>

double x(void) {
    return sqrt(7);
}

while(x()) {}

At -O0, these two examples actually call x and perform a comparison for each iteration.

First example (returning 1):

.L4:
    call    x
    testl   %eax, %eax
    jne .L4
    movl    $0, %eax
    addq    $32, %rsp
    popq    %rbp
    ret
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

Second example (returning sqrt(7)):

.L4:
    call    x
    xorpd   %xmm1, %xmm1
    ucomisd %xmm1, %xmm0
    jp  .L4
    xorpd   %xmm1, %xmm1
    ucomisd %xmm1, %xmm0
    jne .L4
    movl    $0, %eax
    addq    $32, %rsp
    popq    %rbp
    ret
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

However, at -O1 and above, they both produce the same assembly as the previous examples (an unconditional jmp back to the preceding label).

TL;DR

Under GCC, the different loops are compiled to identical assembly. The compiler evaluates the constant values and doesn't bother performing any actual comparison.

The moral of the story is:

  • There exists a layer of translation between C source code and CPU instructions, and this layer has important implications for performance.
  • Therefore, performance cannot be evaluated by only looking at source code.
  • The compiler should be smart enough to optimize such trivial cases. Programmers should not waste their time thinking about them in the vast majority of cases.
Mike 'Pomax' Kamermans
ApproachingDarknessFish
  • Maybe the interviewer wasn't using gcc. – M.M Jul 20 '14 at 10:39
  • @MattMcNabb That's a good point; unfortunately I don't have any other compilers installed on my system. If anyone can test with clang, MSVC, or anything else, I'd be curious to see the results. – ApproachingDarknessFish Jul 21 '14 at 00:50
  • @Matt McNabb That's a good point, but if the interviewer was relying on compiler-specific optimisations, then they need to be very explicit about that in their question, and they need to accept the answer "there is no difference" as being correct for some (most?) compilers. – Jonathan Hartley Jul 21 '14 at 11:22
  • For the sake of removing any doubt, I tested this in clang 3.4.2, and both loops produce the same assembly at every `-O` level. – Martin Tournoij Jul 21 '14 at 12:08
  • I don't find this at all surprising, since everything you have placed in the conditional section of the loops is a compile-time constant. As such, I would suspect a compiler would see that the loops will always be true or false and either, respectively, simply `jmp` back to the beginning or remove the loop completely. – sherrellbc Jul 21 '14 at 12:51
  • @Carpetsmoker: That removes any doubt about the behavior of clang 3.4.2 in the particular configuration in which you used it. – Keith Thompson Jul 21 '14 at 15:08
  • Interesting answer. I wonder what happens if you hint the compiler to `inline` your user function... – CompuChip Jul 22 '14 at 08:47
  • As for user-defined functions, [this](http://stackoverflow.com/questions/24708114/how-can-the-compile-time-be-exponentially-faster-than-run-time) answer provides an explanation. – Tejas Kale Jul 22 '14 at 10:04
  • FYI, `.seh_endproc` and `.ident "GCC: (tdm64-2) 4.8.1"` are not part of the assembly language for the loop. The first of these marks the end of the current function, and the second annotates the object file with the identity of the compiler that created it. (Normally there would be a "function epilogue", clearing the stack frame and issuing a `ret` instruction, between the loop and the `.seh_endproc`, but the compiler has realized that control will never get past the infinite loop, and so has omitted it as unnecessary.) – zwol Jul 22 '14 at 20:24
  • `so measuring their "speed" is a bit nonsensical`, you called? – nonsensickle Jul 22 '14 at 21:59
  • You've shown correlation but claimed causation. It could be that GCC optimized this way because it saw that nothing happens inside the loop. – hippietrail Jul 26 '14 at 04:26
  • @hippietrail I don't see how the content (or lack thereof) of the loop could possibly affect these optimizations (barring the possibility of any `break` statements), but I just tested it anyway, and no: even with code inside the loop, the jump is absolute and unconditional for both `while(1)` and `while(2)`. Feel free to test the others yourself if you're actually concerned. – ApproachingDarknessFish Jul 26 '14 at 04:35
  • The basic premise here appears to be: all compilers, at minimum, will choose `cmp eax, 0` (or the local machine's equivalent of "result equals zero") as the preferred algorithm, and advanced analysis (optimizations) will almost certainly yield an unconditional jump. The odds of a compiler doing anything else seem practically absurd, since this is the easiest method of implementing the tests. – sfdcfox Jul 27 '14 at 06:29
  • This example program is too minimalistic; read this: http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html . Compilers are ALLOWED to do ANYTHING because the loop does not terminate here: they could skip it altogether, or spawn butterflies. – v.oddou Jul 29 '14 at 06:54
  • @v.oddou That's just about undefined behavior; it says nothing about infinite loops. – NobodyNada Dec 26 '15 at 02:51
  • [**`gcc -O0` doesn't really mean *no* optimizations**](http://stackoverflow.com/questions/33278757/disable-all-optimization-options-in-gcc/33284629#33284629). It means just the minimum needed to produce braindead debuggable code, still going through all the intermediate representations gcc uses internally. Making even worse code than `-O0`, more like a literal translation of the source, is not a feature anyone's ever implemented. The goal for `-O0` is to compile fast and make code you can always single-step line by line in a debugger. `-Og` is similar: optimize for debugging. – Peter Cordes Apr 02 '16 at 05:52
  • If the interviewer was not using `gcc`, they should have asked **which version is faster on *my* compiler**. – The Vee Oct 16 '16 at 09:15
  • For conditions such as `while (2==2)` and `while(3==3 && 4==4)` I would have created a definition for while so I could write `as_long_as(2==2)`. – bart s Aug 11 '21 at 10:17

Yes, while(1) is much faster than while(2), for a human to read! If I see while(1) in an unfamiliar codebase, I immediately know what the author intended, and my eyeballs can continue to the next line.

If I see while(2), I'll probably halt in my tracks and try to figure out why the author didn't write while(1). Did the author's finger slip on the keyboard? Do the maintainers of this codebase use while(n) as an obscure commenting mechanism to make loops look different? Is it a crude workaround for a spurious warning in some broken static analysis tool? Or is this a clue that I'm reading generated code? Is it a bug resulting from an ill-advised find-and-replace-all, or a bad merge, or a cosmic ray? Maybe this line of code is supposed to do something dramatically different. Maybe it was supposed to read while(w) or while(x2). I'd better find the author in the file's history and send them a "WTF" email... and now I've broken my mental context. The while(2) might consume several minutes of my time, when while(1) would have taken a fraction of a second!

I'm exaggerating, but only a little. Code readability is really important. And that's worth mentioning in an interview!

Chris Culter
  • My thoughts exactly, and precisely why while (1) is faster, lol. Although I think it was simply the old case of a manager trying to convince himself that he knows about coding when he does not; we've all seen it, everybody wants to be a jedi. – ekerner Jul 25 '14 at 04:54
  • Absolutely, this is NO exaggeration at all. Definitely `svn annotate` or `git blame` (or whatever) will be used here, and in general it takes minutes to load a file's blame history. Then, just to finally decide "ah, I understand, the author wrote this line when fresh out of high school", you've lost 10 minutes... – v.oddou Jul 29 '14 at 06:08
  • Voted down because I'm sick of conventions. The developer has this puny opportunity to write a number he likes, then someone boring comes and nags about why it isn't `1`. – Utkan Gezer Aug 11 '14 at 03:34
  • I wasted far more time reading this drivel (which isn't a valid SO answer yet got 162 votes, more than the excellent answer by Keith Thompson) than I would ever spend looking at `while (2)`. Code readability my left cheek ... for that, write `while (true)`. (And I feel very sad for Bogdan.) – Jim Balter Sep 18 '14 at 09:39
  • Downvoted due to rhetorics. Reading while(1) is exactly the same speed as while(2). – hdante Nov 29 '14 at 12:05
  • `while(l)` looks a lot like `while(1)`... Even innocent-looking statements can hide subtle bugs. – chqrlie Dec 12 '16 at 02:16
  • I went through the exact same mental process as described in this answer a minute ago when I saw `while(2)` in the code (down to considering "Do the maintainers of this codebase use while(n) as an obscure commenting mechanism to make loops look different?"). So no, you're not exaggerating! – Hisham H M Jan 21 '17 at 18:25
  • @chqrlie A decent text editor will use different colors for numbers and variable names. Even VIM has syntax highlighting. – alx - recommends codidact Feb 20 '19 at 21:41
  • @CacahueteFrito: maybe, but not all editors do. `emacs`, for example, does not use a different color for numbers and variables by default. `qemacs` does not either, and my personal colorizing choices use a slightly different shade of green for variables and numbers, which does not solve the ambiguity. Error messages are not always colorized either. Using unambiguous names solves the problem in all circumstances. – chqrlie Feb 21 '19 at 12:46
  • @chqrlie Emacs defaults are not decent. Anyway, while (true) is faster than both of them :) – alx - recommends codidact Feb 21 '19 at 17:27
  • @CacahueteFrito: I prefer `for(;;)`, which does not require `<stdbool.h>` – chqrlie Feb 21 '19 at 18:54
  • The funniest thing about this is that this one interview question has consumed more developer time through SO than while(2) will ever claim. Is there such a thing as a `micro-styleguide-optimization`? – Anne Quinn Aug 15 '21 at 09:16

The existing answers showing the code generated by a particular compiler for a particular target with a particular set of options do not fully answer the question -- unless the question was asked in that specific context ("Which is faster using gcc 4.7.2 for x86_64 with default options?", for example).

As far as the language definition is concerned, in the abstract machine while (1) evaluates the integer constant 1, and while (2) evaluates the integer constant 2; in both cases the result is compared for equality to zero. The language standard says absolutely nothing about the relative performance of the two constructs.

I can imagine that an extremely naive compiler might generate different machine code for the two forms, at least when compiled without requesting optimization.

On the other hand, C compilers absolutely must evaluate some constant expressions at compile time, when they appear in contexts that require a constant expression. For example, this:

int n = 4;
switch (n) {
    case 2+2: break;
    case 4:   break;
}

requires a diagnostic; a lazy compiler does not have the option of deferring the evaluation of 2+2 until execution time. Since a compiler has to have the ability to evaluate constant expressions at compile time, there's no good reason for it not to take advantage of that capability even when it's not required.

The C standard (N1570 6.8.5p4) says that

An iteration statement causes a statement called the loop body to be executed repeatedly until the controlling expression compares equal to 0.

So the relevant constant expressions are 1 == 0 and 2 == 0, both of which evaluate to the int value 0. (These comparisons are implicit in the semantics of the while loop; they don't exist as actual C expressions.)

A perversely naive compiler could generate different code for the two constructs. For example, for the first it could generate an unconditional infinite loop (treating 1 as a special case), and for the second it could generate an explicit run-time comparison equivalent to 2 != 0. But I've never encountered a C compiler that would actually behave that way, and I seriously doubt that such a compiler exists.

Most compilers (I'm tempted to say all production-quality compilers) have options to request additional optimizations. Under such an option, it's even less likely that any compiler would generate different code for the two forms.

If your compiler generates different code for the two constructs, first check whether the differing code sequences actually have different performance. If they do, try compiling again with an optimization option (if available). If they still differ, submit a bug report to the compiler vendor. It's not (necessarily) a bug in the sense of a failure to conform to the C standard, but it's almost certainly a problem that should be corrected.

Bottom line: while (1) and while (2) almost certainly have the same performance. They have exactly the same semantics, and there's no good reason for any compiler not to generate identical code.

And though it's perfectly legal for a compiler to generate faster code for while(1) than for while(2), it's equally legal for a compiler to generate faster code for while(1) than for another occurrence of while(1) in the same program.

(There's another question implicit in the one you asked: How do you deal with an interviewer who insists on an incorrect technical point. That would probably be a good question for the Workplace site).

Keith Thompson
    "In this case, the relevant (implicit) constant expressions are 1 != 0 and 2 != 0, both of which evaluate to the int value 1" ... This is overly complicated, and inaccurate. The standard simply says that the controlling expression of `while` must be of scalar type and the loop body is repeated until the expression compares equal to 0. It doesn't say that there is an implicit `expr != 0` that is evaluated ... that would require the result of that -- 0 or 1 -- in turn be compared to 0, ad infinitum. No, the expression is compared to 0, but that comparison doesn't produce a value. P.S. I upvoted. – Jim Balter Jul 21 '14 at 08:38
  • 3
    @JimBalter: I see your point, and I'll update my answer to address it. What I meant, though, was that the standard's wording "... until the controlling expression compares equal to 0" implies evaluating ` == 0`; that's what "compares equal to 0" *means* in C. That comparison is part of the semantics of the `while` loop. There is no implication, either in the standard or in my answer, that the result needs to be compared to `0` again. (I should have written `==` rather than `!=`.) But that part of my answer was unclear, and I'll update it. – Keith Thompson Jul 21 '14 at 14:56
  • 1
    ""compares equal to 0" means in C" -- but that language is in the *standard*, not in C ... the implementation does the comparison, it doesn't generate a C language comparison. You write "both of which evaluate to the int value 1" -- but no such evaluation ever occurs. If you look at the code generated for `while (2)`, `while (pointer)`, `while (9.67)`, by the most naive, unoptimized compiler, you won't see any code generated that yields `0` or `1`. "I should have written == rather than !=" -- no, that wouldn't have made any sense. – Jim Balter Jul 21 '14 at 19:30
  • 2
    @JimBalter: Hmmm. I don't mean to suggest that the "compares equal to 0" implies the existence of an `... == 0` C expression. My point is that both the "compares equal to 0" required by the standard's description of `while` loops and an explicit `x == 0` expression logically imply the same operation. And I think that a *painfully* naive C compiler might generate code that generates an `int` value of `0` or `1` for any `while` loop -- though I don't believe any actual compiler is quite that naive. – Keith Thompson Jul 21 '14 at 20:24
  • 1
    "I don't mean to suggest" ... but you did exactly that, and you do it again when you write "I think that a painfully naive C compiler might generate code that generates an int value of 0 or 1 for any while loop". See the code that an *actual* painfully naive compiler generates in anatolyg's answer: an explicit comparison -- *in the target machine, not in C* -- between the controlling expression and zero. That's what the standard calls for, *not* the evaluation of some C expression that yields 0 or 1. If you still don't get it, further discussion won't help, so I'm done here. – Jim Balter Jul 21 '14 at 21:57
  • Would not the language definition permit a compiler to do absolutely anything it wants with either expression? I personally despise the idea of making side-effect-free infinite loops UB (I think a much better definition would be to say that a compiler need not perform any calculation until such time as its results would affect the side-effects produced by a program, and execution time--even if infinite--is not considered a side-effect). To have defined behavior, the code should at minimum define a global volatile unsigned variable and increment it within the loop. – supercat Jul 22 '14 at 16:09
  • @supercat: There is no undefined behavior. C11 added a new rule ([N1570](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf) 6.8.5p6) causing *some* infinite loops to have undefined behavior, but it applies only when the controlling expression is not a constant expression. The behavior of `int main(void){while (1){}}` is well defined; it has no side effects and does not terminate. – Keith Thompson Jul 22 '14 at 16:16
  • @KeithThompson: Would the rule make `int shouldHang; while(1) if (!shouldHang) break;` different from `while(shouldHang);` [recognizing that in either case, if `shouldHang` is observed true once, a compiler would be under no obligation to notice if its value is externally modified]. Also, is there anything that clarifies whether "assuming the loop terminates" is equivalent to "assuming that the loop terminates for some reason allowed by the C standard"? What would you think of my proposed rule/clarification? – supercat Jul 22 '14 at 16:28
  • @supercat: Personally, I think that terminating vs. not terminating should be considered a side effect. Apparently the committee (presumably influenced by compiler vendors) felt that the optimization opportunities made available by breaking that assumption are worth the cost. I dislike the "may be assumed by the implementation to terminate" wording; the standard should describe how programs are required (or not required) to behave, and expressing that in terms of the compiler's internal "assumptions" is IMHO sloppy. If it were up to me, I wouldn't "improve" that rule, I'd nuke it. – Keith Thompson Jul 22 '14 at 16:37
  • @KeithThompson: There is usefulness to allowing code which is unsequenced relative to other code may be run in parallel with that code, and allowing a compiler to consider a side-effect-free loop as unsequenced relative to any code which neither computes values used by the loop (and must thus precede it) nor uses values computed in the loop (and must thus follow it). The value of the latter allowance if compilers are allowed to consider the loops unsequenced even in cases where the compiler can't prove that the loops terminate. Note that as I would propose the spec, ... – supercat Jul 22 '14 at 17:05
  • ...compilers would be required to produce a result consistent with having the compiler start a new parallel execution unit run the code in the endless loop each time the main thread reaches the start of that loop. – supercat Jul 22 '14 at 17:08
  • 12
    Note: This is already a [workplace.se] question: http://workplace.stackexchange.com/questions/4314/how-to-tell-a-interviewer-that-he-is-wrong-on-a-technical-question – Joe Jul 22 '14 at 18:33
  • @KeithThompson: this is a reading everybody should have in mind when talking about termination and optimizations possibilities: http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html and: http://blog.regehr.org/archives/140 – v.oddou Jul 29 '14 at 06:36

Wait a minute. The interviewer, did he look like this guy?

[Image: a Dilbert-style mashup showing the pointy-haired boss as the interviewer]

It's bad enough that the interviewer himself has failed this interview; what if other programmers at this company have "passed" this test?

No. Evaluating the expressions 1 == 0 and 2 == 0 should be equally fast. We could imagine poor compiler implementations where one might be faster than the other, but there's no good reason why one should be.

Even if there's some obscure circumstance where the claim would be true, programmers should not be evaluated based on knowledge of obscure (and in this case, creepy) trivia. Don't worry about this interview; the best move here is to walk away.

Disclaimer: This is NOT an original Dilbert cartoon. This is merely a mashup.

janos
  • No, but really, we can all imagine fairly easily that compilers written by serious companies will produce reasonable code. Let's take the "non-optimized case" /O0; maybe it will end up like what anatolyg has posted. Then it's a question of CPU: will the `cmp` operand run in less cycles comparing 1 to 0 than 2 to 0? How many cycles does it take to execute `cmp` in general? Is it variable according to bit patterns? Are there more "complex" bit patterns that slow down `cmp`? I don't know personally. You could imagine a super idiotic implementation checking bit by bit from rank 0 to n (e.g. n=31). – v.oddou Jul 29 '14 at 06:16
  • That's my point too: the `cmp` operand *should be* equally fast for 1 and 200. Probably we *could imagine* idiotic implementations where this is not the case. But can we imagine a *non-idiotic* implementation where `while(1)` is faster than `while(200)`? Similarly, if in some pre-historic age the only available implementation was idiotic like that, should we fuss about it today? I don't think so; this is pointy-haired-boss talk, and a real gem at that! – janos Jul 29 '14 at 08:55
  • @v.oddou "will the cmp operand run in less cycles comparing 1 to 0 than 2 to 0" -- No. You should learn what a cycle is. "I don't know personally. you could imagine a super idiotic implementation checking bit by bit from rank 0 to n" -- or the other way, still making the interviewer a clueless idiot. And why worry about checking bit by bit? The implementation could be a man in a box who decides to take a lunch break in the middle of evaluating your program. – Jim Balter Sep 18 '14 at 09:56
  • Loading 0 or 1 into a register can be faster than loading 2, depending on CPU architecture. But the guy asking this question clearly doesn't realize that the test in the loop compiles away to nothing. – Joshua Sep 25 '20 at 16:13

Your explanation is correct. This seems to be a question that tests your self-confidence in addition to technical knowledge.

By the way, if you answered

Both pieces of code are equally fast, because both take infinite time to complete

the interviewer would say

But while (1) can do more iterations per second; can you explain why? (this is nonsense; testing your confidence again)

So by answering like you did, you saved some time which you would otherwise waste on discussing this bad question.


Here is the code generated by the compiler on my system (MS Visual Studio 2012), with optimizations turned off:

yyy:
    xor eax, eax
    cmp eax, 1     (or 2, depending on your code)
    je xxx
    jmp yyy
xxx:
    ...

With optimizations turned on:

xxx:
    jmp xxx

So the generated code is exactly the same, at least with an optimizing compiler.

Lightness Races in Orbit
  • 378,754
  • 76
  • 643
  • 1,055
anatolyg
  • 26,506
  • 9
  • 60
  • 134
  • `cmp eax, 1 (or 2, depending on your code)` - that's not entirely true. The operand for `while` is of boolean type, therefore it doesn't correspond completely to operation on integers in assembly (although it can, as long as the semantics are preserved). – SomeWittyUsername Jul 20 '14 at 10:38
  • 28
    This code is really what the compiler outputs on my system. I didn't make it up. – anatolyg Jul 20 '14 at 10:48
  • 1
    I didn't say you made it (well, your compiler surely didn't output the part in parentheses), just saying that comparing to 1,2,3, or 10000 doesn't have a 1-1 correspondence to the `while (some boolean value)` logic. Your compiler might choose this implementation if it preserves the fucntionality – SomeWittyUsername Jul 20 '14 at 10:51
  • 1
    What compiler/compile settings did you use? – ApproachingDarknessFish Jul 21 '14 at 03:46
  • 11
    icepack "The operand for while is of boolean type" -- utter nonsense. Are you the interviewer? I suggest that you become familiar with the C language and its standard before making such claims. – Jim Balter Jul 21 '14 at 07:38
  • 30
    "I didn't make it up." -- Please don't pay any attention to icepack, who is talking nonsense. C has no boolean type (it does have _Bool in stdbool.h, but that's not the same, and the semantics of `while` long preceded it) and the operand of `while` is not of boolean or _Bool or any other specific type. The operand of `while` can be *any* expression ... the while breaks on 0 and continues on non-0. – Jim Balter Jul 21 '14 at 07:49
  • 38
    "Both pieces of code are equally fast, because both take infinite time to complete" made me think of something interesting. The only ways to terminate an infinite loop would be for a bit to be flipped by a charged particle or the hardware failing: changing the statement from `while (00000001) {}` to `while (00000000) {}`. The more true bits you have the less chance of the value flipping to false. Sadly, 2 also only has one true bit. 3, however, would run for significantly longer. This also only applies to a compiler which doesn't always optimize this away (VC++). – Jonathan Dickinson Jul 21 '14 at 12:56
  • @JonathanDickinson Is there any tiniest possible optimization could achieve by eliminating the values that could be flipped/toggle? – mr5 Jul 21 '14 at 13:25
  • 8
    @mr5 nope. In order for a bit flip to actually result in something like this you are talking about execution times in tens of thousands of years. Merely a **thought experiment.** If you hail from an immortal race you may want to use -1 to prevent bit-flipping from affecting your program. – Jonathan Dickinson Jul 21 '14 at 13:47
  • 4
    @JonathanDickinson "you may want to use -1" -- I would use a compiler that optimizes away the constant, which is nearly all of them. – Jim Balter Jul 21 '14 at 19:48
  • "The operand of while can be *any* expression" -- I should have added "of scalar type". – Jim Balter Jul 21 '14 at 19:51
  • @JonathanDickinson It made me to ask that silly question because you say 3 bits would run significantly longer than 2 bits. – mr5 Jul 25 '14 at 03:11
  • 4
    So what's the correct answer if it's a self-confidence test? Coders have to test assumptions every day, being too confident is a recipe for disaster. – Dax Fohl Jul 25 '14 at 15:40
  • You all failed to speak about the CPU, the code is the same, but what about the `cmp` operation ? nobody said it had a constant time execution whatever the value in the operands just yet. You've got to say it, and prove it, THEN your argument would be finished. Compilers are one thing, that's very important but the story doesn't stop here just yet. – v.oddou Jul 29 '14 at 06:27
  • The non-optimal version is 32bit but the optimized could be 64 code, so maybe it will run slower because the jmp will take longer as the register is larger? :-) – Mikhail Oct 16 '14 at 07:02
  • Confidently going against the interviewer seems like a risky strategy to me. It's possible he is just "testing your confidence" but it's IMO far more likely that he really belives what he is saying. – plugwash Jan 13 '17 at 19:06
  • @JimBalter: Jumping in again a few years later."C has no boolean type (it does have _Bool in stdbool.h, but that's not the same, and the semantics of while long preceded it)". C, starting with C99, does have a boolean type, called `_Bool`. It's not defined in `<stdbool.h>`, it's predefined just like `int` is. `<stdbool.h>` defines `bool` as an alias for `_Bool`. On the other hand, the equality and comparison operators yield results of type `int`, not of type `_Bool` -- but I wouldn't say that causes `_Bool` not to be a boolean type. – Keith Thompson May 11 '17 at 01:55
63

The most likely explanation for the question is that the interviewer thinks that the processor checks the individual bits of the numbers, one by one, until it hits a non-zero value:

1 = 00000001
2 = 00000010

If the "is zero?" algorithm starts from the right side of the number and has to check each bit until it reaches a non-zero bit, the while(2) { } loop would have to check twice as many bits per iteration as the while(1) { } loop.

This requires a very wrong mental model of how computers work, but it does have its own internal logic. One way to check would be to ask if while(-1) { } or while(3) { } would be equally fast, or if while(32) { } would be even slower.

Ryan Cavanaugh
  • 209,514
  • 56
  • 272
  • 235
  • 41
    I assumed the interviewer's misapprehension would be more like "2 is an int that needs to be converted to a boolean in order to be used in a conditional expression, whereas 1 is already boolean." – Russell Borogove Jul 21 '14 at 15:30
  • more likely this : http://stackoverflow.com/questions/24848359/which-is-faster-while1-or-while2#comment38583790_24848359 – Mardoxx Jul 22 '14 at 11:33
  • 7
    And if the comparison algorithm starts from the left, it is the other way around. – Paŭlo Ebermann Jul 22 '14 at 19:31
  • 1
    +1, that's exactly what I thought. you could perfectly imagine somebody believing that fundamentally, the `cmp` algorithm of a CPU is a linear bit check with early loop exit on the first difference. – v.oddou Jul 29 '14 at 06:19
  • It seems to require a mental model that assumes something like: CPU clock cycle length is the time it takes for electricity to travel the distance to perform the simplest logic operation (like passing through the first level of XOR gates or something). Of course, clock cycle length is actually an arbitrary time constraint based on the physics of the CPU, a quantum against which the the CPU manufacturer will build a catalog of instructions of varying clock-cycle costs. On second thought, maybe the mental model is just that CPUs can't perform electrical operations on register bits in parallel? – Ben Simmons Dec 24 '14 at 03:41
  • Here's a nice refresher on the equivalence function that, when combined with a "Grand AND", produces a circuit to compare bytes: http://www.calculemus.org/logsoc03/bramki/LogGates1.html – Ben Simmons Dec 24 '14 at 03:53
  • I hadn't even considered that anyone might think that way. Any decent compiler will emit code that just uses an unconditional branch for an infinite loop. The question in my mind is whether there are any compilers so dumb that they only recognize `while(1)` (and maybe `while(42)`) as infinite-loop idioms, not the general case of any non-zero compile-time constant. – Peter Cordes Apr 02 '16 at 06:10
  • According to analotlyg's answer, MSVC in debug mode really does fail to recognize either as an infinite loop, and makes code to zero a register and then compare it with an immediate `1`, which is hilarious. But still equal code size and execution speed for both. Data-dependent execution times are very rare on any CPU for common simple instructions. It does happen for complex instructions (like `div`. In some CPUs, maybe `mul`.) slow-but-constant-time is common in some designs, data-dependent is not. (esp. in out-of-order CPUs, scheduling is hard with variable latency.) – Peter Cordes Apr 02 '16 at 06:11
  • @PeterCordes the purpose of unoptimized compiled code in a debugging context becomes clearer when you think about the difficulties of a debugger single-stepping through optimized code. (What piece of code does this source line I'm breaking on correspond to? Does it correspond to any instructions at all?) – blubberdiblub Aug 30 '19 at 04:57
  • @blubberdiblub: Right, but none of that stops you from optimizing some *within* a C statement like [gcc and clang do in their debug mode (-O0)](https://stackoverflow.com/questions/53366394). e.g. optimizing `while(1) { }` to a `jmp` at the bottom of the loop. But each statement is still compiled to a separate contiguous block of asm. As you can see from the Compiler Explorer's colour highlighting (https://godbolt.org/z/Xoegb4) statements are still matched to blocks of asm in a sane way. Trying to set a breakpoint on the `while(1)` itself would give you the top of the loop which is fine. – Peter Cordes Aug 30 '19 at 05:12
  • @blubberdiblub: Or the way gcc/clang look at it in their internal representation, `while(1)` isn't a conditional at all, it just *is* an unconditional loop. The same way programmers look at it. This takes no asm to implement, other than the loop branch. – Peter Cordes Aug 30 '19 at 05:19
  • @PeterCordes you're making a good point. But apparently MSVC doesn't work like that. I haven't researched whether the C or C++ specs require it to evaluate it away in the resulting code. I suspect they don't and therefore MSVC wouldn't be wrong in producing such code. As long as it fully evaluates constant expressions in contexts where the standard requires it to, there's no problem. (you cannot use `std::log10` or other math functions as `constexpr`in MSVC). If you want an efficient release, you're not going to compile it in debugging mode anyway. – blubberdiblub Aug 30 '19 at 05:32
  • @blubberdiblub: ICO C/C++ specify nothing at all about any kind of "debug mode". That's purely an implementation feature. The ISO standards give the implementation complete freedom as long as the resulting program produces the same *observable* results as the C++ abstract machine. (This is the "as-if" rule.) The only thing that comes remotely close is that `volatile` accesses must really happen as written. The fact that data races on non-atomic variables are UB (in the C11 memory model) means that even the order of changing globals doesn't count as "observable". – Peter Cordes Aug 30 '19 at 05:52
  • @blubberdiblub: I've never heard anyone (other than you) argue that gcc or clang's `-O0` output is not literal enough. And fun fact: gcc even does optimizations like [Why does GCC use multiplication by a strange number in implementing integer division?](//stackoverflow.com/a/41184098) at -O0 when it's within a single statement. e.g. see [Why does integer division by -1 (negative one) result in FPE?](//stackoverflow.com/a/46378352). And BTW, debug-mode performance is occasionally relevant for games or other things that have realtime requirements to be usable at all. – Peter Cordes Aug 30 '19 at 05:55
  • 1
    @PeterCordes "I've never heard anyone (other than you) argue that gcc or clang's `-O0` output is not literal enough." I never said such thing and I didn't have gcc or clang in mind at all. You must be misreading me. I'm just not of your opinion that what MSVC produces is hilarious. – blubberdiblub Aug 30 '19 at 06:00
  • @blub all I meant was that you were suggesting that the loop should be more complex than gcc/ clang make. Didn't mean to accuse you of anything. – Peter Cordes Aug 30 '19 at 06:06
  • @PeterCordes you can very well instruct MSVC to produce optimized code when debugging, so you can have the `while (1)` optimized away even then. However, there is a very real difference between two breakpoints placed at the while and immediately after the `{` inside the loop, at least in MSVC. The first will break only once before entry in the loop and the other at the beginning at each iteration, which may be important when the loop content isn't trivial. And if you enable optimizations for debug, the first breakpoint will be gone. It does not *need* to be as complex as that, but it can help. – blubberdiblub Aug 30 '19 at 06:10
  • @blubberdiblub: interesting. GCC / clang have the same effect but only for loops with a non-trivial condition. After `gcc -g -O0`, if you use GDB to set a breakpoint on the `while(1)` line it actually applies to the first line of the loop body. But a breakpoint on the `while(x)` is effectively just *before* the loop. In the asm, it's a break on the address of the `jmp` before the loop proper that jumps to the loop condition at the bottom. ([Why are loops always compiled into "do...while" style (tail jump)?](//stackoverflow.com/q/47783926)). I'm curious what address MSVC would break on. – Peter Cordes Aug 30 '19 at 06:29
  • 1
    @PeterCordes I was wrong, in MSVC a breakpoint on the `while(1)` does not break before the loop, but on the expression. But it still collapses inside the loop when optimizing the expression away. Take [this code](https://i.paste.pics/6K6WT.png) with lots of breaks. In unoptimized debugging mode you get [this](https://i.paste.pics/6K6YB.png). When optimizing, [many breakpoints fall together or even spill over into the next function](https://i.paste.pics/6K702.png) (the IDE shows the result breakpoints while debugging) - [corresponding disassembly](https://i.paste.pics/6K70P.png). – blubberdiblub Aug 31 '19 at 07:42
36

Of course I do not know the real intentions of this manager, but I propose a completely different view: When hiring a new member into a team, it is useful to know how he reacts to conflict situations.

They drove you into a conflict situation. If that was deliberate, it was clever and the question was a good one. For some industries, like banking, posting your problem to Stack Overflow could even be a reason for rejection.

But of course I do not know, I just propose one option.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Tõnu Samuel
  • 2,877
  • 2
  • 20
  • 30
  • 2
    It is indeed excellent but while(2) vs while(1) is obviously taken from dilbert comics. It CANNOT be invented by somebody in his right mind (how does anybody come up with while(2) as a possible thing to write anyway ?). If your hypothesis was true, definitely you would give a problem so unique that you can google for it. Like "is while(0xf00b442) slower than while(1)", how the bank would find the inerviewee's question otherwise ? do you suppose they are the NSA and have access to keyscore ? – v.oddou Jul 29 '14 at 06:22
29

I think the clue is to be found in "asked by a senior manager". This person obviously stopped programming when he became a manager and then it took him/her several years to become a senior manager. Never lost interest in programming, but never wrote a line since those days. So his reference is not "any decent compiler out there" as some answers mention, but "the compiler this person worked with 20-30 years ago".

At that time, programmers spent a considerable percentage of their time trying out various methods for making their code faster and more efficient, as CPU time on 'the central minicomputer' was so valuable. As did people writing compilers. I'm guessing that the one-and-only compiler his company made available at that time optimized on the basis of 'frequently encountered statements that can be optimized' and took a bit of a shortcut when encountering a while(1) but evaluated everything else, including a while(2). Having had such an experience could explain his position and his confidence in it.

The best approach to get you hired is probably one that enables the senior manager to get carried away and lecture you 2-3 minutes on "the good old days of programming" before YOU smoothly lead him towards the next interview subject. (Good timing is important here - too fast and you're interrupting the story - too slow and you are labelled as somebody with insufficient focus). Do tell him at the end of the interview that you'd be highly interested to learn more about this topic.

OldFrank
  • 888
  • 6
  • 7
22

You should have asked him how he reached that conclusion. Under any decent compiler out there, the two compile to the same asm instructions. So, he should have told you which compiler, to start off. And even so, you would have to know the compiler and platform very well to even make a theoretical educated guess. And in the end, it doesn't really matter in practice, since there are other external factors, like memory fragmentation or system load, that will influence the loop more than this detail.

Rad'Val
  • 8,895
  • 9
  • 62
  • 92
  • 15
    @GKFX If you've given your answer and they tell you that you are wrong there is no reason you can't ask them to explain why. If Anatolyg is correct and it is a test of your self confidence then you should be explaining why you answered the way you did and asking them the same. – P.Turpie Jul 22 '14 at 04:14
  • I meant as the first thing you say to them. It can't be "Why is x faster?" "I don't know; why is x faster?". Obviously, having answered properly, you can then ask. – GKFX Jul 24 '14 at 14:54
20

For the sake of this question, I should add that I remember Doug Gwyn from the C Committee writing that some early C compilers, without the optimizer pass, would generate a test in assembly for while(1) (compared to for(;;), which wouldn't have it).

I would answer the interviewer by giving this historical note and then say that, even though I would be very surprised if any compiler did this, a compiler could behave as follows:

  • without the optimizer pass, the compiler generates a test for both while(1) and while(2)
  • with the optimizer pass, the compiler is instructed to optimize (with an unconditional jump) all while(1) loops because they are considered idiomatic. This would leave while(2) with a test and therefore make a performance difference between the two.

I would of course add to the interviewer that treating while(1) and while(2) as different constructs is a sign of low-quality optimization, as they are equivalent.

ouah
  • 142,963
  • 15
  • 272
  • 331
11

Another take on such a question would be to see whether you have the courage to tell your manager that he/she is wrong! And how softly you can communicate it.

My first instinct would have been to generate the assembly output to show the manager that any decent compiler should take care of it, and if it doesn't, you will submit the next patch for it :)

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
UncleKing
  • 723
  • 4
  • 10
9

Seeing so many people delve into this problem shows exactly why this could very well be a test to see how quickly you jump into micro-optimizing things.

My answer would be: it doesn't matter that much; I'd rather focus on the business problem we are solving. After all, that's what I'm being paid for.

Moreover, I would opt for while(1) {} because it is more common, and other teammates wouldn't need to spend time figuring out why someone went for a number higher than 1.

Now go write some code. ;-)

Bart Verkoeijen
  • 16,545
  • 7
  • 52
  • 56
  • 1
    Unless you are being paid to optimize some real-time code for which you need to shave just 1 or 2 milliseconds to fit into its running time demands. Of course that's a job for the optimizer, some would say - that is if you have an optimizer for your architecture. – Neowizard Jul 24 '14 at 09:38
7

If you're that worried about optimisation, you should use

for (;;)

because that has no tests. (cynic mode)

Tom Tanner
  • 9,244
  • 3
  • 33
  • 61
6

It seems to me this is one of those behavioral interview questions masked as a technical question. Some companies do this - they will ask a technical question that should be fairly easy for any competent programmer to answer, but when the interviewee gives the correct answer, the interviewer will tell them they are wrong.

The company wants to see how you will react in this situation. Do you sit there quietly and don't push that your answer is correct, due to either self-doubt or fear of upsetting the interviewer? Or are you willing to challenge a person in authority who you know is wrong? They want to see if you are willing to stand up for your convictions, and if you can do it in a tactful and respectful manner.

pacoverflow
  • 3,726
  • 11
  • 41
  • 71
5

Maybe the interviewer posed such a dumb question intentionally and wanted you to make three points:

  1. Basic reasoning. Both loops are infinite, so it's hard to talk about performance.
  2. Knowledge about optimisation levels. He wanted to hear from you that if you let the compiler do any optimisation for you, it would optimise the condition away, especially if the block was not empty.
  3. Knowledge about microprocessor architecture. Most architectures have a special CPU instruction for comparison with 0 (though not necessarily a faster one).
Rok Kralj
  • 46,826
  • 10
  • 71
  • 80
4

Here's a problem: If you actually write a program and measure its speed, the speed of both loops could be different! For some reasonable comparison:

unsigned long i = 0;
while (1) { if (++i == 1000000000) break; }

unsigned long i = 0;
while (2) { if (++i == 1000000000) break; }

with some code added that prints the time, some random effect like how the loop is positioned within one or two cache lines could make a difference. One loop might by pure chance be completely within one cache line, or at the start of a cache line, or it might straddle two cache lines. And as a result, whatever the interviewer claims is fastest might actually be fastest - by coincidence.

Worst case scenario: An optimising compiler doesn't figure out what the loop does, but does figure out that the values produced when the second loop is executed are the same as those produced by the first one. It might then generate full code for the first loop, but not for the second.

gnasher729
  • 51,477
  • 5
  • 75
  • 98
  • 1
    The "cache lines" reasoning will only work on a chip that has an extremely small cache. – sharptooth Jul 21 '14 at 08:11
  • 10
    This is beside the point. Questions about speed assume "all else equal". – Jim Balter Jul 21 '14 at 08:42
  • @sharptooth: Wrong. The problem is not whether code fits into cache or not - of course a tight loop will fit. The problem is that processors have to fetch instructions from memory, typically through cache, into the instruction decoder. And usually the instruction decoder can only fetch instructions from one cache line at a time. So fetching the same ten bytes of instructions from a single cache line will be faster than fetching it when its stretched out over two cache lines. – gnasher729 Jul 21 '14 at 11:01
  • 6
    @JimBalter: This was not a question of speed, it was a question about an argument with an interviewer. And the possibility that doing an actual test might prove the interviewer "right" - for reasons that have nothing to do with his argument but are pure coincidence. Which would put you into an embarrassing situation if you tried to prove he was wrong. – gnasher729 Jul 21 '14 at 11:03
  • @gnasher729: I see your point now. Yes, this could make a difference. – sharptooth Jul 21 '14 at 11:32
  • 1
    @gnasher729 All logical arguments aside, the case of coincidence which you talk about would be worth looking at. But still blindly believing such a case exists is not only naive but stupid. As long as you don't explain what, how or why that would happen this answer is useless. – user568109 Jul 21 '14 at 14:59
  • 4
    @gnasher729 "This was not a question of speed" -- I will not debate people who employ dishonest rhetorical techniques. – Jim Balter Jul 21 '14 at 19:36
  • @user568109: Download Agner Fog's excellent optimisation manual at http://www.agner.org/optimize/ . One optimisation technique is aligning loops within cache lines to minimise the number of instruction fetches. Get an old PowerPC manual. The G4 processor had a branch target cache holding two instruction words at the branch destination, and execution speed was optimal if a loop consisted of the 64 bits in the BTC, plus 128 bits from the next cache line. Is that enough evidence? – gnasher729 Jul 25 '14 at 00:07
  • @user568109: And then there was the 68020 processor, which actually managed to execute the _same_ loop in different numbers of cycles, depending in opaque internal state of the processor. And I mean the same loop, not a copy of the loop with identical instructions. Typically, the number of cycles per iteration could change after returning from a hardware interrupt. – gnasher729 Jul 25 '14 at 00:13
  • 1
    @gnasher729 You are just speculating that cache optimisation would make it run differently. It would affect both of them, so what is the difference in performance. For the second example are you seriously telling me interrupt handling would affect performance and cause the difference. I don't buy your whimsical sense of comparison. Let's not debate between accpeted ways to do a comparison and your ways. – user568109 Jul 25 '14 at 05:19
  • @user568109: I'm seriously telling you. Maybe I should tell you that I have optimised code in the late 70s. And optimised code on the 68020 in the late 80s. And measured it. On machines where interrupts could be turned off. And found loops that two and sometimes three iteration times with interrupts turned off, and variable times with interrupts turned on. – gnasher729 Jul 27 '14 at 13:10
  • To the first part of your comment: You don't seem to be very good at reading. I said that optimising cache alignment can improve the speed of code. The consequence is that code with non-optimised random cache alignment _may_ run at different speeds depending on random cache alignment. And that's what for example Intels own "Intel 64 and IA-32 Architectures Optimization Reference Manual" tells you. – gnasher729 Jul 27 '14 at 13:13
4

I used to program C and Assembly code back when this sort of nonsense might have made a difference. When it did make a difference we wrote it in Assembly.

If I were asked that question I would have repeated Donald Knuth's famous 1974 quote about premature optimization and walked if the interviewer didn't laugh and move on.

Peter Wooster
  • 6,009
  • 2
  • 27
  • 39
  • 1
    Given he also said "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering", I think you walking is unjustified. – Alice Aug 10 '14 at 03:20
3

They are both equal - the same.

According to the specification, anything that is not 0 is considered true, so even without any optimization a good compiler will not generate any comparison against the particular constant for while(1) or while(2). If it generates a check at all, it would be the same simple check for != 0 in both cases.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
3

Judging by the amount of time and effort people have spent testing, proving, and answering this very straightforward question, I'd say that both were made very slow by asking the question.

And so to spend even more time on it...

while (2) is ridiculous, because

while (1) and while (true) are historically used to make an infinite loop that expects break to be called at some stage inside the loop, based upon a condition that will certainly occur.

The 1 is simply there to always evaluate to true; therefore, saying while (2) is about as silly as saying while (1 + 1 == 2), which will also always evaluate to true.

And if you want to be completely silly just use: -

while (1 + 5 - 2 - (1 * 3) == 0.5 - 4 + ((9 * 2) / 4.0)) {
    if (succeed())
        break;
}

I think that the interviewer made a typo which did not affect the running of the code, but if he intentionally used the 2 just to be weird, then sack him before he puts weird statements all through your code, making it difficult to read and work with.

Random
  • 505
  • 2
  • 17
ekerner
  • 5,650
  • 1
  • 37
  • 31
  • The rollback you made reverts edit by a very high-rep user. These people usually know the policies very well and they’re here to teach them to you. I find the last line of you post adding nothing useful to your post. Even if I got the joke I am obviously missing, I would consider it too chatty, not worth the extra paragraph. – Palec Jul 29 '14 at 10:11
  • @Palec: Thanks for the comment. I found that the same guy edited many off my posts, mostly only to remove the sign off. So I figured he was just a reputation troll, and hence his reputation. Has online forums depleted the language - and common courtesies - so much that we no longer sign our writings? maybe I'm a little old school but it just seems rude to omit hellos and goodbyes. We are using apps, but we are still humans. – ekerner Jul 29 '14 at 10:50
  • 1
    On [SE], [salutations, taglines, thanks etc. are not welcome](http://meta.stackexchange.com/q/2950/238706). The same is mentioned in [help], specifically under [expected behavior](http://stackoverflow.com/help/behavior). No part of [SE], not even [SO], is a forum – they are Q&A sites. [Editing](http://stackoverflow.com/help/editing) is one of the differences. Also comments should serve only to provide feedback on the post, not for discussion ([chat]). And there are many other ways Q&A differs from a forum. More on that in [help] and on [meta]. – Palec Jul 29 '14 at 11:27
  • Note that when you have over 2000 rep (edit privilege), you gain no more rep from edits. When in doubt why and edit you disagree with has been applied to your post, or you see someone’s misbehavior, ask on [meta]. If the editor did the wrong thing, they could be notified by a mod (maybe they did not realize) or even get punished somehow (if the behavior was willfully malicious). Otherwise you get pointers to explanation why the edit/behavior is correct. Unnecessary rollback clutters revision history and can get you into rollback war. I roll back only when really sure the edit is incorrect. – Palec Jul 29 '14 at 11:36
  • Finally about signing, greeting, …: It might seem rude to not include these formalities, but it increases signal to noise ratio and no information is lost. You can choose any nickname you want, that is your signature. In most places you have your avatar, too. – Palec Jul 29 '14 at 11:52
  • @ekerner My edits earned me no rep; I was simply cleaning up your posts. I did not just remove your signature, I fixed any spelling/grammatical errors I noticed. When in doubt, you should do as others do. Do you see a single other person, anywhere on this site, using signatures in their posts? It's not because they are rude, they just understand the convention here is not to use them, any more than you would expect to see signatures and salutations in a Wikipedia article. I've handled the rollbacks; I've pretty efficient at performing large numbers of edits at this point. – user229044 Jul 29 '14 at 15:36
  • You should replace `==` by `!=` in your condition to make it `true` – anatolyg Jun 01 '16 at 07:41
  • @anatolyg: LOL I cant believe you made me double check haha! You should look again, its correct. The math of the left and on the right each evaluate to 1. – ekerner Jun 03 '16 at 04:37
  • @ekerner it's not true https://godbolt.org/z/676E36fh4 probably since rhs is floating point (not really sure about `C`) – apple apple Aug 12 '21 at 15:37
  • @appleapple In C, dividing by an int returns an int, so the `/ 4` truncates the result; you need to divide by a double like `4.0` or a float like `4.0f`: `if (1 + 5 - 2 - (1 * 3) == 0.5 - 4 + ((9 * 2) / 4.0)) return 1;` – ekerner Aug 14 '21 at 00:20
  • @ekerner I don't know `C` well, but AFAICT the flags make the compiler compile it as `C`. Even [with `-pedantic`](https://godbolt.org/z/zaWej6hYM) or [with `-std=c89`](https://godbolt.org/z/cYh7n67bj) it is still not true. – apple apple Aug 16 '21 at 14:42
  • @ekerner BTW, the part I'm in doubt about is the `0.5 - ...`, which doesn't involve division. – apple apple Aug 16 '21 at 14:45
  • @appleapple The 0.5 is of type double; it's not the problem, as the compiler knows it is a double. The problem is `9 * 2 / 4`: the `/ 4` causes the expected double or float result of 4.5 to be truncated to an int. To force it to a double we need to typecast, assign, or use `4.0` instead of `4`. Simply put, if you replace `4` with `4.0` in your godbolt sandbox, it works as expected. – ekerner Aug 16 '21 at 18:43
  • @ekerner I'm not talking about why my code returns 0, but about your code `while (1 + 5 - 2 - (1 * 3) == 0.5 - 4 + ((9 * 2) / 4))`, which you say should be true while it's not (unless you're saying gcc gets it wrong). @ anatolyg said the same thing. – apple apple Aug 17 '21 at 12:29
2

That depends on the compiler.

If it optimizes the code away, or if it evaluates 1 and 2 to true with the same number of instructions for a particular instruction set, the execution speed will be the same.

In real cases both will always be equally fast, but one could imagine a particular compiler and a particular system where the two were evaluated differently.

I mean: this is not really a language (C) related question.

user143522
  • 49
  • 1
0

Since people looking to answer this question want the fastest loop, I would have answered that both compile to the same assembly code, as stated in the other answers. Nevertheless, you could suggest 'loop unrolling' to the interviewer: a `do {} while` loop instead of the `while` loop.

Caution: you need to ensure that the loop always runs at least once.

The loop should have a break condition inside.

Also, for that kind of loop I would personally prefer `do {} while(42)`, since any nonzero integer does the job.

Antonin GAVREL
  • 9,682
  • 8
  • 54
  • 81
0

The obvious answer is: as posted, both fragments would run an equally busy infinite loop, which makes the program infinitely slow.

Although redefining C keywords as macros technically has undefined behavior, it is the only way I can think of to make either code fragment fast at all: you can add this line above the two fragments:

#define while(x) sleep(x);

It will indeed make `while(1)` twice as fast (or half as slow) as `while(2)`.

chqrlie
  • 131,814
  • 10
  • 121
  • 189
-4

The only reason I can think of why `while(2)` would be any slower is:

  1. The compiler optimizes the loop condition to

    cmp eax, 2

  2. When the subtraction occurs, you're essentially subtracting

    a. 00000000 - 00000010 cmp eax, 2

    instead of

    b. 00000000 - 00000001 cmp eax, 1

cmp only sets flags and does not store a result. So for the least significant bit, with b we know immediately whether we need to borrow or not, whereas with a we have to perform two subtractions before we get a borrow.

hdost
  • 883
  • 13
  • 22