
For input 0xffffffff, the following C code works fine with no optimization but produces wrong results when compiled with -O1. The other compilation options are -g -m32 -Wall. The code was tested with clang-900.0.39.2 on macOS 10.13.2.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;

    char *endp;
    int x = (int)strtoll(argv[1], &endp, 0);

    int mask1 = 0x55555555;
    int mask2 = 0x33333333;
    int count = (x & mask1) + ((x >> 1) & mask1);

    int v1 = count >> 2;
    printf("v1 = %#010x\n", v1);

    int v2 = v1 & mask2;
    printf("v2 = %#010x\n", v2);

    return 0;
}

Input: 0xffffffff

Outputs with -O0: (expected)

v1 = 0xeaaaaaaa

v2 = 0x22222222

Outputs with -O1: (wrong)

v1 = 0x2aaaaaaa

v2 = 0x02222222

Below are disassembled instructions for the line "int v1 = count >> 2;" with -O0 and -O1.

With -O0:

sarl $0x2, %esi

With -O1:

shrl $0x2, %esi

Below are disassembled instructions for the line "int v2 = v1 & mask2;" with -O0 and -O1.

With -O0:

andl -0x24(%ebp), %esi // -0x24(%ebp) stores 0x33333333

With -O1:

andl $0x13333333, %esi // why does the optimization change 0x33333333 to 0x13333333?

In addition, if x is set to 0xffffffff locally instead of getting its value from arguments, the code will work as expected even with -O1.

P.S.: The code is an experimental piece based on my solution to the Data Lab from the CS:APP course @ CMU. The lab asks the student to implement a function that counts the number of 1 bits in an int variable without using any type other than int.

Zack Zhu
  • 5
    `0xffffffff` is > `MAX_INT` in your case and overflow of `int` is undefined behavior. – Stargateur Dec 22 '17 at 00:12
  • 1
    @Stargateur Are you sure? Shouldn't this be -1? – Sergey Kalinichenko Dec 22 '17 at 00:13
  • 1
    @dasblinkenlight signed overflow is used as an example of undefined behavior, http://port70.net/~nsz/c/c11/n1570.html#3.4.3p3. – Stargateur Dec 22 '17 at 00:16
  • 3
    That said, I suggest you use `unsigned int`, for many reasons; bit operators should only be used with unsigned integers. `printf("v1 = %#010x\n", v1);` => `%x` expects an `unsigned int`, so it's undefined behavior to pass an `int`. – Stargateur Dec 22 '17 at 00:20
  • 1
    Can reproduce with clang... I get the same (correct) output with gcc for both `-O0` and `-O1` – yano Dec 22 '17 at 01:38
  • 2
    but I think the issue is right-shifting negative signed types is implementation-defined (and apparently clang and gcc handle this differently). Probably safest to confine bit operations to unsigned types: https://stackoverflow.com/questions/4009885/arithmetic-bit-shift-on-a-signed-integer – yano Dec 22 '17 at 02:01
  • 1
    I'm surprised that it works at all due to the missing `#include` statements – user3629249 Dec 22 '17 at 02:34
  • 1
    never access beyond `argv[0]` without first checking `argc` to assure the user actually entered the expected command line parameter – user3629249 Dec 22 '17 at 02:37
  • This statement: `int count = (x & mask1) + ((x >> 1) & mask1);` will overflow the variable `count`. Suggest casting the expressions to `ssize_t`, or better `int64_t`, and declaring `count` as that longer type. – user3629249 Dec 22 '17 at 02:42
  • The code really needs to check that the parameter was actually entered by the user; otherwise the code will seg fault when the function `strtol()` is called. – user3629249 Dec 22 '17 at 02:50
  • 2
    @Stargateur: Converting an overly large integer value to a signed integer type is not *undefined behavior*. It is *implementation-defined behavior*. Signed overflow causes undefined behavior when it happens during evaluation of arithmetic operators (this is what your link talks about). Overflow during *conversion* to a signed integer type is *implementation-defined* (http://port70.net/~nsz/c/c11/n1570.html#6.3.1.3p3). Only floating-point values can cause UB when converted to integer types. – AnT stands with Russia Dec 22 '17 at 07:36
  • 1
    This code suffers from "sloppy typing", meaning that the programmer did not give any thought about the size or signedness needed for their algorithm, they just typed `int` all over the place. "Sloppy typing" tends to lead to various forms of poorly-defined behavior bugs and always leads to completely non-portable code. Professional programmers use `stdint.h`. – Lundin Dec 22 '17 at 07:44
  • @user3629249 Sorry for the missing parts. I just added the #include statements and argc checking. – Zack Zhu Dec 22 '17 at 07:53
  • @Lundin Thanks for the tip on stdint.h. The code is an experimental piece based on my solution to the Data Lab from the CS:APP course @ CMU. The lab asks the student to implement a function that counts the number of 1 bits in an int variable without using any type other than int. Now I understand right-shifting an int has uncertainties, but I still don't understand why the optimization changes mask2 (0x33333333) to 0x13333333. – Zack Zhu Dec 22 '17 at 08:42
  • @ZackZhu No idea. gcc -O1 and -O3 do not change that mask in the disassembly. – Lundin Dec 22 '17 at 08:55

3 Answers

2

As several commenters have pointed out, right-shifting negative signed values is not well defined.

I changed the declaration and initialization of x to

unsigned int x = (unsigned int)strtoll(argv[1], &endp, 0);

and got consistent results under -O0 and -O1. (But before making that change, I was able to reproduce your result with clang under macOS.)

Steve Summit
  • I can confirm changing the type of x to unsigned removed the inconsistency. But I still don't understand why the optimization changes mask2 to 0x13333333. – Zack Zhu Dec 22 '17 at 08:21
  • 1
    @ZackZhu I don't quite understand, either, although it could simply be the difference between shifting in a 0 or shifting in a 1 at the left edge. But, to me, it's not even an interesting question. Although what we're talking about here is implementation-defined behavior, rather than undefined, much of [this other answer](https://stackoverflow.com/questions/37087286/c-program-crashes-when-adding-an-extra-int/37087465#37087465) applies. – Steve Summit Dec 22 '17 at 16:42
2

As you have discovered, you invoke implementation-defined behavior in your attempt to store 0xffffffff (4294967295) in int x (where INT_MAX is 0x7fffffff, or 2147483647). See C11 Standard §6.3.1.3 (draft n1570), Signed and unsigned integers. Whenever you use strtoll (or strtoull; the single-'l' versions strtol/strtoul would be fine as well) and attempt to store the value as an int, you must check the result against INT_MAX before making the assignment with a cast (or, if using exact-width types, against INT32_MAX, or UINT32_MAX for unsigned).

Further, in circumstances such as this where bit operations are involved, you can remove uncertainty and ensure portability by using the exact-width types provided in stdint.h and the associated format specifiers provided in inttypes.h. Here there is no need for a signed int; it would make more sense to handle all values as unsigned (or uint32_t).

For example, the following provides a default value for the input to avoid the undefined behavior invoked if your code is executed without an argument (you can also simply test argc), replaces the use of strtoll with strtoul, validates that the input fits within the associated variable before assignment (handling the error if it does not), and then makes use of the unambiguous exact-width types, e.g.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

int main (int argc, char *argv[]) {

    uint64_t tmp = argc > 1 ? strtoul (argv[1], NULL, 0) : 0xffffffff;

    if (tmp > UINT32_MAX) {
        fprintf (stderr, "input exceeds UINT32_MAX.\n");
        return 1;
    }

    uint32_t x = (uint32_t)tmp,
        mask1 = 0x55555555,
        mask2 = 0x33333333,
        count = (x & mask1) + ((x >> 1) & mask1),
        v1 = count >> 2,
        v2 = v1 & mask2;

    printf("v1 = 0x%" PRIx32 "\n", v1);

    printf("v2 = 0x%" PRIx32 "\n", v2);

    return 0;
}

Example Use/Output

$ ./bin/masktst
v1 = 0x2aaaaaaa
v2 = 0x22222222

Compiled with

$ gcc -Wall -Wextra -pedantic -std=gnu11 -Ofast -o bin/masktst masktst.c

Look things over and let me know if you have further questions.

David C. Rankin
  • Converting an overly large integer value to a signed integer type is not *undefined behavior*. It is *implementation-defined behavior*. – AnT stands with Russia Dec 22 '17 at 07:33
  • 1
    Good point, it is the finer points of the 'defined' verbiage that tripped me up. (the Annex J.2 or J.3) The actual standard reference (outside the Annex) is [6.3.1.3 Signed and unsigned integers](http://port70.net/~nsz/c/c11/n1570.html#6.3.1.3) Whether undefined or implementation defined -- it is something I try to avoid like the plague... – David C. Rankin Dec 22 '17 at 08:02
-2

This statement:

int x = (int)strtoll(argv[1], &endp, 0);

results in a signed overflow, which is undefined behavior.

(On my system, the result is -1431655766.)

The resulting values tend to go downhill from there:

The variable v1 receives -357913942.

The variable v2 receives 572662306.

Also, the %x format specifier only works correctly with unsigned variables.

user3629249
  • 1
    This is out-of-range conversion, not overflow. See 6.3.1.3/3 for behaviour of out-of-range conversion (which is not UB). – M.M Dec 22 '17 at 03:11
  • 1
    Or, it would be if OP actually included `stdlib.h` -- as is, the call to `strtoll` is a constraint violation in C99 for calling an undeclared function (and `strtoll` did not exist in C89) – M.M Dec 22 '17 at 03:17