
C++0x will make the following code, and similar code, ill-formed, because it requires a so-called narrowing conversion of a double to an int.

int a[] = { 1.0 };

I'm wondering whether this kind of initialization is used much in real-world code. How much code will be broken by this change? Is it much effort to fix it in your code, if your code is affected at all?


For reference, see 8.5.4/6 of n3225:

A narrowing conversion is an implicit conversion

  • from a floating-point type to an integer type, or
  • from long double to double or float, or from double to float, except where the source is a constant expression and the actual value after conversion is within the range of values that can be represented (even if it cannot be represented exactly), or
  • from an integer type or unscoped enumeration type to a floating-point type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type, or
  • from an integer type or unscoped enumeration type to an integer type that cannot represent all the values of the original type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type.
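To make the four bullets concrete, here is a small sketch of my own (these examples are not part of the quoted standard text; the commented-out lines are the ones that become ill-formed in brace initialization, assuming a 32-bit IEEE float and an 8-bit char):

// int i[]   = { 1.0 };       // floating-point to integer: always narrowing
float f[]    = { 1.0 };       // double to float: OK, constant within float's range
// float g[] = { 1e40 };      // double to float: constant out of float's range
float h[]    = { 5 };         // int to float: OK, constant round-trips exactly
// float k[] = { 16777217 };  // int to float: 2^24+1 does not round-trip
char c[]     = { 42 };        // int to char: OK, constant fits and round-trips
// char d[]  = { 300 };       // int to char: constant does not fit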
Johannes Schaub - litb
  • Let's hope not. I don't see this type of initialization, but they assured us they are trying their best not to break any code base. C++0x has good improvements nonetheless. – Saif al Harthi Dec 13 '10 at 22:38
  • @litb: Will this also be ill-formed, or is it just when there's a conversion taking place? `int a[10] = {0};` – John Dibling Dec 13 '10 at 22:44
  • Assuming this is valid only for initialization of built-in types, I can't see how this would harm. Sure, this may break some code. But it should be easy to fix. – Johan Kotlinski Dec 13 '10 at 22:45
  • I hope not too; there's lots of legacy code where I work where this type of conversion is done haphazardly! One thing going for us, though: using C++0x in production is a pipe dream at the moment... :) – Nim Dec 13 '10 at 22:45
  • @John Dibling: No, the initialization is not ill-formed when the value can be exactly represented by the target type. (And `0` is already an `int` anyway.) – aschepler Dec 13 '10 at 22:50
  • @Nim: Note that this is only ill-formed within `{` curly brace initializers `}`, and the only legacy usage of those is for arrays and POD structs. Also, if existing code has explicit casts where they belong, it won't break. – aschepler Dec 13 '10 at 22:53
  • So in fact, according to the rules, `unsigned char x = { -1 };` will become ill-formed, as will `unsigned char x = { ~0 };` on a two's complement machine :) – Johannes Schaub - litb Dec 13 '10 at 22:55
  • @Johannes: Would you also post the language that forbids narrowing conversion in this context (or at least a pointer to the right section of the draft)? Thanks. – Ben Voigt Dec 13 '10 at 23:40
  • As commented below, I know that at least a lot of OpenGL code will be affected (float vertex arrays vs. integer-based coordinates in the codebase, etc.). I guess there are more examples where interfacing with C-style APIs is needed. – Georg Fritzsche Dec 14 '10 at 02:13
  • @litb: Is there any reason you went with an array declaration rather than plain `int a = 1.0;` or `int a; a = 1.0;`? – j_random_hacker Dec 14 '10 at 04:43
  • @j_random_hacker: as the working paper says, `int a = 1.0;` is still valid. – Johannes Schaub - litb Dec 14 '10 at 10:26
  • @litb: Thanks. Actually I find that understandable but disappointing -- IMHO it would have been much better to require explicit syntax for all narrowing conversions right from the start of C++. – j_random_hacker Dec 14 '10 at 11:35

8 Answers


I ran into this breaking change when I used GCC. The compiler printed an error for code like this:

void foo(const unsigned long long &i)
{
    unsigned int a[2] = {i & 0xFFFFFFFF, i >> 32};
}

In function void foo(const long long unsigned int&):

error: narrowing conversion of (((long long unsigned int)i) & 4294967295ull) from long long unsigned int to unsigned int inside { }

error: narrowing conversion of (((long long unsigned int)i) >> 32) from long long unsigned int to unsigned int inside { }

Fortunately, the error messages were straightforward and the fix was simple:

void foo(const unsigned long long &i)
{
    unsigned int a[2] = {static_cast<unsigned int>(i & 0xFFFFFFFF),
            static_cast<unsigned int>(i >> 32)};
}

The code was in an external library, with only two occurrences in one file. I don't think the breaking change will affect much code. Novices might get confused, though.

Timothy003

Try adding -Wno-narrowing to your CFLAGS, for example:

CFLAGS += -std=c++0x -Wno-narrowing
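The same flag works on a direct compiler invocation; for example (foo.cpp is just a placeholder file name here, and the flag is recognized by g++ 4.7 and later):

g++ -std=c++0x -Wno-narrowing foo.cpp

Note that for C++ sources built through make's implicit rules, the variable that usually applies is CXXFLAGS rather than CFLAGS.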
Kukuh Indrayana

I would be surprised and disappointed in myself to learn that any of the C++ code I wrote in the last 12 years had this sort of problem. But most compilers would have spewed warnings about any compile-time "narrowings" all along, unless I'm missing something.

Are these also narrowing conversions?

unsigned short b[] = { -1, INT_MAX };

If so, I think they might come up a bit more often than your floating-type to integral-type example.
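If they are (and under the quoted wording they are, since neither -1 nor INT_MAX round-trips through unsigned short), the fix would look something like this sketch of mine (assuming <climits> and the common 16-bit unsigned short, 32-bit int):

#include <climits>

unsigned short b[] = { static_cast<unsigned short>(-1),        // i.e. USHRT_MAX
                       static_cast<unsigned short>(INT_MAX) }; // also USHRT_MAX here

// or, clearer still, name the value actually meant
// (USHRT_MAX fits and round-trips, so it is not narrowing):
unsigned short c[] = { USHRT_MAX, USHRT_MAX };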

aschepler
  • I don't understand why you say this would be a not-uncommon thing to find in code. What is the logic behind using -1 or INT_MAX instead of USHRT_MAX? Was USHRT_MAX not in climits in late 2010? –  Apr 20 '17 at 19:26

A practical instance that I have encountered:

float x = 4.2; // an input argument
float a[2] = {x-0.5, x+0.5};

The numeric literals are implicitly double, which causes the expressions to be computed in double; the results must then be narrowed back to float inside the braces.
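Two possible repairs, as a sketch of mine (not part of the original answer); which one fits depends on whether float here is really a fixed type:

float x = 4.2f; // an input argument

// Option 1: keep the arithmetic in float with float literals:
float a[2] = { x - 0.5f, x + 0.5f };

// Option 2: compute in double but make the narrowing explicit;
// this also survives 'float' being a typedef or template parameter:
float b[2] = { static_cast<float>(x - 0.5), static_cast<float>(x + 0.5) };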

Jed
  • so make it `float` by writing `0.5f`. ;) – underscore_d Oct 04 '16 at 22:43
  • @underscore_d Doesn't work if `float` was a typedef or template parameter (at least without loss of precision), but the point is that the code as written worked with the correct semantics and became an error with C++11. I.e., the definition of a "breaking change". – Jed Oct 05 '16 at 20:21

I wouldn't be all that surprised if somebody gets caught out by something like:

float ra[] = {0, CHAR_MAX, SHRT_MAX, INT_MAX, LONG_MAX};

(on my implementation, the last two don't produce the same result when converted back to int/long, hence are narrowing)

I don't remember ever writing this, though. It's only useful if an approximation to the limits is useful for something.

This seems at least vaguely plausible too:

void some_function(int val1, int val2) {
    float asfloat[] = {val1, val2};    // not in C++0x
    double asdouble[] = {val1, val2};  // not in C++0x
    int asint[] = {val1, val2};        // OK
    // now do something with the arrays
}

but it isn't entirely convincing, because if I know I have exactly two values, why put them in arrays rather than just float floatval1 = val1, floatval2 = val2;? What's the motivation, though, for why that should compile (and work, provided the loss of precision is within acceptable accuracy for the program) while float asfloat[] = {val1, val2}; shouldn't? Either way I'm initializing two floats from two ints; it's just that in one case the two floats happen to be members of an aggregate.

The rule seems particularly harsh in cases where a non-constant expression results in a narrowing conversion even though (on a particular implementation) all values of the source type are representable in the destination type and convertible back to their original values:

char i = something();
static_assert(CHAR_BIT == 8, "8-bit char assumed"); // C++11 static_assert needs a message
double ra[] = {i}; // how is this worse than using a constant value?

Assuming there's no bug, presumably the fix is always to make the conversion explicit. Unless you're doing something odd with macros, I think an array initializer only appears close to the type of the array, or at least to something representing the type, which could be dependent on a template parameter. So a cast should be easy, if verbose.
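For example, the asfloat/asdouble initializers above could be repaired along these lines (my sketch, not part of the original answer):

void some_function(int val1, int val2) {
    float asfloat[]   = { static_cast<float>(val1),  static_cast<float>(val2)  };
    double asdouble[] = { static_cast<double>(val1), static_cast<double>(val2) };
    // verbose, but it states explicitly that any loss of precision is intended
}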

Steve Jessop

It did indeed turn out to be a breaking change in practice: porting C++03 code bases to C++11 caused enough real-life pain that gcc downgraded narrowing from an error to a warning in many cases. See this comment in a gcc bug report:

The standard only requires that "a conforming implementation shall issue at least one diagnostic message" so compiling the program with a warning is allowed. As Andrew said, -Werror=narrowing allows you to make it an error if you want.

G++ 4.6 gave an error but it was changed to a warning intentionally for 4.7 because many people (myself included) found that narrowing conversions were one of the most commonly encountered problems when trying to compile large C++03 codebases as C++11. Previously well-formed code such as char c[] = { i, 0 }; (where i will only ever be within the range of char) caused errors and had to be changed to char c[] = { (char)i, 0 };
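So with GCC 4.7 and later the strictness is opt-in; an invocation along these lines (foo.cpp is a placeholder file name) restores the diagnostic as a hard error:

g++ -std=c++11 -Werror=narrowing foo.cpp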

Shafik Yaghmour

Narrowing conversion errors interact badly with implicit integer promotion rules.

I hit an error with code that looked like this:

struct char_t {
    char a;
};

void function(char c, char d) {
    char_t a = { c+d };
}

This produces a narrowing conversion error (which is correct according to the standard): c and d are implicitly promoted to int, and the resulting int isn't allowed to be narrowed back to char in an initializer list.

OTOH

void function(char c, char d) {
    char a = c+d;
}

is of course still fine (otherwise all hell would break loose). But surprisingly, even

template<char c, char d>
void function() {
    char_t a = { c+d };
}

is OK and compiles without a warning if the sum of c and d is less than CHAR_MAX. I still think this is a defect in C++11, but the people there think otherwise - possibly because it isn't easy to fix without getting rid of either implicit integer conversion (which is a relic from the past, when people wrote code like char a=b*c/d and expected it to work even if (b*c) > CHAR_MAX) or narrowing conversion errors (which are possibly a good thing).
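The usual workaround here, as in the other answers, is an explicit cast; a sketch of mine (not part of the original answer):

void function(char c, char d) {
    char_t a = { static_cast<char>(c + d) };  // narrowing made explicit
}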

Gunther Piez
  • I ran into the following which is really annoying nonsense: `unsigned char x; static unsigned char const m = 0x7f; ... unsigned char r = { x & m };` <-- narrowing conversion inside { }. Really? So the operator& also implicitly converts unsigned chars to int? Well I don't care, the result is still guaranteed to be an unsigned char, argh. – Carlo Wood Dec 07 '16 at 13:27
  • "_implicit integer conversion_" promotions? – curiousguy Jul 28 '18 at 13:39

It looks like GCC 4.7 no longer gives errors for narrowing conversions, but warnings instead.

kyku