Is it correct to do this?

typedef unsigned int Index;

enum
{
  InvalidIndex = (Index) -1 
};

I have read that it is unsafe across platforms, but I have seen this in so much "professional" code...

Lightness Races in Orbit
relaxxx

4 Answers

What you read was probably out of Fear, Uncertainty and Doubt. The author of whatever you read probably thought that (unsigned)-1 was underflowing and potentially causing chaos on systems where the bit representation doesn't happen to give you UINT_MAX for your trouble.

However, the author is wrong, because the standard guarantees that unsigned values wrap around when they reach the edge of their range. No matter what bit representations are involved, (unsigned)-1 is std::numeric_limits<unsigned>::max(). Period.

I'm not sure what the benefit of it is here, though. You're going to get that large, maximum value. If that is fine, I guess you're good to go.

James
Lightness Races in Orbit
  • You may not know the underlying type of the `enum`, but you know that it will be able to hold the value of any of the enum constants, or you will get a compile time error. `UINT_MAX` is a value, and that is the only legal value the enum constant can take. – James Kanze May 31 '11 at 18:21
If you wanted to get UINT_MAX, I'm pretty sure that's actually the best way of doing it. Casting -1 to unsigned is guaranteed to yield UINT_MAX. It's explained in the comments.

salezica
  • `UINT_MAX`. And it is guaranteed but *not* because of the bitwise signed representation. It's guaranteed because casting to an unsigned type performs modulo arithmetic. It just so happens that with 2's complement representation this leaves the bit pattern unchanged, but in 1s' complement and sign-magnitude it does change the bit pattern, and the cast is guaranteed to do that work. – Steve Jessop May 31 '11 at 18:01
  • You meant `INT_MAX`, and it's `UINT_MAX` that's relevant here anyway. And not because of bits, but because the standard says unsigned values wrap. Then when you convert to `signed` again for use in the enum (_if_ the underlying type is signed), you get -1 back. – Lightness Races in Orbit May 31 '11 at 18:03
  • @Tomalak: "you get -1 back" - in practice but not in principle (4.7/3). – Steve Jessop May 31 '11 at 18:08
  • @Tomalak: I'm not claiming that I like the rule, just that I like pointing it out in response to SO questions! – Steve Jessop May 31 '11 at 18:10
  • @SteveJessop: I'm not claiming that I disagree with this behaviour; merely that C++ just _had_ to come and find something to get me on :P – Lightness Races in Orbit May 31 '11 at 18:11
It is unsafe because the underlying type of an enum is not clearly defined.

See "Are C++ enums signed or unsigned?" for more information on that.

At the end of the day, what you have written looks like it would end up being `(int)(unsigned int)(int)` in translation, so I am not sure what you are trying to accomplish.

James
  • It's perfectly safe, because the value of the expression is perfectly defined, and either that value can be represented in the enum, or it is an error. – James Kanze May 31 '11 at 18:23
Not quite sure if this is implementation-defined, but casting -1 (which is obviously signed) to an unsigned integer causes an underflow, which usually leads to extremely large values (i.e. UINT_MAX).

cmende