135

A lot of times I see flag enum declarations that use hexadecimal values. For example:

[Flags]
public enum MyEnum
{
    None  = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}

When I declare an enum, I usually declare it like this:

[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1,
    Flag2 = 2,
    Flag3 = 4,
    Flag4 = 8,
    Flag5 = 16
}

Is there a reason or rationale for why some people choose to write the value in hexadecimal rather than decimal? The way I see it, it's easier to get confused when using hex values and accidentally write `Flag5 = 0x16` instead of `Flag5 = 0x10`.

Adi Lester
  • What'd make it any less likely that you'll write `10` rather than `0x10` if you used decimal numbers? Particularly since these are binary numbers we're dealing with, and hex is trivially convertible to/from binary? `0x111` is far less annoying to translate in one's head than `273`... – cHao Nov 04 '12 at 20:43
  • It's a shame that C# doesn't have a syntax that doesn't explicitly require writing out the powers of two. – Colonel Panic Nov 04 '12 at 21:59
  • You're doing something nonsensical here. The intent behind flags is that they will be bitwise combined. But the bitwise combinations are not elements of the type. The value `Flag1 | Flag2` is 3, and 3 does not correspond to any domain value of `MyEnum`. – Kaz Nov 05 '12 at 04:29
  • Where do you see that? with reflector? – giammin Nov 06 '12 at 17:41
  • @giammin It's a general question, not about a specific implementation. You can take open source projects or just code available on the net for example. – Adi Lester Nov 07 '12 at 14:43
  • Possible duplicate of [Why do enum permissions often have 0, 1, 2, 4 values?](http://stackoverflow.com/questions/9811114/why-do-enum-permissions-often-have-0-1-2-4-values) – Michael Freidgeim Oct 13 '16 at 05:13

7 Answers

203

Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits - the machine's world - and we're gonna play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of data matters. Using it hints at the fact that that's the situation we're in now.
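
For instance (a minimal sketch reusing the question's `MyEnum`; not part of the original answer), once flags are combined you are manipulating raw bits, and hex reads naturally:

MyEnum options = MyEnum.Flag1 | MyEnum.Flag3 | MyEnum.Flag5;  // 0x1 | 0x4 | 0x10 == 0x15
Console.WriteLine($"0x{(int)options:X}");                     // prints "0x15"
Console.WriteLine((options & MyEnum.Flag3) != 0);             // prints "True": Flag3 is set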

Also, I'm not sure about C#, but I know that in C `x << y` is a valid compile-time constant. Using bit shifts seems the clearest:

[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1 << 0,  // 1
    Flag2 = 1 << 1,  // 2
    Flag3 = 1 << 2,  // 4
    Flag4 = 1 << 3,  // 8
    Flag5 = 1 << 4   // 16
}
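
As an aside (not part of the original answer): since C# 7.0, binary literals with digit separators give a third notation that shows the set bit directly:

[Flags]
public enum MyEnum
{
    None  = 0b0000_0000,
    Flag1 = 0b0000_0001,
    Flag2 = 0b0000_0010,
    Flag3 = 0b0000_0100,
    Flag4 = 0b0000_1000,
    Flag5 = 0b0001_0000
}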
Shimmy Weitzhandler
exists-forall
  • That's very interesting and it's actually a valid C# enum as well. – Adi Lester Nov 04 '12 at 20:57
  • +1 with this notation you never make an error in enum value calculations – Sergey Berezovskiy Nov 04 '12 at 21:04
  • @lazyberezovsky: Or you could just omit the values and let the compiler assign them - even less error prone. But this is indeed cool for when you must explicitly set values. – Allon Guralnek Nov 06 '12 at 18:32
  • @AllonGuralnek: Does the compiler assign unique bit positions given the [Flags] annotation? Generally it starts at 0 and goes in increments of 1, so any enum value assigned 3 (decimal) would be 11 in binary, setting two bits. – Eric J. Nov 06 '12 at 18:47
  • @Eric: Huh, I don't know why but I was always certain that it did assign values of powers of two. I just checked, and I guess I was wrong. – Allon Guralnek Nov 06 '12 at 19:49
  • Another fun fact with the `x << y` notation: `1 << 10 = KB`, `1 << 20 = MB`, `1 << 30 = GB` and so on. It is really nice if you want to make a 16 KB array for a buffer; you can just go `var buffer = new byte[16 << 10];` – Scott Chamberlain Nov 06 '12 at 22:04
  • Actually even when dealing with human-readable data using hexadecimals is quite nice. Example: directions (north, north east, south etc.). We can do `enum Direction { NORTH = 0x0001, SOUTH = 0x0010, EAST = 0x0100, WEST = 0x1000, NORTH_EAST = NORTH | EAST, NORTH_WEST = NORTH | WEST, SOUTH_EAST = SOUTH | EAST, SOUTH_WEST = SOUTH | WEST };`. If this was done with decimals one might have a pretty tough time to get it right when it comes to OR (or other logical operations) while with hexadecimals it's quite readable (code-wise). – rbaleksandar Sep 07 '16 at 12:06
  • If you need more than 32 distinct flags this does NOT work: `1 << 1` and `1 << 33` give the same result, because 1 and 33 have the same low-order five bits (shift counts on an `int` are taken mod 32). – Joaquinglezsantos Jul 21 '17 at 13:57
  • @ScottChamberlain 8 years on and that comment is still worth a bookmark in and of itself. Brilliant. – GrayedFox Jan 09 '21 at 17:16
49

It makes it easy to see that these are binary flags.

None  = 0x0,  // == 00000
Flag1 = 0x1,  // == 00001
Flag2 = 0x2,  // == 00010
Flag3 = 0x4,  // == 00100
Flag4 = 0x8,  // == 01000
Flag5 = 0x10  // == 10000

Though the progression makes it even clearer:

Flag6 = 0x20  // == 00100000
Flag7 = 0x40  // == 01000000
Flag8 = 0x80  // == 10000000
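
You can check the bit patterns at runtime (a small sketch; `Convert.ToString` with base 2 prints the binary form):

foreach (MyEnum f in Enum.GetValues(typeof(MyEnum)))
    Console.WriteLine($"{f,-5} = {Convert.ToString((int)f, 2).PadLeft(5, '0')}");
// None  = 00000
// Flag1 = 00001
// ...
// Flag5 = 10000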
Oded
  • I actually add a 0 in front of 0x1, 0x2, 0x4, 0x8... So I get 0x01, 0x02, 0x04, 0x08 and 0x10... I find that easier to read. Am I messing something up? – LightStriker Nov 04 '12 at 20:46
  • @Light - Not at all. It is very common so you can see how these align. Just makes the bits more explicit :) – Oded Nov 04 '12 at 20:46
  • @LightStriker just going to throw it out there that *does* matter if you aren't using hex. Values that start with only a zero are interpreted as octal. So `012` is actually `10`. – Jonathon Reinhart Nov 04 '12 at 20:49
  • @JonathonRainhart: That I know. But I always use hex when using bitfields. I'm not sure I would feel safe using `int` instead. I know it's stupid... but habits die hard. – LightStriker Nov 04 '12 at 20:50
  • Best answer in my opinion. +1 :) – Momoro Apr 22 '21 at 22:29
48

I think it's just because the sequence is always 1, 2, 4, 8 and then you add a 0.
As you can see:

0x1 = 1 
0x2 = 2
0x4 = 4
0x8 = 8
0x10 = 16
0x20 = 32
0x40 = 64
0x80 = 128
0x100 = 256
0x200 = 512
0x400 = 1024
0x800 = 2048

and so on. As long as you remember the sequence 1-2-4-8, you can build all the subsequent flags without having to remember the powers of 2.
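
The whole table is mechanical enough to generate (a quick sketch):

for (int i = 0; i < 12; i++)
    Console.WriteLine($"0x{(1 << i):X} = {1 << i}");
// prints 0x1 = 1 through 0x800 = 2048, one per line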

Neeku
VRonin
17

Because [Flags] means that the enum is really a bitfield. With [Flags] you can use the bitwise AND (&) and OR (|) operators to combine the flags. When dealing with binary values like this, it is almost always more clear to use hexadecimal values. This is the very reason we use hexadecimal in the first place. Each hex character corresponds to exactly one nibble (four bits). With decimal, this 1-to-4 mapping does not hold true.
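
The 1-to-4 mapping is easy to demonstrate (a minimal sketch): each hex digit expands to exactly one group of four bits, which no decimal digit does:

Console.WriteLine(Convert.ToString(0xA5, 2));  // 10100101 -- hex A is 1010, hex 5 is 0101
Console.WriteLine(Convert.ToString(95, 2));    // 1011111  -- the decimal digits 9 and 5 map to no bit groups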

Jonathon Reinhart
6

Because there is a mechanical, simple way to double a power of two in hex. In decimal, this is hard: it requires long multiplication in your head. In hex it is a simple change. You can carry this out all the way up to `1UL << 63`, which you can't do in decimal.
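
For instance (a quick check): doubling just walks 1-2-4-8 within a digit and then carries into the next digit, all the way to the top bit of a 64-bit value:

Console.WriteLine($"0x{(1UL << 62):X}");  // 0x4000000000000000
Console.WriteLine($"0x{(1UL << 63):X}");  // 0x8000000000000000 -- doubling is a one-digit change
Console.WriteLine(1UL << 63);             // 9223372036854775808 -- try doubling that in your head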

usr
  • I think this makes the most sense. For enums with a large set of values it's the easiest way to go (with the possible exception of @Hypercube's example). – Adi Lester Nov 08 '12 at 21:29
5

Because it is easier for humans to follow where the bits are in the flag. Each hexadecimal digit holds exactly four bits of binary.

0x0 = 0000
0x1 = 0001
0x2 = 0010
0x3 = 0011

... and so on

0xF = 1111

Typically you want your flags not to overlap bits, and the easiest way of doing and visualizing that is using hexadecimal values to declare your flags.

So, if you need flags with 16 bits, you will use 4-digit hexadecimal values, and that way you can avoid erroneous values:

0x0001 //=     1 = 0000 0000 0000 0001
0x0002 //=     2 = 0000 0000 0000 0010
0x0004 //=     4 = 0000 0000 0000 0100
0x0008 //=     8 = 0000 0000 0000 1000
...
0x0010 //=    16 = 0000 0000 0001 0000
0x0020 //=    32 = 0000 0000 0010 0000
...
0x8000 //= 32768 = 1000 0000 0000 0000
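
If you want a safety net on top of the visual check, non-overlap is cheap to assert (a hypothetical helper, not from the original answer):

using System.Diagnostics;

static void AssertDisjoint(params int[] flags)
{
    int seen = 0;
    foreach (int f in flags)
    {
        Debug.Assert((seen & f) == 0, "two flags share a bit");  // fails on overlap
        seen |= f;
    }
}

// e.g. AssertDisjoint(0x0001, 0x0002, 0x0004, 0x0008);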
Only You
  • This explanation is good... only thing missing is the binary equivalent to show the ultimate line-up of the bits, nibbles, and bytes ;) – GoldBishop Oct 31 '17 at 15:59
0

Because hex aligns more cleanly with the binary it represents, compared with decimal values:

0x001 = 0000 0000 0001 // 1
0x002 = 0000 0000 0010 // 2
0x004 = 0000 0000 0100 // 4
0x008 = 0000 0000 1000 // 8

0x010 = 0000 0001 0000 // 16
0x020 = 0000 0010 0000 // 32
0x040 = 0000 0100 0000 // 64
0x080 = 0000 1000 0000 // 128

0x100 = 0001 0000 0000 // 256
0x200 = 0010 0000 0000 // 512
0x400 = 0100 0000 0000 // 1024
0x800 = 1000 0000 0000 // 2048

// etc.
Johnathan Barclay