
Is there a performance issue or any other reason?

  • You are going to have to give some example of their use that you find interesting/confusing. At the moment you are just asking about some arbitrary numbers. (They may be interesting because they both represent one byte with all bits set, but why you might want that depends on the context). – BoBTFish Jul 18 '17 at 07:03
  • Use of hex literals typically involves dealing with bit flags / masks, as in the sketch after these comments. – user7860670 Jul 18 '17 at 07:04
  • You can find *everything* from here too: https://stackoverflow.com/questions/81656/where-do-i-find-the-current-c-or-c-standard-documents – Bathsheba Jul 18 '17 at 07:13
  • Speaking only for myself: If you see a hex literal in my code, that means that the bit pattern is significant. If you see a decimal literal, it means that the numeric value is significant. – Solomon Slow Jul 18 '17 at 07:47
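
As a rough illustration of the bit-flag / mask use the comments mention, here is a minimal sketch; the flag names and values are invented for the example, and hexadecimal just makes the bit positions easier to read than decimal would:

#include <cstdint>
#include <iostream>

// Invented flags: each constant sets exactly one bit.
constexpr std::uint8_t FLAG_READ  = 0x01; // bit 0
constexpr std::uint8_t FLAG_WRITE = 0x02; // bit 1
constexpr std::uint8_t FLAG_EXEC  = 0x04; // bit 2
constexpr std::uint8_t LOW_NIBBLE = 0x0f; // mask for the low four bits

int main()
{
    std::uint8_t perms = FLAG_READ | FLAG_WRITE;   // set two flags
    bool can_exec = (perms & FLAG_EXEC) != 0;      // test a flag
    std::uint8_t low = perms & LOW_NIBBLE;         // keep only the low nibble

    std::cout << std::boolalpha << can_exec << ' ' << int(low) << '\n'; // prints "false 3"
}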

1 Answer


There is absolutely no performance benefit in using one over the other, but note that a hexadecimal literal's implied type can be an unsigned integral type (whereas an unsuffixed decimal literal is always signed), and that can have surprising effects:

void foo(const unsigned&)
{
    // pay me a bonus
}

void foo(const long&)
{
    // reformat my hard disk
}

int main()
{
    foo(0xffffffff); // thankfully unsigned on a platform with a 32-bit int.
}

See http://en.cppreference.com/w/cpp/language/integer_literal, including the link to C at the bottom of that page.
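
For contrast, here is a minimal sketch (assuming a 32-bit int, as in the example above) of how the two spellings of the same value get different types; 4294967295 is just 0xffffffff written in decimal:

#include <type_traits>

int main()
{
    // Hex: 0xffffffff does not fit in a 32-bit int, so the next candidate
    // type, unsigned int, is used.
    static_assert(std::is_unsigned<decltype(0xffffffff)>::value,
                  "hex literal became unsigned");

    // Decimal: same value, but unsuffixed decimal literals never consider
    // unsigned types, so it becomes long (or long long), never unsigned,
    // and would not pick the foo(const unsigned&) overload above.
    static_assert(std::is_signed<decltype(4294967295)>::value,
                  "decimal literal stayed signed");
}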

Bathsheba