14

I'm using a well known template to allow binary constants

template< unsigned long long N >
struct binary
{
  enum { value = (N % 10) + 2 * binary< N / 10 > :: value } ;
};

template<>
struct binary< 0 >
{
  enum { value = 0 } ;
};

So you can do something like binary<101011011>::value. Unfortunately this has a limit of 20 digits for an unsigned long long.
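For concreteness, here is the template restated self-contained with a compile-time check (the expected value 347 is my own arithmetic; `static_assert` is C++11, not part of the original question):

```cpp
template< unsigned long long N >
struct binary
{
  // Peel off the last decimal digit (which must be 0 or 1) and recurse.
  enum { value = (N % 10) + 2 * binary< N / 10 >::value };
};

template<>
struct binary< 0 >
{
  enum { value = 0 };
};

// 101011011 in binary is 256 + 64 + 16 + 8 + 2 + 1 = 347.
static_assert(binary<101011011>::value == 347, "binary template check");
```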

Does anyone have a better solution?

Evan Teran
Unknown
  • The limit of 20 is probably compiler dependent. It depends on how much template recursion it will tolerate. Some modern compilers will allow you to pass in an argument to set the maximum template recursion depth. – Brian Neal Mar 31 '09 at 02:23
  • I thought the limit was due to the number of decimal digits you could store in an unsigned long long, since it's basically taking the *decimal* number 101011011 and turning that into binary, yes? – paxdiablo Mar 31 '09 at 02:27
  • Pax: yes, at least for GCC which I am using. – Unknown Mar 31 '09 at 02:31
  • There is a related question here: http://stackoverflow.com/questions/537303/binary-literals – Frank Mar 31 '09 at 19:29
  • Possible duplicate of [Binary literals?](https://stackoverflow.com/questions/537303/binary-literals) – M.J. Rayburn Nov 10 '18 at 01:46
  • @Frank I think it is the same question. – M.J. Rayburn Nov 10 '18 at 01:55
  • A better solution for what? It is not clear what you want to do or what the problem is. Do you want to declare in decimal? Do you want to use a `uint128_t`? Something else? – jww Nov 10 '18 at 01:58
  • If you don't want to / can't use GCC and/or C++14 as alluded to in various answers to this question and the one linked above, you could just use a string and make a function that parses it. – M.J. Rayburn Nov 10 '18 at 01:59
  • Oh wait never mind. You wanted the code to run at compile time just like literal evaluation so it wouldn't slow your program down, and that's why you used templates. Yeah I don't know. You clearly know more about what you would need to solve this than I do. I know next to nothing about the preprocessor. – M.J. Rayburn Nov 10 '18 at 02:17
  • @jww He's trying to initialize an integer using a binary representation instead of a decimal one, just like the built-in C++ features that allow you to initialize an integer in hexadecimal or octal: unsigned long long n = 0xFFFFFFFFFFFFFFFF and unsigned long long n = 01777777777777777777777. Apparently this has been a feature of the GCC compiler for a while and was recently added to the standard in C++14: unsigned long long n = 0b1111111111111111111111111111111111111111111111111111111111111111. ... – M.J. Rayburn Nov 10 '18 at 03:04
  • @jww ... The problem is that while he can write preprocessor code to parse binary, he's still stuck using a regular decimal literal, which is capped at 20 digits, to input the number in the first place. Thus he can only get 20 bits out of his makeshift binary-literal machine, which isn't enough to fill an int, much less a long long. Decimal literals are of course capped at 20 digits because that's how many digits it takes to represent the largest integer primitive, an unsigned long long, in decimal form. – M.J. Rayburn Nov 10 '18 at 03:04

7 Answers

25

Does this work if you have a leading zero on your binary value? A leading zero makes the constant octal rather than decimal.

Which leads to a way to squeeze a couple more digits out of this solution - always start your binary constant with a zero! Then replace the 10's in your template with 8's.
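A sketch of that tweak (my own code, not from the answer): write the literal with a leading zero so the compiler reads it as octal, and divide by 8 instead of 10 so each octal digit of a 0/1-only constant becomes one binary digit:

```cpp
template< unsigned long long N >
struct binary
{
  // N is an octal value whose digits are all 0 or 1;
  // each octal digit is one bit of the result.
  enum { value = (N % 8) + 2 * binary< N / 8 >::value };
};

template<>
struct binary< 0 >
{
  enum { value = 0 };
};

// 0101011011 is an octal literal; read as a bit pattern it is
// 101011011 in binary, i.e. 347.
static_assert(binary<0101011011>::value == 347, "octal-base binary check");
```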

Mark Ransom
5

The approaches I've always used, though not as elegant as yours:

1/ Just use hex. After a while, you just get to know which hex digits represent which bit patterns.

2/ Use constants and OR or ADD them. For example (may need qualifiers on the bit patterns to make them unsigned or long):

#define b0  0x00000001
#define b1  0x00000002
/* ... */
#define b31 0x80000000

unsigned long x = b2 | b7;

3/ If performance isn't critical and readability is important, you can just do it at runtime with a function such as "x = fromBin("101011011");".
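A possible fromBin along those lines (only the name appears above; the signature and implementation are my guess):

```cpp
#include <string>

// Parse a string of '0'/'1' characters into an integer at runtime.
unsigned long fromBin(const std::string& s)
{
    unsigned long result = 0;
    for (std::string::size_type i = 0; i < s.size(); ++i)
        result = (result << 1) | (s[i] == '1' ? 1u : 0u);  // shift in each bit
    return result;
}
```

So x = fromBin("101011011"); yields 347 at runtime.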

4/ As a sneaky solution, you could write a pre-pre-processor that goes through your *.cppme files and creates the *.cpp ones by replacing all "0b101011011"-type strings with their equivalent "0x15b" strings. I wouldn't do this lightly since there's all sorts of tricky combinations of syntax you may have to worry about. But it would allow you to write your string as you want to without having to worry about the vagaries of the compiler, and you could limit the syntax trickiness by careful coding.

Of course, the next step after that would be patching GCC to recognize "0b" constants, but that may be overkill :-)

paxdiablo
  • funny you mentioned the last part. I was also using bitset<>(string(str)).to_ulong() – Unknown Mar 31 '09 at 02:33
  • I wonder what the situation is that makes using the 'binary template' better than just straightforward hex constants or 'or-ing' together enums with proper names for the bits if you're modeling hardware or communications protocols? – Michael Burr Mar 31 '09 at 06:07
  • Actually GCC supports 0b constants. – Prof. Falken Feb 29 '12 at 07:29
  • Right, it's always better to use _slightly_ less "elegant" code, but get rid of rubbish boilerplate. So if C++ isn't your friend (no native 0b01), it's better to stay with clear understandable code. – Yury Jun 27 '12 at 09:18
4

C++0x has user-defined literals, which could be used to implement what you're talking about.

Otherwise, I don't know how to improve this template.
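One way a C++11 raw literal operator template could express this (the `_b` suffix name and the implementation are my own sketch, not part of the answer):

```cpp
template <char... Digits>
struct bin_value;

template <char D, char... Rest>
struct bin_value<D, Rest...>
{
    static_assert(D == '0' || D == '1', "binary digits only");
    // The first digit contributes its value shifted left by the
    // number of remaining digits.
    static constexpr unsigned long long value =
        (static_cast<unsigned long long>(D - '0') << sizeof...(Rest))
        | bin_value<Rest...>::value;
};

template <>
struct bin_value<>
{
    static constexpr unsigned long long value = 0;
};

// Raw literal operator template: the digits arrive as a character pack,
// so no decimal literal (and no 20-digit cap) is involved.
template <char... Digits>
constexpr unsigned long long operator"" _b()
{
    return bin_value<Digits...>::value;
}

static_assert(101011011_b == 347, "101011011 in binary is 347");
```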

rmmh
4
template<unsigned int p,unsigned int i> struct BinaryDigit 
{
  enum  { value = p*2+i };
  typedef BinaryDigit<value,0> O;
  typedef BinaryDigit<value,1> I;
};
struct Bin
{
  typedef BinaryDigit<0,0> O;
  typedef BinaryDigit<0,1> I;
};

Allowing:

Bin::O::I::I::O::O::value

Much more verbose, but no limits (until you hit the size of an unsigned int, of course).
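Restating the answer's types for a self-contained compile-time check (the expected value 12, i.e. 01100 in binary, is my own arithmetic):

```cpp
template<unsigned int p, unsigned int i> struct BinaryDigit
{
  enum { value = p * 2 + i };
  typedef BinaryDigit<value, 0> O;  // append a 0 bit
  typedef BinaryDigit<value, 1> I;  // append a 1 bit
};
struct Bin
{
  typedef BinaryDigit<0, 0> O;
  typedef BinaryDigit<0, 1> I;
};

// Each ::O or ::I doubles the accumulated value and appends a bit,
// so O::I::I::O::O spells 01100 in binary, which is 12.
static_assert(Bin::O::I::I::O::O::value == 12, "01100 in binary is 12");
```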

3

Technically it is neither C nor C++; it is a GCC-specific extension, but GCC allows binary constants, as seen here:

 The following statements are identical:

 i =       42;
 i =     0x2a;
 i =      052;
 i = 0b101010;

Hope that helps. Some Intel compilers, and I am sure others, implement some of the GNU extensions. Maybe you are lucky.

Prof. Falken
3

You can add more non-type template parameters to "simulate" additional bits:

// Utility metafunction used by top_bit<N>.
template <unsigned long long N1, unsigned long long N2>
struct compare {
    enum { value = N1 > N2 ? N1 >> 1 : compare<N1 << 1, N2>::value };
};

// This is hit when N1 grows beyond the size representable
// in an unsigned long long.  Its value is never actually used.
template<unsigned long long N2>
struct compare<0, N2> {
    enum { value = 42 };
};

// Determine the highest 1-bit in an integer.  Returns 0 for N == 0.
template <unsigned long long N>
struct top_bit {
    enum { value = compare<1, N>::value };
};

template <unsigned long long N1, unsigned long long N2 = 0>
struct binary {
    enum {
        value =
            (top_bit<binary<N2>::value>::value << 1) * binary<N1>::value +
            binary<N2>::value
    };
};

template <unsigned long long N1>
struct binary<N1, 0> {
    enum { value = (N1 % 10) + 2 * binary<N1 / 10>::value };
};

template <>
struct binary<0> {
    enum { value = 0 } ;
};

You can use this as before, e.g.:

binary<1001101>::value

But you can also use the following equivalent forms:

binary<100,1101>::value
binary<1001,101>::value
binary<100110,1>::value

Basically, the extra parameter gives you another 20 bits to play with. You could add even more parameters if necessary.

Because the place value of the second number is used to figure out how far to the left the first number needs to be shifted, the second number must begin with a 1. (This is required anyway, since starting it with a 0 would cause the number to be interpreted as an octal number.)

j_random_hacker
2

A simple #define works very well:

#define HEX__(n) 0x##n##LU

#define B8__(x) ((x&0x0000000FLU)?1:0)\
               +((x&0x000000F0LU)?2:0)\
               +((x&0x00000F00LU)?4:0)\
               +((x&0x0000F000LU)?8:0)\
               +((x&0x000F0000LU)?16:0)\
               +((x&0x00F00000LU)?32:0)\
               +((x&0x0F000000LU)?64:0)\
               +((x&0xF0000000LU)?128:0)

#define B8(d) ((unsigned char)B8__(HEX__(d)))
#define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8) + B8(dlsb))
#define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24) + ((unsigned long)B8(db2)<<16) + ((unsigned long)B8(db3)<<8) + B8(dlsb))

B8(011100111)
B16(10011011,10011011)
B32(10011011,10011011,10011011,10011011)

Not my invention; I saw it on a forum a long time ago.
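Restating the 8-bit macros from the answer for a self-contained sanity check (the expected values below are my own arithmetic): each macro argument is re-read as a hex constant whose nibbles are all 0 or 1, and each nonzero nibble contributes one bit.

```cpp
#define HEX__(n) 0x##n##LU

// Test each nibble of the hex reinterpretation; a nonzero nibble
// means the corresponding binary digit was 1.
#define B8__(x) ((x&0x0000000FLU)?1:0)\
               +((x&0x000000F0LU)?2:0)\
               +((x&0x00000F00LU)?4:0)\
               +((x&0x0000F000LU)?8:0)\
               +((x&0x000F0000LU)?16:0)\
               +((x&0x00F00000LU)?32:0)\
               +((x&0x0F000000LU)?64:0)\
               +((x&0xF0000000LU)?128:0)

#define B8(d) ((unsigned char)B8__(HEX__(d)))

// B8(10011011) evaluates to 128 + 16 + 8 + 2 + 1 = 155.
```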