Can anyone please let me know what exactly 1LL and -1LL are? Here is the line in which they have been used:
#define All_In(G) (((G) == 64) ? (W64)(-1LL) : ((1LL << (G))-1LL))
The purpose of the macro appears to be to produce an integer with the G least significant bits set: All_In(1) == 1, All_In(2) == 3, All_In(3) == 7, and so on.
In pseudo-code, the macro is saying this:
if G == 64
produce -1
else
produce (1 bitshifted left by G) - 1
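As a quick check, here is a minimal, self-contained sketch exercising the macro. W64 is not defined in the snippet, so it is assumed here to be a typedef for a 64-bit unsigned integer:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t W64;  /* assumption: W64 is a 64-bit unsigned type */

#define All_In(G) (((G) == 64) ? (W64)(-1LL) : ((1LL << (G))-1LL))

int main(void)
{
    printf("%llu\n", (unsigned long long)All_In(1)); /* prints 1 */
    printf("%llu\n", (unsigned long long)All_In(2)); /* prints 3 */
    printf("%llu\n", (unsigned long long)All_In(3)); /* prints 7 */
    return 0;
}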
LL stands for long long, an integer type that is at least 64 bits wide.
The LL suffix on a constant literal means that the literal's type is to be interpreted as long long (signed). To answer exactly the question title: 1LL is a constant literal whose value is 1 and whose type is long long. Similarly, -1LL is -1 with the type long long.

Take care when passing 1LL to functions that accept integer values of smaller types such as int or short: the value is implicitly converted and may be truncated. Probably, where the All_In macro is used, the expected parameter type is long long.
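A small sketch illustrating the type of the literal; the sizes printed assume a typical platform where int is 32 bits and long long is 64 bits:

#include <stdio.h>

int main(void)
{
    printf("sizeof(1)   = %zu\n", sizeof(1));   /* typically 4: plain int */
    printf("sizeof(1LL) = %zu\n", sizeof(1LL)); /* at least 8: long long */

    short s = 1LL; /* legal, but the value is implicitly converted down */
    printf("s = %hd\n", s);
    return 0;
}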
All_In(1) == 0b00000000...00000001LL
All_In(2) == 0b00000000...00000011LL
All_In(3) == 0b00000000...00000111LL
All_In(4) == 0b00000000...00001111LL
All_In(5) == 0b00000000...00011111LL
All_In(6) == 0b00000000...00111111LL
All_In(7) == 0b00000000...01111111LL
All_In(8) == 0b00000000...11111111LL
...
All_In(57) == 0b00000001...11111111LL
All_In(58) == 0b00000011...11111111LL
All_In(59) == 0b00000111...11111111LL
All_In(60) == 0b00001111...11111111LL
All_In(61) == 0b00011111...11111111LL
All_In(62) == 0b00111111...11111111LL
All_In(63) == 0b01111111...11111111LL
All_In(64) == 0b11111111...11111111LL
The macro works this way (a short C sketch follows the list):

1. First it checks whether (G) == 64 (see the ternary operator ?:). If so, it yields a number whose binary representation is 64 ones: (W64)(-1LL), where the cast to W64 indicates that the width of the number is 64 bits. For this you need to know that for signed integers in two's complement representation, -1 means all bits set to 1. For example, in the case of a signed char the values range from -128 to 127, and the binary representation of -1 is 11111111. Extend this to 64-bit length. (This special case is needed because shifting a 64-bit value by 64 bits is undefined behavior in C, so the second branch could not be used for G == 64.)

2. Otherwise it yields (1LL << (G)) - 1LL. This goes the following way; let's do it for input 3 with a bit length of 8.

a. First it shifts the 1 left by 3:
0b00000001 (2^0)
0b00000010 (2^1 after shifting by 1)
0b00000100 (2^2 after shifting by 1 again)
0b00001000 (2^3 after shifting by 1 again, 3 times all together)
b. Then it subtracts 1 from that number. This results in the number we wanted: 2^n - 1 always consists of a run of ones with no zero in between. So 2^3 - 1 = 7, whose representation is:
0b00000111
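A minimal sketch of both steps in plain C (hex output is used instead of the non-portable 0b notation):

#include <stdio.h>

int main(void)
{
    /* Step 1: in two's complement, -1 has all 64 bits set. */
    printf("0x%llX\n", (unsigned long long)(-1LL)); /* 0xFFFFFFFFFFFFFFFF */

    /* Step 2: shift 1 left by 3, then subtract 1: 2^3 - 1 = 7. */
    printf("%lld\n", (1LL << 3) - 1LL); /* prints 7 */
    return 0;
}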
Probably this can be used to mask some flags; a sketch of that use follows the note below.
Note that the 0b prefix for representing binary literals in my examples does not work with all C compilers.
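For example, a sketch of the masking use mentioned above; the variable names are made up for illustration:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t W64;  /* assumption: a 64-bit unsigned type */

#define All_In(G) (((G) == 64) ? (W64)(-1LL) : ((1LL << (G))-1LL))

int main(void)
{
    W64 flags = 0x12345678;         /* hypothetical flag word */
    W64 low12 = flags & All_In(12); /* keep only the 12 lowest bits */
    printf("0x%llX\n", (unsigned long long)low12); /* prints 0x678 */
    return 0;
}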
The line of code is a macro that, given a positive integer less than or equal to 64, produces a long long with that many 1s in its binary expansion. All_In(G) equals 2^G - 1. So,
All_In(1) == 0x0000000000000001LL
All_In(2) == 0x0000000000000003LL
All_In(3) == 0x0000000000000007LL
All_In(4) == 0x000000000000000FLL
All_In(5) == 0x000000000000001FLL
...
All_In(64) == 0xFFFFFFFFFFFFFFFFLL
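To tie the answers together, here is a sketch that prints the whole table; as above, W64 is assumed to be a 64-bit unsigned typedef, since its definition is not shown in the question:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t W64;  /* assumption: a 64-bit unsigned type */

#define All_In(G) (((G) == 64) ? (W64)(-1LL) : ((1LL << (G))-1LL))

int main(void)
{
    /* The G == 64 branch exists because shifting a 64-bit value by 64
       is undefined behavior in C. (Strictly, 1LL << 63 also overflows
       a signed long long; a fully portable variant would shift 1ULL.) */
    for (int g = 1; g <= 64; g++)
        printf("All_In(%2d) == 0x%016llX\n", g, (unsigned long long)All_In(g));
    return 0;
}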