
I don't ask a lot of questions here, so forgive me if my "question asking skills" are a bit rusty. Here goes:


I've looked around, and can't seem to find a solution to this.

The idea is that the program prints out a table of the decimal (signed and unsigned), hexadecimal, and binary representations of each possible value of a number with an arbitrary bit size. It accepts the bit size from argv, and an example output should be (focusing on the binary representations):

$ ./test 2
00
01
10
11
$ ./test 4
0000
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
1011
1100
1101
1110
1111

This works fine if I use a preprocessor macro, but that means I have to re-compile every time I want to change the bit size. Reading the size from argv would solve my problem, were it not for one small caveat: the compiler doesn't like what I'm trying to do, even though it seems perfectly logical to me.

What I'm trying to do is this:

int bit_size = atoi(argv[1]);
struct { unsigned int a : bit_size; } test;

But the compiler gives me an error, quite sternly telling me I can't do that ("bitfield not an integer constant").

Okay...so I try using a const int instead:

const int bit_size = atoi(argv[1]);
struct { unsigned int a : bit_size; } test;

I get exactly the same error.

Just to note: what GCC wants me to do is this:

struct { unsigned int a : 8; } test;

And it works without any problem. But I need to be able to vary that.

I am perplexed.

Note that I do not want or need the bitfield's width to be able to change mid-program, which is what I'm assuming GCC's trying to prevent me from doing. This is effectively a one-shot operation.

Also note that I'm not trying to do this on a normal variable like this question (and many others like it).

This question also has nothing to do with what I'm trying to accomplish.


Also, if it helps, this is an example of what displays when it runs without error (bitfield width = 4):

$ ./test 4
$ cat table.md
Table for a/an 4-bit integer: 

| Unsigned | Signed | Hex | Binary | Sign Bit |
|:--------:|:------:|:---:|:------:|:--------:|
|    0     |   0    |  0  |  0000  |    0     |
|    1     |   1    |  1  |  0001  |    0     |
|    2     |   2    |  2  |  0010  |    0     |
|    3     |   3    |  3  |  0011  |    0     |
|    4     |   4    |  4  |  0100  |    0     |
|    5     |   5    |  5  |  0101  |    0     |
|    6     |   6    |  6  |  0110  |    0     |
|    7     |   7    |  7  |  0111  |    0     |
|    8     |   -8   |  8  |  1000  |    1     |
|    9     |   -7   |  9  |  1001  |    1     |
|    10    |   -6   |  A  |  1010  |    1     |
|    11    |   -5   |  B  |  1011  |    1     |
|    12    |   -4   |  C  |  1100  |    1     |
|    13    |   -3   |  D  |  1101  |    1     |
|    14    |   -2   |  E  |  1110  |    1     |
|    15    |   -1   |  F  |  1111  |    1     |
Braden Best
  • You want a bit-field with a variable width but C does not have this. Oh and a `const` qualified object is not a constant in C, it is simply a read-only object. – ouah Apr 18 '14 at 19:12
  • Just use a full unsigned int and limit your code to a specific width. – this Apr 18 '14 at 19:13
  • @ouah I wasn't aware that there was a distinction between "constant" and "read-only" – Braden Best Apr 18 '14 at 19:18
  • "constant expression" has a special meaning in C, you can only get it with defines, enums, and things which are defined at compile-time. – Étienne Apr 18 '14 at 19:19
  • Even bit_size = atoi(argv[1]); is a runtime decision; the compiler doesn't know argv[1] at compile time, so it is not possible in C. The simplest way is to use a full int and mask it using argv[1]. –  Apr 18 '14 at 19:20
  • aha, so basically, bitfields are not the solution. The only reason I _was_ using them was for the signed ints, but I just noticed a logical pattern in the signed negatives (sign bit - the rest of the number = absolute value of the negative number). I might post my own answer in a bit. – Braden Best Apr 18 '14 at 19:33
  • I figured out how to do it. 1010 = 10 or -6. 1000 - 0010 = 8 - 2 = 6. What I ended up doing was shifting the signbit left, and subtracting it from the number -> 01010 - 10000 = 10 - 16 = -6. So I solved it on accident. Going to run more tests in case this is a fluke with 4-bit numbers. – Braden Best Apr 18 '14 at 19:59
  • No you're right the equality is always true, check wikipedia http://en.wikipedia.org/wiki/Two%27s_complement chapter "subtraction from 2^N" – Étienne Apr 18 '14 at 20:25
  • @Étienne Ah, so _that's_ what a two's complement is. – Braden Best Apr 18 '14 at 20:34

3 Answers


You cannot define the size of a bit-field at run time in C. But you don't need a bit-field to print binary values; simply write a function that prints a number in binary format, like the one here: Is there a printf converter to print in binary format?

Then write a simple loop to print your numbers:

// n = number of values to print, i.e. 1 << bit_size
for (int i = 0; i < n; i++) {
    print_binary(i);
}

Edit: To answer your question about printing negative numbers in two's complement at widths that have no native C type (unlike int8_t, int16_t, int32_t, ...): as you found out, for a signed word of N bits in two's complement, any bit pattern whose sign bit is set represents the signed value

Xsigned = Xunsigned - 2^N

// n = bit width, read earlier from argv
for (uint32_t Xpos = 0; Xpos < (1u << n); Xpos++) {
    if (Xpos >= (1u << (n - 1)))
        printf("%d\n", (int)Xpos - (1 << n));
    else
        printf("%u\n", Xpos);
}
Étienne
  • Note: The problem isn't the binary numbers, per se. The real problem that makes the bitfields necessary in my eyes, is that I'm not sure about how I would go about printing the signed ints (see the example run I added to the question). For instance, how would I tell it that 8 is -8 and 9 is -7? I suppose I could arrive there mathematically. (e.g. 1010 => 1000 - 0010 => 8 - 2 => 6 => -6) – Braden Best Apr 18 '14 at 19:24
  • Anyhow, thanks for the direct answer, and now this question is solved for future people with the same problem! – Braden Best Apr 18 '14 at 19:28
  • "how would I tell it that 8 is -8 and 9 is -7?" you can't do that without context anyway. – Meirion Hughes Apr 18 '14 at 20:38

So I figured it out.

As stated in other answers, GCC simply will not let you set the width of a bitfield at run time, because the width has to be known at compile time.

So I started thinking about why I needed the bitfield in the first place, and that was so that I could display the negative numbers.

I then studied a couple signed ints to see if I could find a mathematical pattern.

I found such a pattern and decided the bitfield was no longer necessary.

1010 = 0 - (sign_bit (1000) - raw_number (0010)) = -6

Though I ended up accidentally stumbling upon another algorithm, while debugging the original, that works just as well:

number - ((number << 1) & (1 << num_bits))

And my notes so I could understand it and make sure it wasn't a fluke:

For a negative number:

n = 10 (binary 1010)
num_bits = 4
power = (1010 << 1 => 10100) & (1 << num_bits => 10000)
  10100
& 10000
  10000
return n (10) - power (16) => -6

For a positive number:

n = 6 (binary 0110)
num_bits = 4
power = (0110 << 1 => 01100) & (1 << num_bits => 10000)
  01100
& 10000
  00000
return n (6) - power (0) => 6

And the final function:

signed int get_signed(long long int n, int num_bits){
  int power = (n << 1) & (1 << num_bits);
  return n - power;
}

I tested it with num_bits set to 1, 2, 4, 8, and 16, and lo and behold, it works flawlessly.


Edit:

I just got my original algorithm working:

signed int get_tc(long long int n, int num_bits){
  int sign_bit = n >> (num_bits - 1) & 1;
  sign_bit <<= (num_bits - 1);
  int raw = n ^ sign_bit;
  return -1 * (sign_bit - raw);
}

And the logic:

n = 10
num_bits = 4
sign_bit = 10 >> (4-1) & 1
  1010
>>3
  0001
& 0001
  0001
sign_bit <<= (4-1)
  0001
<<3
  1000
sign_bit = 8
raw = 10 ^ 8
  1010
^ 1000
  0010
raw = 2
return -1 * (8 - 2)
  8 - 2 = 6
  6 * -1 = -6
return = -6

n = 6
num_bits = 4
sign_bit = 6 >> (4-1) & 1
  0110
>>3
  0000
& 0001
  0000
sign_bit <<= (4-1)
  0000
<<3
  0000
sign_bit = 0
raw = n ^ sign_bit
  0110
^ 0000
  0110
raw = 6
return -1 * (0 - 6)
  0 - 6 = -6
  -6 * -1 = 6
Braden Best

I know you've figured it out, but here was my solution to print this table, especially since I prefer using simpler exclusive-or operations. It relies on taking a negative number's representation, like 0b1001 being -7 in the 4-bit case, sign-extending it into a signed long, and then printing it.

Start with two things, your number (say, -7 in 4 bits, 0b1001), and a mask containing the number of bits in question (say, 4) set to 1 (0b1111) xor'ed with -1 in the signed representation of your choosing. So, for a 32-bit signed number, the mask is:

MASK: 0b1111 ^ 0b11111111111111111111111111111111 = 0b11111111111111111111111111110000

When you xor this mask with your number, you end up with a sign-extended version of that number in the resolution you chose to use.

0b1001 ^ 0b11111111111111111111111111110000 = 0b11111111111111111111111111111001

Which is a -7 in a signed 32-bit representation and is easy to print. Sign extension is a simple concept.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void makeStrbits(long n,long b,char *strbits)
{
   long i=0, m=1L<<(b-1);
   while(m){
      strbits[i++]= (n&m)?'1':'0';
      m=m>>1;
   }
}

int main(int argc, char *argv[])
{
   long i,bit=4,mask=0;
   char *strbits = NULL;
   if (argc==2) bit = strtol( argv[1],NULL,10 );

   strbits = calloc(1,bit+1);

   for(i=0;i<bit;i++) mask |= (1L<<i);
   mask ^= -1L;

   printf("Table for a %ld bit integer\n", bit);
   printf("unsgn\t sign\t hex\t sbit\t bits\n" );
   for(i=0; i<(1L<<bit); i++) {
      int sign = ((i&(1L<<(bit-1))) != 0);
      makeStrbits(i,bit,strbits);
      printf("%lu\t %ld\t %lx\t %1d\t %s\n", i, sign?i^mask:i, i, sign, strbits );
   }
   free(strbits);
   return 0;
}
JohnH
  • Very interesting read, but the algorithm _does_ need to work with any number, not just 0..15. That's why I tried to implement the algorithm by getting the sign bit, followed by getting the rest of the number, but without the sign bit, then subtracting the "raw" number from the sign bit's value, then multiply by -1. `1010 => -1 * (1000 - 0010) = -6`, `0101 => -1 * (0000 - 0101) = +5`. Though the algorithm I accidentally found when trying to work out how to get the "raw number" works just as well. – Braden Best Apr 19 '14 at 02:57
  • It does work with any number ( up to the num of bits in a long, but it would be trivial to use long long to extend it). My program above takes the number of bits you want on the command line as the first arg, but defaults to 4 if you don't provide an arg. – JohnH Apr 19 '14 at 04:29