I'm pretty confused by pointers in general. I thought printing through a pointer to an array would just print the array's values, but that's not what happens here. Could someone explain what is going on?

char *showBits(int dec, char *buf) {
    char array[33];
    buf=array;
    unsigned int mask=1u<<31;
    int count=0;
    while (mask>0) {
        if ((dec & mask) == 0) {
            array[count]='0';
        }
        else {
            array[count]='1';
        }
        count++;
        mask=mask>>1;
    }
    return buf;
}

I expected it to return a binary representation of dec, but printing the result produces random garbage.

3 Answers

You have

char *showBits(int dec, char *buf);

and the function is expected "to return a binary representation of dec".

Assuming int is 32 bits, do

#define INT_BITS (32) // to avoid all those magic numbers: 32, 32-1, 32+1

Assuming further that the function is called like this:

int main(void)
{
  int i = 42;  
  char buf[INT_BITS + 1]; // + 1 to be able to store the C-string's '\0'-terminator.

  printf("%d = 0b%s\n", i, showBits(i, buf));
}

You could change your code as follows:

char *showBits(int dec, char *buf) {
  // char array[INT_BITS + 1]; // drop this, not needed as buf provides all we need
  // buf=array; // drop this; see above

  unsigned int mask = (1u << (INT_BITS - 1));
  size_t count = 0; // size_t is typically used to type indexes

  while (mask > 0) {
    if ((dec & mask) == 0) {
      buf[count] = '0'; // operate on the buffer provided by the caller. 
    } else {
      buf[count] = '1'; // operate on the buffer provided by the caller. 
    }

    count++;

    mask >>= 1; // same as: mask = mask >> 1;
  }

  buf[INT_BITS] = '\0'; // '\0'-terminate the char array to make it a C-string.

  return buf;
}

Alternatively the function can be used like this:

int main(void)
{
  ...

  showBits(i, buf);
  printf("%d = 0b%s\n", i, buf);
}

The result printed should look like this in both cases:

42 = 0b00000000000000000000000000101010
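
For reference, here are the pieces above combined into a single compilable program (same assumption of a 32-bit int; only the #include line is added):

#include <stdio.h>

#define INT_BITS (32)

char *showBits(int dec, char *buf) {
  unsigned int mask = (1u << (INT_BITS - 1));
  size_t count = 0;

  while (mask > 0) {
    if ((dec & mask) == 0) {
      buf[count] = '0';
    } else {
      buf[count] = '1';
    }

    count++;
    mask >>= 1;
  }

  buf[INT_BITS] = '\0'; // '\0'-terminate the char array to make it a C-string.

  return buf;
}

int main(void)
{
  int i = 42;
  char buf[INT_BITS + 1]; // + 1 for the terminator

  printf("%d = 0b%s\n", i, showBits(i, buf));
}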
alk

The problem is that you're returning a pointer to a local array, which stops existing as soon as the function returns. Instead, let the caller allocate the buffer. I've also fixed some other problems in the code:

#include <limits.h> // for CHAR_BIT and UINT_MAX

#define MAX_BUFFER_LENGTH (sizeof(unsigned int) * CHAR_BIT + 1)

char *to_bit_string(unsigned int n, char *buf) {
    unsigned int mask = UINT_MAX - (UINT_MAX >> 1);
    char *tmp;

    for (tmp = buf; mask; mask >>= 1, tmp++) {
        *tmp = n & mask ? '1': '0';
    }

    *tmp = 0;
    return buf;
}

A few notes on the changes:

1. We use unsigned int instead of signed int, because a signed int would be converted to unsigned anyway when combined with the unsigned mask.
2. unsigned int can have a varying number of bits, so sizeof(unsigned int) * CHAR_BIT gives the number of bits, plus 1 for the terminating null character.
3. UINT_MAX - (UINT_MAX >> 1) is a handy way to get a value with only the most significant bit set, no matter how many value bits the type has (see the quick check below).
4. Instead of indices, we use a moving pointer.
5. We remember to null-terminate the string.
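
As a quick check of that mask expression (a small sketch, not part of the original answer): on an implementation where unsigned int has 32 value bits, this prints 80000000, i.e. only the top bit is set.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int msb = UINT_MAX - (UINT_MAX >> 1); // leaves only the most significant bit set
    printf("%x\n", msb); // prints 80000000 when unsigned int has 32 bits
    return 0;
}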

Usage:

char the_bits[MAX_BUFFER_LENGTH];
puts(to_bit_string(0xDEADBEEF, the_bits));

Output

11011110101011011011111011101111

A slightly modified version of the code - the caller should provide a buffer large enough to hold the string:

char *showBits(unsigned int dec, char *buf) {
    unsigned int mask = 1u << 31;
    int count = 0;
    while (mask>0) {
        if ((dec & mask) == 0) {
            buf[count] = '0';
        }
        else {
            buf[count] = '1';
        }
        count++;
        mask = mask >> 1;
    }
    buf[count] = '\0';
    return buf;
}
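
A possible caller (a sketch, not shown in the answer; it assumes the showBits above is in the same file): the buffer has to hold the 32 digits plus the terminating '\0', so at least 33 chars.

#include <stdio.h>

int main(void)
{
    char buf[33]; // 32 bits + '\0'

    printf("%s\n", showBits(42u, buf)); // prints 00000000000000000000000000101010
    return 0;
}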
0___________