
So I have a signed decimal number that fits in 16 bits, as a char*. I want a char* of that number in two's complement binary. That is, I want to go from "-42" to "1111111111010110" (note all 16 bits are shown) in C. Is there a quick and dirty way to do this? Some library function, perhaps? Or do I have to crank out a large-ish function myself to do this?

I'm aware that strtol() may be of some use.


1 Answer


There isn't a standard library function that can generate binary strings as you describe.

However, it is not particularly difficult to do.

#include <stdio.h>
#include <stdint.h>
#include <ctype.h>

int main(int argc, char ** argv)
{
    while(--argc > 0 && *++argv){
        char const * input = *argv;

        uint32_t value = 0;
        char     negative = 0;
        char     bad = 0;
        for(; *input; input++){
            if (isdigit((unsigned char)*input))   // cast avoids UB for negative char values
                value = value * 10 + (*input - '0');
            else if (*input == '-' && value == 0 && !negative)
                negative = 1;
            else {
                printf("Error: unexpected character: %c at %d\n", *input, (int)(input - *argv));
                bad = 1;   // this function doesn't handle floats, or hex
                break;
            }
        }
        if (bad)
            continue;      // skip the rest of this argument

        if (value > 0x7fff + negative){
            printf("Error: value too large for 16bit integer: %d %x\n", value, value);
            continue;   // can't be represented in 16 bits
        }

        int16_t result = value;
        if (negative)
            result = -value;

        // print bits from MSB (bit 15) down to LSB (bit 0)
        for (int i = 1; i <= 16; i++)
            printf("%d", 0 != (result & 1 << (16 - i)));

        printf("\n");
    }
}

That program handles all valid 16-bit values and leverages the fact that the architecture stores integers as two's complement values. I'm not aware of an architecture that doesn't, so it's a fairly reasonable assumption.

Note that in two's complement, INT16_MIN != -1 * INT16_MAX; the negative range extends one value further than the positive range. This is handled by adding the negative flag to the validity check before the conversion from unsigned 32-bit to signed 16-bit.
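To illustrate that asymmetry, here is a small standalone check (not part of the program above) using the limits from <stdint.h>:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // The 16-bit two's complement range is asymmetric:
    // the most negative value has no 16-bit positive counterpart.
    printf("INT16_MIN = %d\n", INT16_MIN);   // -32768 == -(0x7fff + 1)
    printf("INT16_MAX = %d\n", INT16_MAX);   //  32767 ==   0x7fff
}

With the 0x7fff + negative check in place, a sample run of the program looks like this: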

./foo 1 -1 2 -2 42 -42 32767 -32767 32768 -32768
0000000000000001
1111111111111111
0000000000000010
1111111111111110
0000000000101010
1111111111010110
0111111111111111
1000000000000001
Error: value too large for 16bit integer: 32768 8000
1000000000000000
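
Since the question mentions strtol(), here is a minimal alternative sketch along those lines (the helper name to_bin16 and the fixed 17-char buffer are my own choices, not a standard API). Casting the parsed value to uint16_t is defined modulo 2^16 by the C standard, so the resulting bit pattern is the 16-bit two's complement representation regardless of how the machine stores integers internally. Note that this sketch skips the range and error checking the program above performs.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

// Sketch only: writes the 16-bit two's complement pattern of the decimal
// string dec into out, which must have room for 17 chars (16 bits + NUL).
static void to_bin16(const char *dec, char out[17])
{
    long v = strtol(dec, NULL, 10);      // e.g. "-42" -> -42 (no range checking here)
    uint16_t bits = (uint16_t)v;         // wraps modulo 2^16: -42 -> 0xFFD6
    for (int i = 0; i < 16; i++)
        out[i] = (bits & (0x8000u >> i)) ? '1' : '0';
    out[16] = '\0';
}

int main(void)
{
    char buf[17];
    to_bin16("-42", buf);
    printf("%s\n", buf);                 // prints 1111111111010110
}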