4

I have a problem to solve and no idea how to approach it. My program receives a string containing a hex value (like DFF7DF) from the serial port. I need to convert it to binary form, discard the first four bits, take the fifth bit as the sign bit, and the next 12 bits as the value.

I need to get the value as a normal int.

I was able to write such a program in MATLAB, but I need C++ so that I can run it on my Linux ARM board.

Thanks in advance for your help! Marcin

gozwei
  • Have a look at this: http://stackoverflow.com/questions/483609/how-can-i-convert-hex-numbers-to-binary-in-c – razlebe Jul 08 '11 at 12:21
  • not an exact duplicate, just similar. –  Jul 08 '11 at 12:28
  • Hm, as a matter of getting-to-the-point, does the first sentence really need saying? Just being nitpicky :-) – Kerrek SB Jul 08 '11 at 12:50
  • `discard first four bits, take fifth bit as sign bit and next 12 bits as a value`: That adds up to 17 bits, but the example string DFF7DF is 24. Can you clarify with example values? Also is the "value" part two's comp, or sign and magnitude encoding? – Roddy Jul 08 '11 at 12:53
  • There is an open-source project I have written, http://bitn.sourceforge.net/, which may be useful for your needs. – Alina Danila Aug 01 '11 at 13:38

5 Answers

5

You could do something like:

#include <stdio.h>
#include <stdlib.h>

unsigned long value = strtoul("DFF7DF", NULL, 16);
value >>= 4; // discard first four bits
printf("Minus sign: %s\n", value & 1 ? "yes" : "no");
printf("Value: %lu\n", (value & 0x1FFF) >> 1);

long newvalue = (value & 1 ? -1 : 1) * (long)((value & 0x1FFF) >> 1);
Blagovest Buyukliev
  • You probably want the unsigned version of everything there if you do bit fiddling... and it should be `1FFFF`, non? But why was this downvoted? – Kerrek SB Jul 08 '11 at 12:43
  • @Kerrek: it's `1FFF` because `value` is already shifted by 4 bits. – Blagovest Buyukliev Jul 08 '11 at 12:46
  • @Blag: Yes, 4 bits, not 8 :-) The original value is 24 bits, you want 20. (printf needs `%lu`, too.) – Kerrek SB Jul 08 '11 at 12:48
  • @Kerrek: thanks for noticing the wrong printf specifier. Regarding the mask, I still don't get it: the binary representation of `1FFF` is `0001 1111 1111 1111` - that's the first thirteen bits set. The OP says he needs the twelve bits after the sign. Would you illustrate your idea in more detail? – Blagovest Buyukliev Jul 08 '11 at 12:55
  • @Blag: Hm, maybe the OP isn't telling us everything. From her description, she needs to process 17 bits, but `DFF7DF` is clearly a 24-bit value. Really you'll need to shift out the _bottom_ 7 bits first and then mask out the remaining 5 bits from the top, I guess. – Kerrek SB Jul 08 '11 at 13:14
  • `value = (value >> 7) & 0xFFF` seems to extract the 12-bit value that the OP describes (see the sketch below). – Kerrek SB Jul 08 '11 at 13:15
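
Putting that thread together: under the 24-bit, count-from-the-most-significant-bit reading, a minimal standalone sketch (assuming sign-and-magnitude encoding and exactly six hex digits) might be:

#include <cstdio>
#include <cstdlib>

int main()
{
    // "DFF7DF" is 24 bits: discard the top 4, take the next bit (bit 19)
    // as the sign, and the following 12 bits (bits 18..7) as the magnitude.
    unsigned long raw = strtoul("DFF7DF", NULL, 16);

    unsigned long magnitude = (raw >> 7) & 0xFFF;
    long result = (raw & 0x80000) ? -(long)magnitude : (long)magnitude;

    printf("magnitude = %lu, result = %ld\n", magnitude, result);
    return 0;
}
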
2

The correct answer depends on a few conventions - is the hex string big-endian or little-endian? Do you start counting bits from the most significant or the least significant bit? Will there always be exactly 6 hex characters (24 bits)?

Anyway, here's one solution for big-endian, always 24 bits, counting from the most significant bit. I'm sure you'll be able to adapt it if some of my assumptions are wrong.

int HexToInt(char *hex)
{
    int result = 0;
    for (; *hex; hex++)
    {
        result <<= 4;
        if (*hex >= '0' && *hex <= '9')
            result |= *hex - '0';
        else
            result |= *hex - 'A' + 10; // assumes uppercase A-F digits
    }
    return result;
}

char *data = GetDataFromSerialPortStream();
int rawValue = HexToInt(data);
int sign = rawValue & 0x10000;
int value = (sign ? -1 : 1) * ((rawValue >> 4) & 0xFFF);
Vilx-
  • I think it's safe to assume that when an int is converted to ASCII HEX values ("DFF7DF") that the first char is the most significant! One of the few advantages of character representation is that endianness ceases to be an issue. – Roddy Jul 08 '11 at 12:46
1

The question is tagged C++ but everyone is using C strings. Here's how to do it with a C++ STL string:

std::string s("DFF7DF");  

int val;
std::istringstream iss(s);
iss >> std::setbase(16) >> val;

int result = val & 0xFFF;  // take bottom 12 bits

if (val & 0x1000)    // assume sign + magnitude encoding
  result = - result;

(The second "bit-fiddling" part isn't clear from your question. I'll update the answer if you clarify it.)
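
If those bits turn out to be two's complement rather than sign-and-magnitude (the encoding question raised in the comments under the question), a minimal sketch under that assumption might be:

#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string s("DFF7DF");

    int val = 0;
    std::istringstream iss(s);
    iss >> std::setbase(16) >> val;

    // Keep the low 13 bits and sign-extend them as a 13-bit two's complement value.
    int result = val & 0x1FFF;
    if (result & 0x1000)   // bit 12 set means negative
        result -= 0x2000;  // subtract 2^13 to sign-extend

    std::cout << result << '\n';
    return 0;
}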

Roddy
0

You have to check your machine type for endianness, but this is basically the idea.

const char *string = "DFF7DF";
const unsigned char second_nibble = hex_to_int(string[1]);
const unsigned char third_nibble  = hex_to_int(string[2]);
const unsigned char fourth_nibble = hex_to_int(string[3]);
const unsigned char fifth_nibble  = hex_to_int(string[4]);

int sign = second_nibble & (1 << 3) ? -1 : 1;

unsigned value = unsigned(second_nibble & ~(1 << 3)) << (12 - 3); // next three bits are in the second nibble
value |= (unsigned(third_nibble) << 5) | (unsigned(fourth_nibble) << 1) | (fifth_nibble >> 3); // remaining nine bits span the next three nibbles

Make sure you perform your bit-shift operations on unsigned types.
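
hex_to_int is not defined above; a minimal sketch of such a nibble converter (assuming ASCII input and accepting both upper- and lower-case digits) could be:

unsigned char hex_to_int(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return 0; // not a hex digit; real code should signal an error
}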

spraff
-1

Here's a pattern to follow:

const char* s = "11";
istringstream in(string(s, 3));
unsigned i=0;
in >> hex >> i;
cout << "i=" << dec << i << endl;

The rest is just bit-shifting.
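
For example, continuing the pattern with the bit layout from the question (a sketch assuming sign-and-magnitude encoding and a full six-digit string), the bit-shifting could look like:

#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    const char *s = "DFF7DF";
    istringstream in(string(s, 6));
    unsigned i = 0;
    in >> hex >> i;

    // Discard the top 4 of the 24 bits, take bit 19 as the sign,
    // and bits 18..7 as the 12-bit magnitude.
    unsigned magnitude = (i >> 7) & 0xFFF;
    int value = (i & 0x80000) ? -int(magnitude) : int(magnitude);

    cout << "value=" << dec << value << endl;
    return 0;
}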

John