
I have an 18-bit two's-complement integer that I'd like to convert to a signed number so I can better use it. On the platform I'm using, ints are 4 bytes (i.e. 32 bits). Based on this post:

Convert Raw 14 bit Two's Complement to Signed 16 bit Integer

I tried the following to convert the number:

using SomeType = uint64_t;
SomeType largeNum = 0x32020e6ed2006400;
int twosCompNum = (largeNum & 0x3FFFF);
int regularNum = (int) ((twosCompNum << 14) / 8192);

I shifted the number left 14 places to get the sign bit as the most significant bit and then divided by 8192 (in binary, it's 1 followed by 13 zeroes) to restore the magnitude (as mentioned in the post above). However, this doesn't seem to work for me. As an example, inputting 249344 gives me -25600, which prima facie doesn't seem correct. What am I doing wrong?

easythrees

3 Answers


The constant 8192 is wrong; it should be 16384 = (1 << 14).

int regularNum = (twosCompNum << 14) / (1<<14);

With this, the answer is correct, -12800.

It is correct because the unsigned input is 249344 (0x3CE00). Its highest (18th) bit is set, so it is a negative number. We can calculate its signed value by subtracting "max unsigned value + 1" from it: 0x3CE00 - 0x40000 = -12800.

Note that if you are on a platform where signed right shift does the right thing (i.e. an arithmetic shift, as on x86), then you can avoid the division:

int regularNum = (twosCompNum << 14) >> 14;

This version can be slightly faster (but has implementation-defined behavior) if the compiler doesn't notice that the division can be exactly replaced by a shift (clang 7 notices, but gcc 8 doesn't).

geza
  • *For negative `a`, the value of `a >> b` is implementation-defined*, so be careful to read the specification of every compiler you want to use that with. – Toby Speight Aug 17 '18 at 09:15

The almost-portable way (assuming negative integers are natively two's-complement) is to simply inspect bit 17, and use that to conditionally mask in the sign bits:

constexpr SomeType sign_bits = ~SomeType{} << 18;
int regularNum = twosCompNum & 1<<17 ? twosCompNum | sign_bits : twosCompNum;

Note that this doesn't depend on the size of your int type.

Toby Speight

Two problems. First, your test input is not a valid 18-bit two's-complement number. With n bits, two's complement permits -(2 ^ (n - 1)) <= value <= 2 ^ (n - 1) - 1. In the case of 18 bits, that's -131072 <= value <= 131071. You say you input 249344, which is outside this range and would actually be interpreted as -12800.

The second problem is that your powers of two are off. In the answer you cite, the solution offered is of the form

mBitOutput = (mBitCast)(nBitInput << (m - n)) / (1 << (m - n));

For your particular problem, you desire

int output = (nBitInput << (32 - 18)) / (1 << (32 - 18));
// or equivalent
int output = (nBitInput << 14) / 16384;

Try this out.

OrderNChaos
  • divide by 16384 won't work. You always need an arithmetic right shift to do sign extension, because division rounds toward zero. For example -5/2 = -2 but -5 >> 1 = -3 – phuclv Aug 17 '18 at 06:20
  • @phuclv: it will, because division result doesn't need rounding. – geza Aug 17 '18 at 08:01
  • @geza no. You always need rounding, whether to zero, inf nearest or whatever. And the C++ standard mandates that the result is round to zero. But the arithmetic right shift will round toward negative inf which will make the results differ – phuclv Aug 17 '18 at 08:24
  • @phuclv: I mean, the division result have zero remainder. – geza Aug 17 '18 at 08:29
  • @geza I got it. Because you've done a left shift so the low 14 bits will be zero. But the [compilers don't realized that and you'll still get worse assembly output](https://godbolt.org/g/vwvX8F) if you use a division – phuclv Aug 17 '18 at 08:37
  • @phuclv: but clang recognizes :) – geza Aug 17 '18 at 08:52