
I have the number -16777216 stored in an Int32; its hex representation is FF000000. On shifting this number right by 24 I get 0xFFFFFFFF, but on masking it with 0xFF000000 and then shifting right by 24 I get 0x000000FF.

Int32 a = -16777216;                        // 0xFF000000 in hex
Int32 b = a >> 24;                          // becomes 0xFFFFFFFF (-1)
Int32 c = (Int32)((a & 0xFF000000) >> 24);  // becomes 0x000000FF (255)

Why are b and c different numbers?

  • What is your question? – Arthur Attout Aug 03 '19 at 19:48
  • Your question is not clear, but keep in mind that a one-bit shift to the left is like multiplying by two, so if you shift 0xFF00 0000 left by one bit and try to store it in four bytes, it will overflow – Majid khalili Aug 03 '19 at 19:54
  • Please [edit] your question to include the full source code you have as a [mcve]. Explain in detail what the problem is and what result you are expecting (and what you get instead). – Progman Aug 03 '19 at 20:00
  • It is called sign extension. You are doing it wrong. For example -2 (two's complement) in 8 bit is 0xFE. In 16 bit it is 0xFFFE. In 24 bit it is 0xFFFFFE. In 32 bit it is 0xFFFFFFFE. In 64 bit it is 0xFFFFFFFFFFFFFFFE. So to sign extend a 24 bit number to 32 bit you test the MSB like this: results = ((number & 0x800000) > 0) ? (number | 0xFF000000) : number; – jdweng Aug 03 '19 at 20:43

2 Answers


When you shift a negative number to the right, a 1 bit is shifted in at the top instead of a 0 (this is sign extension). So when you shift the value 0xFF000000 to the right you get 0xFFFFFFFF (which is -1, by the way). This ensures that a negative number stays negative and does not suddenly become positive because of the shift operation.
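
For illustration, a small C# sketch of this behaviour (the uint cast is only there to contrast with a logical shift, which pads with zeros):

int a = -16777216;                      // bit pattern 0xFF000000
int b = a >> 24;                        // arithmetic shift: the sign bit is replicated
Console.WriteLine(b);                   // -1
Console.WriteLine(b.ToString("X8"));    // FFFFFFFF
Console.WriteLine((uint)a >> 24);       // 255 -- same bits read as uint, logical shift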

However, this only applies to the Int32 value you have written there. With the code

(a & 0xFF000000)

you get a result of type Int64 or long, not int (or Int32). So instead of having 0xFF000000 you actually have 0x00000000FF000000, a positive number. If you shift it to the right you get 0x00000000000000FF, which is the positive number 255.

The value 0xFF000000 is a UInt32 value. Combining it with an Int32 value with the & operator results in an Int64 value.

int a = 4;
uint b = 15;
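// int & uint: both operands are promoted to long, so the result is an Int64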
object c = a & b;
Console.WriteLine($"{a} - {a.GetType()}");
Console.WriteLine($"{b} - {b.GetType()}");
Console.WriteLine($"{c} - {c.GetType()}");

This results in the following output:

4 - System.Int32
15 - System.UInt32
4 - System.Int64
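
If the goal is to end up with 255 in an Int32, a couple of variants (a small sketch based on the values from the question):

int a = -16777216;                        // 0xFF000000
int c1 = (int)((a & 0xFF000000) >> 24);   // mask (long result), shift, cast back: 255
int c2 = (a >> 24) & 0xFF;                // shift first, then mask away the sign bits: 255
int c3 = (int)((uint)a >> 24);            // reinterpret as uint, logical shift: 255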
Progman
  • How is FF000000 both an Int32 and UInt32 value? – HIMANSHU GARG Aug 03 '19 at 20:38
  • @HIMANSHUGARG What the hex representation means depends on the context and where you use it, but the actual integer literal `0xFF000000` you have in your code is a `UInt32` value. – Progman Aug 03 '19 at 20:42
  • @Progman Could you please elaborate on that? The thing I fail to understand is how the value of a (-16777216) is FF000000 and the value 4278190080 is also FF000000 in Int32? – HIMANSHU GARG Aug 04 '19 at 03:59
  • @HIMANSHUGARG When you write the literal `0xFF000000` in your source code (as you did in your question), it is a `UInt32` value, because the value it represents (4278190080) cannot fit inside an `Int32`. This behaviour is defined in the C# specification, section 7.4.5.3 "Integer literals". When you want an `Int32` value from that exact hex string you have to use other methods, as mentioned in https://stackoverflow.com/questions/1139957/convert-integer-to-hexadecimal-and-back-again. – Progman Aug 04 '19 at 08:13

No. This is actually not a computer question - the number systems themselves do not allow this:

  • Every number can be represented
  • But every number can only be represented one way

However, the same number can have many possible string representations.

In your case you seem to be mixing up 0xFF000000 and 0x000000FF. Or are you just wondering why the right shift pads with 1s? It is hard to tell.
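
Conversely, the same string of hex digits can stand for two different numbers depending on whether the bits are read as signed or unsigned; a small sketch:

int signedValue = -16777216;
uint unsignedValue = unchecked((uint)signedValue);   // same 32 bits, read as unsigned
Console.WriteLine(signedValue.ToString("X8"));       // FF000000
Console.WriteLine(unsignedValue.ToString("X8"));     // FF000000
Console.WriteLine(unsignedValue);                    // 4278190080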

Christopher
  • @Christopher The number has everything unchanged after masking with 0xFF000000, except that it is no longer a signed integer. How? – HIMANSHU GARG Aug 03 '19 at 20:00
  • @HIMANSHUGARG ??? You gave us a hexadecimal representation of the bytes. That does not tell us **anything** about whether it is interpreted as a signed or unsigned integer. All I can tell is: if it is a signed integer, you swapped the sign. – Christopher Aug 03 '19 at 20:05