I need to convert between signed integers and their internal representation as a series of bytes. In C I was using functions like:
unsigned char hibyte(unsigned short i)
{return i>>8;}
unsigned char lobyte(unsigned short i)
{return i & 0xFF;}
unsigned short makeshort(unsigned char hb, unsigned char lb)
{return ((short)hb << 8) | (short)lb;}
The problem is that this code does not work in C# because the rules for signed/unsigned casts are not the same: as I understand it, a C# cast converts the value, whereas in C a cast between signed and unsigned types of the same width leaves the underlying bit pattern unchanged. Moreover, in C# the >> operator on signed numbers shifts in the sign bit. All this makes it difficult to port my code to C#, e.g.:
1) the C# function
public static byte hibyte(short i)
{return (byte) (i>>8);}
throws an OverflowException if i is negative.
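For illustration, here is a minimal repro of case 1; the class name is just for the demo, and I use an explicit checked block so the failure shows up regardless of the project's overflow-checking setting:

using System;

class HiByteRepro
{
    static void Main()
    {
        short i = -2;                       // bit pattern 0xFFFE

        // i is promoted to int and >> is an arithmetic shift, so i >> 8 is -1;
        // converting -1 to byte is out of range, hence the exception.
        byte hb = checked((byte)(i >> 8));  // System.OverflowException

        Console.WriteLine(hb);
    }
}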
2) the C# function
public static short makeshort(byte hb, byte lb)
{return (short) (((ushort)hb << 8) | (ushort)lb); }
throws an OverflowException if the resulting short is negative. Here the expression "(ushort)hb << 8" works because the shift is done on an unsigned number, but then I need to interpret the same data as a signed integer and I don't know how to do that. I understand that for C# such a C-like cast is cheating, because a positive value may become a negative one, but that is exactly what I need (e.g. for processing a byte stream read from a device, etc.).

For the moment I'm using the C code compiled as an unmanaged DLL for all binary manipulations like this, but that is not very elegant, and I'm sure this can be done somehow (possibly quite simply) in C#. Any suggestions are welcome!
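For completeness, here is a similar minimal repro of case 2 (again with an explicit checked block, and the class name is just for the demo; 0xFFFE is the bit pattern of the short value -2 that I actually want):

using System;

class MakeShortRepro
{
    static void Main()
    {
        byte hb = 0xFF, lb = 0xFE;

        // The shift itself is fine: the operands are promoted to int,
        // so this yields 65534 (0xFFFE) without any overflow.
        int combined = ((ushort)hb << 8) | (ushort)lb;
        Console.WriteLine(combined);            // 65534

        // Converting 65534 to short is out of range, so this throws,
        // even though the bit pattern 0xFFFE is exactly the short I want (-2).
        short s = checked((short)combined);     // System.OverflowException
        Console.WriteLine(s);
    }
}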