
Example:

int32 Temp;
Temp = (Temp & 0xFFFF);

How can I tell that 0xFFFF is signed, not unsigned? Usually we add a "u" suffix to the hexadecimal number (0xFFFFu) and then perform the operation. But what happens when we need a signed result?

Andrew
  • If no suffix is provided, then it is a `signed` number. Reference: https://en.cppreference.com/w/cpp/language/integer_literal – Zongru Zhan Feb 28 '22 at 06:18
  • @ZongruZhan No, that's wrong; hex constants have different type rules than decimal constants, see C17 6.4.4. – Lundin Feb 28 '22 at 11:31
  • You should clarify your question. What do you mean by needing a signed result? For example, if you start with a negative value, do you expect a sign-extended value after the masking? (In this case, a value in the range -32768..+32767.) Or something else? – user694733 Feb 28 '22 at 13:24
  • "How can I tell that 0xFFFF is signed not unsigned. " --> use `_Generic(0xFFFF) ...`. – chux - Reinstate Monica Feb 28 '22 at 16:29
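A minimal C11 sketch of what the `_Generic` check from the last comment could look like (the TYPE_NAME macro is only an illustration, not something from the comments):

#include <stdio.h>

/* Illustration only: report the type the compiler gives a constant (C11 _Generic) */
#define TYPE_NAME(x) _Generic((x),            \
    int: "int",                               \
    unsigned int: "unsigned int",             \
    long: "long",                             \
    unsigned long: "unsigned long",           \
    long long: "long long",                   \
    unsigned long long: "unsigned long long", \
    default: "other")

int main(void)
{
    puts(TYPE_NAME(0xFFFF));   /* "int" with 32-bit int, "unsigned int" with 16-bit int */
    puts(TYPE_NAME(0xFFFFu));  /* always an unsigned type */
    return 0;
}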

2 Answers


How can I tell that 0xFFFF is signed, not unsigned?

You need to know the size of an int on the given system:

  • In case it is 16 bits, then 0xFFFF is of type unsigned int.
  • In case it is 32 bits, then 0xFFFF is of type (signed) int.

See the table in C17 6.4.4.1 §5 for details. As you can tell, this is neither portable nor reliable, which is why we should always use a u suffix on hex constants. (See Why is 0 < -0x80000000? for an example of a subtle bug caused by this.)
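For illustration, a quick sketch of that pitfall (assumes 32-bit int; not code from the linked question):

#include <stdio.h>

int main(void)
{
    /* With 32-bit int, 0x80000000 does not fit in int, so the hex constant
       becomes unsigned int. Negating an unsigned value wraps around, so the
       comparison is (surprisingly) true. */
    printf("0 < -0x80000000 is %d\n", 0 < -0x80000000);   /* prints 1 */

    /* The equivalent decimal constant is signed (C99), so this behaves
       as expected and prints 0. */
    printf("0 < -2147483648 is %d\n", 0 < -2147483648);
    return 0;
}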


In the rare event that you actually need signed numbers when doing bitwise operations, use explicit casts. For example, MISRA-C-compliant code for masking out part of a signed integer would be:

int32_t Temp; 
Temp = (int32_t) ((uint32_t)Temp & 0xFFFFu);

The u suffix makes 0xFFFFu "essentially unsigned". We aren't allowed to mix essentially signed and unsigned operands where implicit promotions might occur, hence the cast of Temp to an unsigned type. When everything is done, we have to cast back to the signed type explicitly, because an implicit conversion from unsigned to signed during assignment isn't allowed either.
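For a complete usage sketch of that pattern (the mask_low16 helper name and the test value are arbitrary, purely for illustration):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Keep only the low 16 bits of a signed 32-bit value, using the
   cast-to-unsigned / mask / cast-back pattern shown above. */
static int32_t mask_low16(int32_t value)
{
    return (int32_t)((uint32_t)value & 0xFFFFu);
}

int main(void)
{
    int32_t Temp = 0x12345678;
    Temp = mask_low16(Temp);
    printf("0x%" PRIX32 "\n", (uint32_t)Temp);  /* prints 0x5678 */
    return 0;
}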

Lundin

How to express hexadecimal as signed and perform the operation?

When int is wider than 16 bits, 0xFFFF is signed and no changes are needed.

int32 Temp;
Temp = (Temp & 0xFFFF);

To handle an arbitrary int bit width, use a cast so the constant stays signed and the signed/unsigned mixing diagnostic is quieted.

Temp = Temp & (int32)0xFFFF;

Alternatively, use a decimal constant, since decimal constants are always signed as of C99.

Temp = Temp & 65535; // 0xFFFF

This alternative goes against the goal of "express hexadecimal as signed", but good code avoids naked magic numbers anyway, so whether the constant is written in hex matters less when the mask's meaning is carried by its name.

#define IMASK16 65535
...
Temp = Temp & IMASK16;
chux - Reinstate Monica
  • 0xFFFF / 65535 aren't MISRA compatible, the `u` suffix is required. – Lundin Feb 28 '22 at 15:55
  • @Lundin OP was asking about MISRA 10.1. Are you suggesting a `u` is required for that or for some other MISRA rule (and which one)? – chux - Reinstate Monica Feb 28 '22 at 16:00
  • It's for another rule 7.2 but I'm assuming the OP wants the code to be MISRA compliant no matter which rule they happen to break. – Lundin Feb 28 '22 at 16:03
  • @Lundin C89 has "unsuffixed decimal: `int`, `long int`, `unsigned long int`" in 6.1.3.2. C99 uses `int`, `long`, `long long`. So in C89, a decimal constant may be `unsigned long` - not the same as C99. In OP's case here, that does not apply to `0xFFFF`. – chux - Reinstate Monica Feb 28 '22 at 16:14
  • Ah I see, you are correct. – Lundin Mar 01 '22 at 07:05