
I need to do bitwise operations on 32-bit integers (that indeed represent chars, but whatever).

Is the following kind of code safe?

uint32_t input, output;
input = ...;

if (input & 0x03000000) {
    output  = 0x40000000;
    output |= (input & 0xFC000000) >> 2;
}

I mean, in the "if" statement, I am doing a bitwise AND between, on the left side, a uint32_t, and on the right side... I don't know what!

So do you know the type and size (by that I mean how many bytes it is stored on) of the hard-coded "0x03000000"?

Is it possible that some systems consider 0x03000000 as an int and hence store it on only 2 bytes, which would be catastrophic?

KrisWebDev
    Looks perfectly fine to me. – Some programmer dude Jun 29 '13 at 11:24
    The right side is promoted to the smallest integer type it fits in (or `int` if it would be shorter than `int`), so the comparison is safe. –  Jun 29 '13 at 11:26
  • 1. You can always cast it to what you need: `(uint32_t)0x03000000`. 2. You should make these `#define` for clarity and re-use. You can add the cast too: `#define MYBITS (uint32_t)0x03000000`. –  Jun 29 '13 at 14:18

1 Answer


Is the following kind of code safe?

Yes, it is.

So do you know the type and size (by that I mean how many bytes it is stored on) of the hard-coded "0x03000000"?

0x03000000 is an int on a system with 32-bit int and a long on a system with 16-bit int.

(As uint32_t is present here, I assume two's complement and a CHAR_BIT of 8. Also, I don't know of any system with 16-bit int and 64-bit long.)

Is it possible that some systems consider 0x03000000 as an int and hence code it only on 2 bytes, which would be catastrophic?

See above: on a 16-bit int system, 0x03000000 is a long and is 32 bits wide. A hexadecimal constant in C has the first type in this list in which its value can be represented: int, unsigned int, long, unsigned long, long long, unsigned long long.

ouah
  • OK, but what if I use 0x00000001 instead? On a 16-bit system it will be represented as a 16-bit int. Then I will do a bitwise operation between a 32-bit uint32_t and a 16-bit 0x00000001. Will that throw an error or is it OK? – KrisWebDev Jun 29 '13 at 11:36
    `0x00000001` is guaranteed to be an `int`. In the bitwise expression with an `uint32_t` the constant will be promoted to `uint32_t`, so no problem. – ouah Jun 29 '13 at 11:38
  • @ouah: I think on a platform where `int` is 64 bit wide, both sides would be converted to `int`. – Kerrek SB Jun 29 '13 at 11:55