
I was going through some C++ Q&A exercises and one question drove me crazy!

#include <iostream>
using namespace std;

typedef struct {
    unsigned int i : 1;
} myStruct;

int main()
{
    myStruct s;
    s.i = 1;
    s.i++;
    cout << s.i;
    return 0;
}

The question asked: what is the output: 0 / 1 / 2 / 3 / -1 / Seg Error?

I picked 2, which is a wrong answer :D, so why does the program show 0?

WaleedYaser
Blood-HaZaRd
  • It's a 1-bit int. If you increment 1 it goes back to 0. – Paul Rooney Jun 12 '18 at 23:40
  • `unsigned int i : 1;` is a bitfield with a length of 1 bit. Valid values are 0 and 1. If you set it to 1 and then increment it, what do you get? – user4581301 Jun 12 '18 at 23:41
  • Show caution with questions that offer an answer like "Seg Error". If you always got these where you thought you'd get one, debugging would be a whole hell of a lot easier. – user4581301 Jun 12 '18 at 23:44
  • @user4581301: so the fact that I wrote `unsigned int i : 1;` makes it 1 bit even if sizeof(unsigned int) = 4? – Blood-HaZaRd Jun 12 '18 at 23:45
  • Yep. Usually what happens is you have multiple bits packed into the 32 bits of your unsigned int. E.g. `struct myStruct { unsigned int i : 1; unsigned int j : 3; };` would give you two members of the structure in the same 32 bits. One takes 1 bit and the other takes 3. Aside: you don't need the `typedef` in C++. – user4581301 Jun 12 '18 at 23:48
  • @user4581301: aha, nice, it sounds like the segmentation mechanism. So could that be dangerous if we work on 64 bits, say with a more complex struct? – Blood-HaZaRd Jun 12 '18 at 23:50
  • Wait a second. I may have misinterpreted you. If `sizeof(unsigned int)` is 4 then `sizeof(myStruct)` will also be 4; the other 31 bits are unused. If you port to an environment where `int` is 64 bits, then you will have 63 bits going unused. Those 32 or 64 bits can be subdivided any way you like. – user4581301 Jun 12 '18 at 23:54

1 Answer


You need to familiarize yourself with bitfields.

On most platforms, `int` has a size of 32 bits (4 bytes). But using the `: 1` notation, you specify that the member uses only 1 bit, so it can hold only the values 0 and 1.

So when you increment the value from 1, it wraps around (modulo 2) back to zero. For unsigned bitfields this wrap-around is well-defined behavior.