I am working on some code in C++.

The char type is one byte on my machine and the int type is four bytes.

Since my input values are under 100, I used the char type instead of int to make my program more memory-efficient, as shown in the code below.

    #include <iostream>
    using namespace std;

    int do_main(int argc, const char *argv[]) {
        int TC;
        cin >> TC;

        while (TC--) {
            unsigned char num;
            char code[7];                             // holds the binary digits
            for (int i = 0; i < 7; ++i) code[i] = 0;
            cin >> num;

            // Fill code[] with the binary representation of num,
            // least significant bit stored at the right end.
            int idx = 0;
            while (1) {
                code[7 - 1 - idx++] = num % 2;
                num /= 2;
                if (num < 2) break;
            }
            code[7 - 1 - idx] = num;

            for (int i = 0; i < 7; ++i) cout << code[i] << endl;
        }
        return 0;
    }

As you can see in the code above, the decimal value is converted into its binary representation. However, my problem is that the output was not what I expected: it showed some strange characters, not ones and zeros.

I thought it was due to the type, so I changed char to int and ran the code again. I confirmed that code[i] then showed the correct values I expected.

To summarize: what is wrong with using char instead of int here?

I know that char is intended for storing characters, but for small integer values (that fit in one byte), we can use char instead of int.
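Here is a stripped-down illustration of the symptom, separate from the conversion logic above (the value 5 is just an arbitrary small number for the example):

    #include <iostream>

    int main() {
        char c = 5;  // small integer stored in a char
        int  n = 5;  // the same value stored in an int

        std::cout << c << std::endl;  // prints the character with code 5, which does not show up as "5"
        std::cout << n << std::endl;  // prints 5, as I expect
        return 0;
    }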

sclee1

1 Answer


It is because std::cout's operator<< is overloaded for char so that it prints the character whose ASCII code is the stored value, not the number itself; a char holding 1 is printed as the non-printable character with code 1, not as the digit 1. Instead of changing the type from char to int, you can do this:

    for (int i = 0; i < 7; ++i)
        cout << static_cast<int>(code[i]) << endl;

See this.
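For completeness, a minimal self-contained sketch of the fix (the bit values in the array are just made-up sample data):

    #include <iostream>

    int main() {
        // Sample bits standing in for the question's code[] array.
        char code[7] = {1, 0, 1, 1, 0, 0, 1};

        // Casting to int makes operator<< print the numeric value
        // instead of the (non-printable) character with that code.
        for (int i = 0; i < 7; ++i)
            std::cout << static_cast<int>(code[i]) << std::endl;

        // Equivalent alternative: unary + promotes the char to int before printing.
        for (int i = 0; i < 7; ++i)
            std::cout << +code[i] << std::endl;

        return 0;
    }

Both loops print the digits 1 0 1 1 0 0 1, one per line.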

0x0001