-1
int x = 5;
char y = x + '0';

Apparently this bit of code converts an integer to a string. But I don't understand how it works behind the scenes. I'm a newbie. Could someone explain how adding '0' to an int converts it into a string?

kerbx4
  • 9
  • 3
  • Does this answer your question? [Why does adding a '0' to an int digit allow conversion to a char?](https://stackoverflow.com/questions/24422178/why-does-adding-a-0-to-an-int-digit-allow-conversion-to-a-char) – phuclv May 27 '21 at 07:18
  • duplicates: [What does + '0' mean in C](https://stackoverflow.com/q/54750403/995714), [Converting an integer to char by adding '0' - what is happening?](https://stackoverflow.com/q/30029294/995714) – phuclv May 27 '21 at 07:19

3 Answers

1

In

char y = x + '0';

the compiler takes the character code of '0' (in ASCII, or whatever character encoding your system implements) and adds 5 to it. The resulting integer, interpreted as a character again, is your output: '5'.

Behind the scenes this is what's happening:

y = 5 + 48  // the ASCII/UTF-8 character code of '0' is 48
y = 53

and ASCII/UTF-8 char for 53 is '5'.

It does not convert an integer to a string. It just gives you the equivalent character. You can convert an int in the range 0-127 to a char or vice versa; the result is the corresponding ASCII/UTF-8 value. A char is stored as a small integer in memory, typically 8 bits, though its exact size and signedness are implementation-defined.

Shubham
  • 1,153
  • 8
  • 20
0

You have a bit of a misunderstanding here. The variable y isn't a string, it's a char. A string is an array of chars. What you are actually doing here is:

char y = x + 48; // 48 is the ASCII encoding of '0'

This is somewhat like:

char y = 50;
printf("%c\n", y); //prints 2 as a char

This is because chars are encoded with specific values. E.g. 'a' is 97, 'b' is 98, and so on. In the same way, '0' has the code 48.

 char y = x + '0';

adds the encoded value of '0' (i.e. 48) to x. And because '0' is the first digit, the other digits' codes come right after it. This is what I mean:

48 -> This is 0
49 -> This is 1
50 -> This is 2
51 -> This is 3
52 -> This is 4
53... and so on

You may notice that in order to get e.g. '1', we need to add one to the code for '0': 48 + 1 = 49, and 49 is the encoding of '1'. This is true for all digits.

Note: I've used ASCII encoding for this explanation, but there are others, and it should work on most encodings, even weird ones.

Shambhav
  • 813
  • 7
  • 20
  • Re “it should work on most on weird encodings”: Adding the value of a digit (0-9) to `'0'` works in all C implementations that conform to the C standard, because the standard requires the character codes for the digits to be consecutive. – Eric Postpischil May 27 '21 at 09:55
-1

'0' is an int, not a string. String literals have double quotes (e.g. "Hi!"). Character literals (which are integers) have single quotes (e.g. 'A').

x + '0' evaluates to int since both the operands are int.

The integral value of a character literal is its ASCII value: https://en.wikipedia.org/wiki/ASCII#Character_set
From the ASCII table, the integer value of '0' is 48.

Now, coming to the expression in the question:
x is 5, '0' is 48.
Therefore y = 48+5 (i.e 53).

When you print y as a character using printf("%c", y), the character whose ASCII value is 53 is printed. i.e. '5'

The reason adding '0' to a single-digit integer seems to convert it into a character is that '0' to '9' are one after another in the ASCII table. This means '9' (57) is exactly 9 places after '0' (48).

This is why adding double digit integers to '0' will not work. E.g. '0' + 10 will be ':' (ASCII value 58).

To properly convert larger strings to integers, use the atoi function (https://man7.org/linux/man-pages/man3/atoi.3.html). e.g.

int a = atoi("1023"); // a is an integer with the value 1023
mew
  • 71
  • 1
  • 5
  • 1
`That means it can take values from -128 to 127` that's completely wrong. `char` [can be signed or unsigned](https://stackoverflow.com/q/2054939/995714) and can contain more than 8 bits of data. So `char` can contain values in the range [0, 2^CHAR_BIT - 1] if it's unsigned, and [-2^(CHAR_BIT-1), 2^(CHAR_BIT-1) - 1] if signed – phuclv May 27 '21 at 08:34
  • 1
    *`'0'` is a character* [Actually it's an integer](https://port70.net/~nsz/c/c11/n1570.html#6.4.4.4p10): "An integer character constant has type `int`. ..." – Andrew Henle May 27 '21 at 09:15
  • Thanks for the feedback @phuclv and @Andrew ! I've always assumed that `char` is signed by default and takes exactly 8 bits (Since `uint8_t` typedefs to `unsigned char`. Did not know that even _that_ is implementation specific). Also did not know that `'0'` is an `int`. (Why is that? Wouldn't it be more efficient to use `char`?) Anyways, I'm updating the answer to fix those issues. If it still does not provide value, please let me know. I will delete it. – mew May 28 '21 at 05:38
  • 1
@RehanVipin `uintN_t` are optional types and `uint8_t` only exists if `CHAR_BIT == 8`. Only `(u)int_fastN_t` and `(u)int_leastN_t` are required. I don't know why C decided that character literals should be int, but probably because of multicharacter literals like `'abcd'` which were commonly used for signatures like [FourCC](https://en.wikipedia.org/wiki/FourCC), and because of integer promotion. But in C++ `'0'` is `char`: [Why are C character literals ints instead of chars?](https://stackoverflow.com/q/433895/995714) – phuclv May 28 '21 at 07:23