If I declare this:

int i = 0 + 'A';

is 'A' considered char or int?

Some people might use:

int i = 0 + (int)'A';

but is this really necessary?
In C, character constants such as 'A' are of type int. In C++, they're of type char.

In C, the type of a character constant rarely matters. It's guaranteed to be int, but if the language were changed to make it char, most existing code would continue to work properly. (Code that explicitly refers to sizeof 'A' would change behavior, but there's not much point in writing that unless you're trying to distinguish between C and C++, and there are better and more reliable ways to do that. There are cases involving macros where sizeof 'A' might be sensible; I won't get into details here.)
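Here is a minimal sketch (illustrative only; the same source compiles as either C or C++) that makes the difference observable:

#include <stdio.h>

int main(void)
{
    /* Compiled as C, sizeof 'A' equals sizeof (int) (often 4);
       compiled as C++, it equals sizeof (char), i.e. 1. */
    printf("sizeof 'A'    = %zu\n", sizeof 'A');
    printf("sizeof (int)  = %zu\n", sizeof (int));
    printf("sizeof (char) = %zu\n", sizeof (char));
    return 0;
}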
In your code sample:

int i = 0 + 'A';

0 is of type int, and the two operands of + are promoted, if necessary, to a common type, so the behavior is exactly the same either way. Even this:

char A = 'A';
int i = 0 + A;

does the same thing, with A (which is of type char) being promoted to int. Expressions of type char are usually, but not always, implicitly promoted to int.
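If you want to verify these types on a C11 compiler, _Generic selects a branch based on the (unpromoted) type of its controlling expression, so it can serve as a rough type probe. This is an illustrative sketch, not part of the original answer:

#include <stdio.h>

/* Illustrative: _Generic picks a branch by the unpromoted type of its operand. */
#define TYPE_NAME(x) _Generic((x), char: "char", int: "int", default: "other")

int main(void)
{
    char A = 'A';
    printf("'A'   : %s\n", TYPE_NAME('A'));    /* "int" in C */
    printf("A     : %s\n", TYPE_NAME(A));      /* "char" */
    printf("A + 0 : %s\n", TYPE_NAME(A + 0));  /* "int", after promotion */
    return 0;
}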
In C++, character constants are of type char -- but the same promotion rules apply. When Stroustrup designed C++, he changed the type of character constants for consistency (it's admittedly a bit surprising that 'A' is of type int), and to enable more consistent overloading (which C doesn't support). For example, if C++ character constants were of type int, then this:

std::cout << 'A';

would print 65, the ASCII value of 'A' (unless the system uses EBCDIC); it makes more sense for it to print A.
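A short C++ sketch (illustrative only) showing which overload gets picked:

#include <iostream>

int main()
{
    std::cout << 'A' << '\n';                    // char overload: prints A
    std::cout << static_cast<int>('A') << '\n';  // int overload: prints 65 on ASCII systems
    return 0;
}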
int i = 0 + (int)'A';

The cast is unnecessary in both C and C++. In C, 'A' is already of type int, so the conversion has no effect. In C++, it's of type char, but without the cast it would be implicitly converted to int anyway.
In both C and C++, casts should be viewed with suspicion. Both languages provide implicit conversions in many contexts, and those conversions usually do the right thing. An explicit cast either overrides the implicit conversion or creates a conversion that would not otherwise take place. In many (but by no means all) cases, a cast indicates a problem that's better solved either by using a language-provided implicit conversion, or by changing a declaration so the thing being converted is of the right type in the first place.
(As Pascal Cuoq reminds me in comments, if plain char is unsigned and as wide as int, then an expression of type char will be promoted to unsigned int, not to int. This can happen only if CHAR_BIT >= 16, i.e., if the implementation has 16-bit or bigger bytes, and if sizeof (int) == 1, and if plain char is unsigned. I'm not sure that any such implementations actually exist, though I understand that C compilers for some DSPs do have CHAR_BIT > 8.)
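You can check how close a given implementation is to that corner case by printing the relevant constants (illustrative sketch):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The exotic case above requires CHAR_BIT >= 16, sizeof (int) == 1,
       and CHAR_MIN == 0 (plain char unsigned); typical desktop systems
       print 8, 4, and a negative CHAR_MIN. */
    printf("CHAR_BIT     = %d\n", CHAR_BIT);
    printf("sizeof (int) = %zu\n", sizeof (int));
    printf("CHAR_MIN     = %d\n", CHAR_MIN);
    return 0;
}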
In C, the type of 'A' is int (not char). I think some people write int i = 0 + (int)'A'; with C++ in mind (or to make code usable in both C and C++).
According to the ISO C99 standard, the type of a character constant in C is int.

However, character constants like 'c' have values that fit in a char, so you can assign a character constant to a char variable without loss of information.

char c = 'c'; /* 'c' is int, but (c == 'c') is true */
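To see both facts at once (the constant's type is int, yet its value round-trips through a char), here is a brief illustrative sketch:

#include <stdio.h>

int main(void)
{
    char c = 'c';  /* the value of 'c' fits in a char, so nothing is lost */
    printf("c == 'c'   : %d\n", c == 'c');     /* 1: c is promoted to int for the comparison */
    printf("sizeof 'c' : %zu\n", sizeof 'c');  /* sizeof (int) when compiled as C */
    printf("sizeof c   : %zu\n", sizeof c);    /* always 1 */
    return 0;
}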