
I understand that a char can be cast explicitly by me, or implicitly by the compiler.

In the first `for` loop below, the compiler converts the char to an int automatically through implicit type casting. Since a char is 2 bytes, it fits into the 4 bytes of an int.

What confuses me is how an integer can be assigned to a char variable without an explicit cast, given that an int is 4 bytes and no cast operator is used.

    // Compiler converts char to int type automatically by implicit type casting.
    for (int i = 'A'; i <= 'Z'; i++) {
        System.out.print(i + " ");
    }
    System.out.println();

    // An int literal is assigned to a char variable without an explicit cast.
    for (char c = 65; c <= 90; c++) {
        System.out.print(c + " ");
    }
    System.out.println();
  • Does this help you? https://stackoverflow.com/a/21317904/8089107 – sn42 May 29 '19 at 16:02
  • Because 65 and 90 are always in range for char. If you did something like `char c = 65; c <= 141451; ...` then you would have an infinite loop because `c` would keep overflowing. It only works with literals because it is convenient and because the compiler can check it. If you tried to do `int i = 65; char c = i;` this would not compile, since it is a narrowing conversion and the compiler cannot be sure that `i` will be in-range (see the sketch after these comments). – Michael May 29 '19 at 16:07
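A minimal sketch of the rules Michael describes; the class and variable names are placeholders, and the lines that fail to compile are kept as comments:

    public class CharNarrowingDemo {
        public static void main(String[] args) {
            char ok = 65;           // compiles: 65 is a constant expression in [0, 65535]
            // char big = 70000;    // does not compile: constant out of char range

            int i = 65;
            // char c = i;          // does not compile: i is not a constant expression
            char c = (char) i;      // fine with an explicit narrowing cast

            char max = 65535;       // largest char value
            max++;                  // wraps around to 0, which is why `c <= 141451` never ends
            System.out.println(ok + " " + c + " " + (int) max); // A A 0
        }
    }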

1 Answer


"It's not assigning an int, it's using UNICODE code for characters."
(eg. A=65 in UNICODE or even ASCII @RealSkeptic) It's just another way of initialization.

    char A = 65;
    char a = 'a';
    System.out.println(A);        // A
    System.out.println((int) A);  // 65
    System.out.println(a);        // a
    System.out.println((int) a);  // 97

Oracle's definition of char:

The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).
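A quick check of those bounds (a small sketch; `Character.MIN_VALUE` and `Character.MAX_VALUE` are the standard library constants for them):

    char min = '\u0000';
    char max = '\uffff';
    System.out.println((int) min);                  // 0
    System.out.println((int) max);                  // 65535
    System.out.println((int) Character.MIN_VALUE);  // 0
    System.out.println((int) Character.MAX_VALUE);  // 65535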

Please see the comments below as well.

– Traian GEICU
  • `65` is an `int` literal. `'a'` is a `char` literal. So yes, you are assigning an int - which happens to be the Unicode (NOT ASCII, though they are compatible in the first 128 codes) value for the char. – RealSkeptic May 29 '19 at 16:15
  • @RealSkeptic Yes, it's using the Unicode table, but standard ASCII is included, as you mention. So any value up to 65535 (0xFFFF) will be fine – Traian GEICU May 29 '19 at 16:22
  • Well, the point still stands that 65 is an int, not a char. – RealSkeptic May 29 '19 at 16:23
  • @RealSkeptic 65 is an int, but here a `char` can be initialized in two ways, and one of them is to supply the corresponding Unicode code point as an int – Traian GEICU May 29 '19 at 16:27
  • Again, your first sentence, "It's not assigning an int", is false. It is assigning an int. The reason that this is allowed is what the OP is asking. But it *is* an int. – RealSkeptic May 29 '19 at 16:28
  • @RealSkeptic Did I say it's assigning an int? No. I said char has two ways of initialization, and one is by an int value (the latter is translated automatically into a char according to the Unicode table ... this is how char was designed to work). That is not the same as saying the char is assigned an int! – Traian GEICU May 29 '19 at 16:31
  • To reduce confusion, consider `short a = 1234;`. Here, the `short` variable a (which holds 16 bits) is initialized using an `int` literal (and an `int` holds 32 bits). There is no Unicode concerned here. (Nor is there any Unicode when initializing a char, for that matter; even for char it's just an integral value, but if you use `short` instead of char you can see better what's happening as you're not thinking of characters.) – DodgyCodeException May 29 '19 at 16:41
  • @RealSkeptic `char` can contain only a `char`, but the designers allowing an int to be used instead is another story. That does not imply that an int is assigned (I understand it better as an internal translation) – Traian GEICU May 29 '19 at 16:41
  • Definition of "assign type X to type Y" is "An assignment statement `a = b` where the right hand value (`b`) is of type `X`, and the left hand (`a`) is of type `Y`". – RealSkeptic May 29 '19 at 16:44
  • @DodgyCodeException What does `short` have to do with `char`? Nothing. The Unicode table is used to translate an `int` to the corresponding `char`, so 65 will always be A – Traian GEICU May 29 '19 at 16:44
  • If you can explain why `short a = 1234;` is allowed, then you can also explain why `char a = 65;` is allowed. It's for the same reason. – DodgyCodeException May 29 '19 at 16:48
  • @RealSkeptic Basically, yes, I agree with the definition. But here, on assignment, you cannot put anything other than a char. Would you put an int there and say char is an int type? Obviously not. So internally the int is translated to a char based on the Unicode table. By design, char was intended to have two ways of initialization (either with a char or with an int) – Traian GEICU May 29 '19 at 16:48
  • "From design char were intended to have 2 ways of initialization (either with a char either with an int)" -- not really, there are actually more than 2 ways to initialize a char. For example, with a `short` literal: `char a = (short) 65;` – DodgyCodeException May 29 '19 at 16:51
  • @DodgyCodeException The short range is [-32768, 32767], and 1234 is obviously within it. I see no question there. – Traian GEICU May 29 '19 at 16:52
  • Java `char` is a 16 bit unsigned integer type. The usual Java integer conversion rules apply. If it can't be proven at compile time that the value on the right side of the assignment will fit in a 16 bit unsigned integer, you need an explicit cast, just like with any integer assignment (see the sketch after these comments). – hyde May 29 '19 at 16:52
  • Your comment "short range is [-32,768, 32767]. Obvious where is 1234. See no question." can be paraphrased as: "char range is [0, 65535]. Obvious where is 65. See no question." That is all there is to it - no need to mention Unicode. Unicode simply maps integral values to visual graphics (what we see and perceive as letters, symbols etc.) but when converting int to char, all that matters is the integral value and not the visual appearance. – DodgyCodeException May 29 '19 at 16:56
  • @RealSkeptic It's a way of saying it (two ways). `(short) 65` is in the range [0, 65535], so it's fine. But the char is not assigned a short type (it's an internal translation there). short is also included in int: `(short) 65 = 65 = (int) 65 = (byte) 65`, which is then translated into the char A – Traian GEICU May 29 '19 at 16:58
  • @TraianGEICU, the following is also legal: `char c = 63 + 5`. Internally *nothing* is translated. `char` is 16 bits, `int` is 32 bits, the rightmost 16 bits of the `int` are assigned to the char, no translations. You have a misconception that there is something special about `char` that doesn't exist in `byte` and `short`. This is incorrect. The only unicode translation happens when the `char` is printed. – RealSkeptic May 29 '19 at 17:08
  • @DodgyCodeException The values a `char` is allowed to be initialized with as numbers (short, byte, int, or part of them) are within [0, 65535]. With that I agree; it's obvious. But a char is used as a char, not as a number. When you write `char A = 'A'`, `println(A)` prints `A`, not `65`. Yet you can do `println(A + A)`, which gives `130` (this is how it was intended to work, but `char B = A + A` is not allowed, even though 65 + 65 = 130 is in range) – Traian GEICU May 29 '19 at 17:11
  • @RealSkeptic I'm not very sure. `char c = 65 + 2` is obviously allowed, since it's in range. `char c = 'A' + 2` is also allowed, and you can even do `c = 'A' + 'A'`. But with `char b = 65;`, `c = b + b` is not allowed. I'm not sure exactly how it works internally, but I do agree that on display it is translated via Unicode. My intention in the answer was just to point out that using either 65 or 'A' amounts to the same thing, and that the translation uses Unicode (or, partially, ASCII). Whether char internally works as a number or not is fine, but char is not a number type either, because on display it is Unicode (it's a char type, which is distinct) – Traian GEICU May 29 '19 at 17:23
  • @RealSkeptic @DodgyCodeException Thank you both for all the comments. I'll leave the answer as it is, because the comments may be useful (not necessarily mine). I have nothing more to add – Traian GEICU May 29 '19 at 17:25
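A small sketch of the rule hyde and RealSkeptic describe: Java's assignment conversion lets a *constant expression* of type int be narrowed to char implicitly when its value fits in [0, 65535]; anything non-constant needs a cast. The class name is a placeholder, and the non-compiling line is kept as a comment:

    public class CharConstantDemo {
        public static void main(String[] args) {
            char c1 = 63 + 5;           // compiles: constant expression, value 68 ('D')
            char c2 = 'A' + 2;          // compiles: constant expression, value 67 ('C')

            final char a = 'A';         // a is a constant variable
            char c3 = a + a;            // compiles: a + a is a constant expression (130)

            char b = 'A';               // b is not final, so b + b is an int at runtime
            // char c4 = b + b;         // does not compile: needs an explicit cast
            char c4 = (char) (b + b);   // fine with a cast

            System.out.println(c1 + " " + c2 + " " + (int) c3 + " " + (int) c4); // D C 130 130
        }
    }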