
In the following program, I am assigning an integer value to a variable of type char.

    public static void main(String args[]) {
        char ch = 65;
        System.out.println(ch);
    }

I know that an int occupies 32 bits and a char occupies 16 bits. With that knowledge, I was expecting the compiler to throw an error with some message like "Attempt to convert data of a higher size to a lower size".

Why is the compiler not complaining, and instead internally converting and printing the output as 'A'? (I understand that 'A' is the ASCII equivalent of 65; my question is only about the size of the data types.)

user1400915

3 Answers


There is an exception to Java's general rule about converting an int to a char. If the int is a compile-time constant expression (e.g. a literal) AND the int value of the expression is within the required range (0 to 65535), then it is legal to assign the int expression to a char.

Intuitively, for a compile-time constant expression, the compiler knows if the expression value can be assigned without loss of information.

This is covered by JLS 5.2 ... in the paragraph that starts "In addition, if the expression is a constant expression ..."
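A minimal sketch of that rule in action (class and variable names are mine); the commented-out lines are the ones the compiler rejects:

    public class CharNarrowing {
        public static void main(String[] args) {
            char ok = 65;                 // compiles: constant expression in range 0..65535
            char max = 65535;             // compiles: the top of the char range
            final int sixtySix = 66;      // a constant variable
            char fromConstant = sixtySix; // compiles: still a constant expression
            // char bad = 65536;          // rejected: constant, but out of range
            // int i = 65; char c = i;    // rejected: i is not a constant expression
            System.out.println(ok + " " + fromConstant); // prints "A B"
        }
    }
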

Stephen C

The compiler does in fact validate the range. Your assignment works because the int literal 65 is within the expected range.

The following won't compile:

    char c = (int) Character.MAX_VALUE + 1; // 65536: out of the char range
    char c = 65536;

And this will, just like your assignment:

    char c = 65535; // within range

When the value is not a constant at compile time, though, a cast is needed:

private static void charRange(int i) {
    char c = (char) i;
    System.out.println(" --> " + (int) c);
}

    charRange(65);
    charRange(Character.MAX_VALUE + 20);

And the range check doesn't happen, leaving room for overflow:

    --> 65
    --> 19

ernest_k
  • After typecasting, if I call charRange(65536), how can char c = (char) i store this value? Will it be converted to a number within the range of char, say 65535? – user1400915 Jun 13 '18 at 05:57
  • 1
    @user1400915 It will overflow and wrap around. Look at the second example call `charRange(Character.MAX_VALUE + 20);`, which resulted in the unexpected value `19`. – ernest_k Jun 13 '18 at 06:00
  • Why is the value always 19? Does it follow a particular pattern, or is it just a random number? – user1400915 Jun 13 '18 at 06:04
  • @user1400915 It's the same behavior as when other number types overflow, although the `char` range is `0-65535`. You can find information on overflow/underflow on this post: https://stackoverflow.com/questions/3001836/how-does-java-handle-integer-underflows-and-overflows-and-how-would-you-check-fo – ernest_k Jun 13 '18 at 06:10
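To make the comment thread concrete: the wrapped value is not random. The cast keeps only the low 16 bits of the int, which is the same as taking the value modulo 65536 (a small sketch; the class name is mine):

    public class CharWrap {
        public static void main(String[] args) {
            int i = Character.MAX_VALUE + 20; // 65535 + 20 = 65555
            char c = (char) i;                // keeps only the low 16 bits
            System.out.println((int) c);      // prints 19
            System.out.println(i % 65536);    // prints 19: same value, computed arithmetically
            System.out.println(i & 0xFFFF);   // prints 19: same value, computed with a bit mask
        }
    }
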

Programming languages like Java or C# come with a set of integer primitive types. Each type has a well-defined range in the form [min value, max value]. These values are stored in a fixed sequence of bits, from the most significant bit to the least significant one.

For example, let the decimal number 123456 be represented by the following 32-bit sequence:

    00000000000000011110001001000000

When you attempt to convert a 32-bit number type to a 16-bit one, the compiler keeps only the 16 least significant bits (the last 16 bits), so the number 123456 is wrapped to

    1110001001000000

Converted back to decimal, this binary number is 57920. As you can see, the 32-bit number can't fit into a 16-bit sequence, so the original sequence was silently truncated. This is known as integer overflow, and it also happens when you add or multiply two numbers whose result is out of the bounds of the integer type's range.
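The truncation described above can be verified directly in Java; the cast to char is exactly the "keep the low 16 bits" operation, which can also be written as a bit mask (class name is mine):

    public class Truncate {
        public static void main(String[] args) {
            int original = 123456;
            System.out.println(Integer.toBinaryString(original)); // 11110001001000000 (17 significant bits)
            char truncated = (char) original;       // keep only the 16 least significant bits
            System.out.println((int) truncated);    // prints 57920
            System.out.println(original & 0xFFFF);  // prints 57920: the same mask, done by hand
        }
    }
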

As a programmer, you should be aware of overflow and guard against it to avoid program failures. You should also read further about signed integer representation.