15

Is the Java char type guaranteed to be stored in any particular encoding?

Edit: I phrased this question incorrectly. What I meant to ask is: are char literals guaranteed to use any particular encoding?

pepsi

  • The short answer to your question is: **No, it is not guaranteed**. – Aug 11 '11 at 00:27
    Yes, it is. The internal representation is quite well defined. – Ernest Friedman-Hill Aug 11 '11 at 00:28
    @Ernest - no it is not. Many of the standard Java library classes are designed to work on the assumption that a `char` contains a Unicode code unit, but the application can basically put any 16 bit unsigned integer value into a `char`. The value is not required to be encoded in any particular way. It does not even need to represent a complete (or partial) "character". – Stephen C Aug 11 '11 at 01:17
  • What about char literals? For example, 'c' must have some value that is defined by the language. – pepsi Aug 11 '11 at 01:20

3 Answers

22

"Stored" where? All Strings in Java are represented in UTF-16. When written to a file, sent across a network, or whatever else, it's sent using whatever character encoding you specify.

Edit: Specifically for the char type, see the Character docs, which say: "The char data type ... are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities." Therefore, casting a char to int will always give you its UTF-16 code unit value, provided the char actually contains a character from that encoding. If you just poked some random value into the char, it obviously won't necessarily be a valid UTF-16 code unit, and likewise if you read the character in using the wrong encoding. The docs go on to discuss how supplementary characters can only be represented by an int, since a char doesn't have enough room to hold them; if you're operating at this level, it's important to get familiar with those semantics.
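To make that concrete, here's a minimal sketch (the class name is just for illustration; Character.toChars and Character.isSurrogatePair have been in the standard library since Java 5):

```java
public class CharAsCodeUnit {
    public static void main(String[] args) {
        // Casting a char to int exposes its 16-bit UTF-16 code unit value.
        System.out.println((int) 'A');              // 65, i.e. U+0041

        // Supplementary characters (above U+FFFF) don't fit in a single char;
        // they are handled as an int code point, which expands to two chars.
        char[] pair = Character.toChars(0x1F600);   // U+1F600, a supplementary code point
        System.out.println(pair.length);            // 2
        System.out.println(Character.isSurrogatePair(pair[0], pair[1])); // true
    }
}
```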

Ryan Stewart

8

A Java char is conventionally used to hold a Unicode code unit; i.e. a 16 bit unit that is part of a valid UTF-16 sequence.

(A char can also represent a Unicode code point. However, since Unicode 2.0, not all code points will fit into a single Java char; see below.)

However, there is nothing to prevent an application from putting any 16 bit unsigned value into a char, irrespective of what it actually means. So, while a Unicode code unit (and some code points) can be represented by a char, and a char can represent a Unicode code unit, neither of these is always the case.
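A tiny sketch of that point; none of this is rejected by the compiler or checked at runtime (Character.isSurrogate needs Java 7 or later):

```java
public class ArbitraryCharValues {
    public static void main(String[] args) {
        // Any unsigned 16-bit value can go into a char; Java does not check
        // that it forms a meaningful character or a valid UTF-16 sequence.
        char lone = '\uD800';           // an unpaired high surrogate
        char noncharacter = '\uFFFF';   // a Unicode noncharacter

        System.out.println((int) lone);                    // 55296 (0xD800)
        System.out.println((int) noncharacter);            // 65535 (0xFFFF)
        System.out.println(Character.isSurrogate(lone));   // true
    }
}
```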

Your (original) question about how a Java char is stored cannot be answered. Simply put, it depends on what you mean by "stored":

  • If you mean "represented in an executing program", then the answer is JVM implementation specific. (The char data type is typically represented as a 16 bit machine integer, though it may or may not be machine word aligned, depending on the specific context.)

  • If you mean "stored in a file" or something like that, then the answer is entirely dependent on how the application chooses to store it.


Is the Java char type guaranteed to be stored in any particular encoding?

In the light of what I said above the answer is "No". In an executing application, it is up to the application to decide what a char means / contains. When a char is stored to a file, the application decides how it wants to store it and what on-disk representation it will use.
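For instance, here is a small sketch of how the on-disk bytes depend entirely on the encoding the application picks (StandardCharsets needs Java 7 or later; the comments show the signed byte values Java prints):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class OnDiskRepresentation {
    public static void main(String[] args) {
        // One character, U+00E9 ('é'), three different byte representations.
        String s = "\u00E9";
        System.out.println(Arrays.toString(s.getBytes(StandardCharsets.UTF_8)));      // [-61, -87] -> 0xC3 0xA9
        System.out.println(Arrays.toString(s.getBytes(StandardCharsets.ISO_8859_1))); // [-23]      -> 0xE9
        System.out.println(Arrays.toString(s.getBytes(StandardCharsets.UTF_16BE)));   // [0, -23]   -> 0x00 0xE9
    }
}
```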


FOLLOWUP

What about char literals? For example, 'c' must have some value that is defined by the language.

Java source code is required (by the language spec) to be Unicode text, represented in some character encoding that the tool chain understands; see the javac -encoding option. In theory, a character encoding could map the c in 'c' in your source code to something unexpected.

In practice though, the c will map to the Unicode lower-case C code-point (U+0063) and will be represented as the 16-bit unsigned value 0x0063.
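A trivial check of that (with the same caveat about the compiler reading the source in the encoding you told it to):

```java
public class CharLiteralValue {
    public static void main(String[] args) {
        // The literal 'c' carries the UTF-16 code unit value for U+0063.
        System.out.println((int) 'c');                  // 99
        System.out.println(Integer.toHexString('c'));   // 63
        System.out.println('c' == '\u0063');            // true
    }
}
```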

To the extent that char literals have a meaning ascribed by the Java language, they represent (and are represented as) UTF-16 code units or Unicode code points (or both).

The values of a char all (technically) correspond to Unicode code points, but not all Unicode code points in the range U+0000 to U+FFFF correspond to what you would think of as meaningful characters:

  • Some are simply unassigned or permanently reserved as noncharacters; e.g. U+FFFE and U+FFFF.
  • Others are reserved for private use or future use.
  • Code points in the range U+D800 to U+DFFF are UTF-16 surrogates. These represent either the top or bottom half of the UTF-16 encoding of a Unicode higher plane code point; i.e. from U+10000 to U+10FFFF.

Conversely, the Unicode higher plane code points (U+10000 to U+10FFFF) cannot be represented as a single Java char, and therefore cannot be denoted by a single Java char literal. Java int is typically used to represent one of these code points, and this is why the String::codePointAt method returns an int.
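For example, a short sketch with one higher plane code point, U+1D11E (MUSICAL SYMBOL G CLEF), written here as its surrogate pair:

```java
public class SupplementaryCodePoints {
    public static void main(String[] args) {
        // U+1D11E does not fit in a single char, so in a String it occupies
        // two char code units (a surrogate pair).
        String clef = "\uD834\uDD1E";

        System.out.println(clef.length());                            // 2 (code units)
        System.out.println(clef.codePointCount(0, clef.length()));    // 1 (code point)
        System.out.println(Integer.toHexString(clef.codePointAt(0))); // 1d11e
    }
}
```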


Why didn't they define char to be capable of holding all Unicode code points (loosely "characters")?

History. Java was designed at a time when every Unicode code point fit into an unsigned 16-bit integer. Then Unicode 2.0 broke that assumption, and redefining the Java char type was not an option.

Stephen C

5

Originally, Java used UCS-2 internally; now it uses UTF-16. The two are virtually identical, except for the range U+D800 to U+DFFF, which UTF-16 uses as surrogates in the extended representation of larger characters (code points above U+FFFF).
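A small sketch of that surrogate mechanism, using U+1F600 purely as an example:

```java
public class SurrogateRange {
    public static void main(String[] args) {
        // In UTF-16, values in U+D800 to U+DFFF only make sense as halves of a
        // pair that together encode a code point above U+FFFF.
        char high = '\uD83D';
        char low  = '\uDE00';

        System.out.println(Character.isHighSurrogate(high));               // true
        System.out.println(Character.isLowSurrogate(low));                 // true
        System.out.println(Character.toCodePoint(high, low) == 0x1F600);   // true
    }
}
```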

Ernest Friedman-Hill