55

I was looking into Java's internal representation of String, but I found two sources that look reliable yet are inconsistent with each other.

One is:

http://www.codeguru.com/cpp/misc/misc/multi-lingualsupport/article.php/c10451

and it says:

Java uses UTF-16 for the internal text representation and supports a non-standard modification of UTF-8 for string serialization.

The other is:

http://en.wikipedia.org/wiki/UTF-8#Modified_UTF-8

and it says:

Tcl also uses the same modified UTF-8[25] as Java for internal representation of Unicode data, but uses strict CESU-8 for external data.

Modified UTF-8? Or UTF-16? Which one is correct? And how many bytes does Java use for a char in memory?

Please let me know which one is correct and how many bytes it uses.

Johan
  • 74,508
  • 24
  • 191
  • 319
Johnny Lim
  • 5,623
  • 8
  • 38
  • 53
  • http://stackoverflow.com/questions/4655250/difference-between-utf-8-and-utf-16, this might answer your question. – Rahul Borkar Mar 14 '12 at 09:32
  • 2
    What Java uses and what the JVM uses in-memory doesn't have to be the same. See my answer. – Peter Lawrey Mar 14 '12 at 09:36
  • 1
    your main source of (official) information about Java should be http://java.sun.com ! (_despite stackoverflow_) – user85421 Mar 14 '12 at 10:01
  • @CarlosHeuberger You're definitely right! Thanks for the advice :-) – Johnny Lim Mar 14 '12 at 11:18
  • Beware that the Java language specification explicitly doesn't define how strings are stored when in use, just that they are immutable (and there are some hints that they may be interned). So any answer should explicitly list the runtime, and since most of them do not, those are all tosh. – Maarten Bodewes Oct 09 '21 at 19:18

7 Answers

62

Java uses UTF-16 for the internal text representation

The representation for String, StringBuilder, etc. in Java is UTF-16.

https://docs.oracle.com/javase/8/docs/technotes/guides/intl/overview.html

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.

At the JVM level, if you are using -XX:+UseCompressedStrings (which is the default in some updates of Java 6), the actual in-memory representation can be 8-bit ISO-8859-1, but only for strings which do not need UTF-16 encoding.

http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html

and supports a non-standard modification of UTF-8 for string serialization.

Serialized Strings use a modified UTF-8 by default.
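
To see the "modified" part concretely, here is a minimal sketch (class name is illustrative) using DataOutputStream.writeUTF, which uses the same modified UTF-8 as the serialization format: U+0000 comes out as the two-byte sequence C0 80 rather than a single 00 byte.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ModifiedUtf8Demo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeUTF("\u0000"); // a single NUL character
        }
        for (byte b : buf.toByteArray()) {
            System.out.printf("%02X ", b);
        }
        // Prints: 00 02 C0 80
        // 00 02 is the encoded-byte-length prefix; C0 80 is modified UTF-8 for U+0000.
        // Standard UTF-8 encodes U+0000 as a single 00 byte.
    }
}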

And how many bytes does Java use for a char in memory?

A char is always two bytes, if you ignore the need for padding in an Object.

Note: a code point (which allows characters > 65535) can use one or two chars, i.e. 2 or 4 bytes.
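
For example, a small self-contained sketch (standard API only; class name is illustrative):

public class CodePointDemo {
    public static void main(String[] args) {
        String bmp = "A";              // U+0041, a BMP character
        String smile = "\uD83D\uDE00"; // U+1F600, stored as a surrogate pair

        System.out.println(bmp.length());   // 1 char  -> 2 bytes of UTF-16
        System.out.println(smile.length()); // 2 chars -> 4 bytes of UTF-16
        System.out.println(smile.codePointCount(0, smile.length())); // 1 code point
    }
}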

Peter Lawrey
  • 525,659
  • 79
  • 751
  • 1,130
  • 2
    Java serialization (and class-files) [use modified CESU-8 though](http://en.wikipedia.org/wiki/UTF-8#Modified_UTF-8), which is a modified UTF-8. – Deduplicator Mar 02 '15 at 11:10
  • New URL: https://docs.oracle.com/javase/8/docs/api/java/lang/String.html Note: Java 9 should be out next year. ;) – Peter Lawrey Nov 16 '15 at 23:18
  • Can you elobrate on alignment issues ? – Koray Tugay Jan 21 '16 at 17:24
  • 1
    @KorayTugay good question. This was 3 years ago but I think I was referring to padding in an object. Adding one char field could add up to 8 bytes with padding / object alignment. – Peter Lawrey Jan 21 '16 at 20:06
  • What endianness is used for the UTF-16? Also, you should mention that a Java `char` only supports BMP code points. – Praxeolitic Jan 21 '18 at 02:43
  • 2
    @Praxeolitic the endianness is whatever is native to the processor. Generally little but it should almost never matter. – Peter Lawrey Jan 21 '18 at 07:58
  • 2
    This answer is outdated. Generally you should not presume to know what the internal representation looks like. If this answer wants to be saved and not report BS, it should be updated with a specific runtime or runtimes for which this is the case. – Maarten Bodewes Oct 09 '21 at 19:15
38

You can confirm the following by looking at the source code of the relevant version of the java.lang.String class in OpenJDK. (For some really old versions of Java, String was partly implemented in native code. That source code is not publicly available.)

Prior to Java 9, the standard in-memory representation for a Java String was UTF-16 code-units held in a char[].

With Java 6 update 21 and later, there was a non-standard option (-XX:+UseCompressedStrings) to enable compressed strings. This feature was removed in Java 7.

For Java 9 and later, the implementation of String has been changed to use a compact representation by default. The java command documentation now says this:

-XX:-CompactStrings

Disables the Compact Strings feature. By default, this option is enabled. When this option is enabled, Java Strings containing only single-byte characters are internally represented and stored as single-byte-per-character Strings using ISO-8859-1 / Latin-1 encoding. This reduces, by 50%, the amount of space required for Strings containing only single-byte characters. For Java Strings containing at least one multibyte character: these are represented and stored as 2 bytes per character using UTF-16 encoding. Disabling the Compact Strings feature forces the use of UTF-16 encoding as the internal representation for all Java Strings.
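
The saving is easy to illustrate at the API level. The sketch below does not inspect the real internal array; it merely encodes the same text both ways to show the 1-byte-versus-2-byte difference the quoted documentation describes (class name is illustrative):

import java.nio.charset.StandardCharsets;

public class CompactIllustration {
    public static void main(String[] args) {
        String latin = "héllo";       // every char fits in ISO-8859-1 / Latin-1
        String mixed = "héllo\u65E5"; // U+65E5 (日) does not fit in Latin-1

        System.out.println(latin.getBytes(StandardCharsets.ISO_8859_1).length); // 5
        System.out.println(latin.getBytes(StandardCharsets.UTF_16LE).length);   // 10
        System.out.println(mixed.getBytes(StandardCharsets.UTF_16LE).length);   // 12
    }
}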


Note that neither classical, "compressed", nor "compact" strings ever used UTF-8 encoding as the String representation. Modified UTF-8 is used in other contexts; e.g. in class files and in the object serialization format.

To answer your specific questions:

Modified UTF-8? Or UTF-16? Which one is correct?

Either UTF-16 or an adaptive representation that depends on the actual data; see above.

And how many bytes does Java use for a char in memory?

A single char uses 2 bytes. There might be some "wastage" due to possible padding, depending on the context.

A char[] is 2 bytes per character plus the object header (typically 12 bytes including the array length) padded to (typically) a multiple of 8 bytes.
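
As a rough sketch of that arithmetic, using the 12-byte header and 8-byte alignment figures quoted above (real numbers vary by JVM and settings; class and method names are illustrative):

public class CharArrayFootprint {
    // Estimated shallow size of a char[] of the given length,
    // assuming a 12-byte header (as above) and 8-byte alignment.
    static long estimatedBytes(int length) {
        long raw = 12 + 2L * length; // header + 2 bytes per UTF-16 code unit
        return (raw + 7) / 8 * 8;    // round up to a multiple of 8
    }

    public static void main(String[] args) {
        System.out.println(estimatedBytes(0));  // 16
        System.out.println(estimatedBytes(5));  // 24
        System.out.println(estimatedBytes(10)); // 32
    }
}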

Please let me know which one is correct and how many bytes it uses.

If we are talking about a String now, it is not possible to give a general answer. It will depend on the Java version and hardware platform, as well as the String length and (in some cases) what the characters are. Indeed, for some versions of Java it even depends on how you created the String.


Having said all of the above, the API model for String is that it is both a sequence of UTF-16 code-units and a sequence of Unicode code-points. As a Java programmer, you should be able to ignore everything that happens "under the hood". The internal String representation is (should be!) irrelevant.
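
A short sketch showing those two views of the same string (class name is illustrative):

public class TwoViews {
    public static void main(String[] args) {
        String s = "x\uD83D\uDE00"; // 'x' followed by U+1F600

        // View 1: three UTF-16 code units.
        s.chars().forEach(cu -> System.out.printf("unit:  U+%04X%n", cu));

        // View 2: two Unicode code points.
        s.codePoints().forEach(cp -> System.out.printf("point: U+%04X%n", cp));
    }
}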

Stephen C
  • 698,415
  • 94
  • 811
  • 1,216
11

UTF-16.

From http://java.sun.com/javase/technologies/core/basic/intl/faq.jsp :

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.
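
The "unsigned 16-bit integer" part of that quote can be observed directly (class name is illustrative):

public class CharRange {
    public static void main(String[] args) {
        System.out.println((int) Character.MIN_VALUE); // 0
        System.out.println((int) Character.MAX_VALUE); // 65535, i.e. U+FFFF
        System.out.println(Character.SIZE);            // 16 bits per char
    }
}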

Andreas Johansson
  • 1,135
  • 6
  • 23
  • 1
    The FAQ that is linked in this answer no longer exists. The closest I can find is this: https://docs.oracle.com/javase/8/docs/technotes/guides/intl/overview.html. But note that if you *carefully* parse both the quoted text and the link I found, neither *actually* says what the internal `String` representation is. (They say that a String represents a `char` sequence, but that isn't the same thing.) In fact ... for recent Java implementations, the default implementation of `String` uses a `byte[]` rather than a `char[]` internally. You can check the OpenJDK source code to see. – Stephen C Feb 03 '21 at 09:41
2

The size of a char is 2 bytes.

Therefore, I would say that Java uses UTF-16 for internal String representation.
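
For what it's worth, the standard library states this size directly; a one-line check (assuming Java 8+, which added Character.BYTES):

public class CharSize {
    public static void main(String[] args) {
        System.out.println(Character.BYTES); // 2 -- the size of a char in bytes
    }
}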

belgther
  • 2,544
  • 17
  • 15
0

As of 2023, see JEP 254: Compact Strings https://openjdk.org/jeps/254

Before JDK 9 it was UTF-16 char value[], usually 2 bytes per char, 4 bytes for Asian (Chinese, Japanese 日本)

Since JDK 9 it is UTF-8 byte[]
e.g. 1 byte for ASCII/Latin, 2 bytes for Áá Àà Ăă Ắắ Ằằ Ẵẵ (letters with diacritics), 4 bytes for Asian (Chinese, Japanese 日本)
It is still possible to disable the Compact Strings feature with -XX:-CompactStrings;
see the documentation for the java command: https://docs.oracle.com/en/java/javase/17/docs/specs/man/java.html#advanced-runtime-options-for-java

and the article https://howtodoinjava.com/java9/compact-strings/


String class BEFORE Java 9

Prior to Java 9, string data was stored as an array of chars. This required 16 bits for each char.

public final class String
    implements java.io.Serializable, Comparable<String>, CharSequence {

    // The value is used for character storage.
    private final char value[];

}

String class AFTER Java 9

Starting with Java 9, strings are internally represented using a byte array, along with a flag field identifying the encoding.

public final class String
    implements java.io.Serializable, Comparable<String>, CharSequence {

    /** The value is used for character storage. */
    @Stable
    private final byte[] value;

    /**
     * The identifier of the encoding used to encode the bytes in
     * {@code value}. The supported values in this implementation are
     *
     * LATIN1
     * UTF16
     *
     * @implNote This field is trusted by the VM, and is a subject to
     * constant folding if String instance is constant. Overwriting this
     * field after construction will cause problems.
     */
    private final byte coder;

}
Paul Verest
  • 60,022
  • 51
  • 208
  • 332
  • "Since JDK 9 it is UTF-8 byte[]" this is not correct. Java 9 and newer do not use UTF-8 encoding internally. Since Java 9, String uses Latin1 if possible, and falls back to UTF-16 if there are any non-Latin1 characters in the String data. – Ashton May 10 '23 at 07:41
  • Latin1 is a subset of UTF-8. "falls back to UTF-16" is IMHO incorrect: it is UTF-8 being used, and it takes 2 or more bytes for non-Latin1 characters. It is not possible to mix UTF-16 with other encodings in one String; that would be some other non-standard thing. – Paul Verest May 11 '23 at 10:07
  • Latin1 is not a subset of UTF-8, they are two completely incompatible encodings. Java does not use UTF-8 for Strings, and does not mix encodings – each string has a flag that indicates whether it's in Latin1 or UTF-16. – Karol S Aug 21 '23 at 23:13
-6

Java stores strings internally as UTF-16 and uses 2 bytes for each character.

AlexR
  • 114,158
  • 16
  • 130
  • 208
  • 11
    This answer is incorrect. Because Java uses UTF-16, each Unicode character is either 2 bytes or 4 bytes. – tchrist Mar 17 '12 at 14:08
  • @tchrist How can a UTF-16 encoding end up in 4 bytes? Isn't UTF-16 always 2 bytes? – Koray Tugay Jan 28 '16 at 14:40
  • 5
    @KorayTugay No, UTF-16 is either 2 bytes or 4 bytes. It is a variable-width encoding just like UTF-8. Only the obsolete UCS-2 is 2 bytes, and that's long dead. – tchrist Jan 29 '16 at 01:15
  • 1
    The code unit of UTF-16 is always 2 bytes. But a character itself needs 1 or 2 code units, hence 2 or 4 bytes. – Ludovic Kuty Jan 22 '19 at 09:30
  • 1
    @LudovicKuty a "character" is a rendering and language-specific concept - it can take up a *large* number of codepoints to compose a single character, so a character can take up hundreds of bytes. So it's more like "The codepoint itself - in UTF-16 - needs 2 or 4 bytes" Try an internet search for "unicode composition." You generally only care about "characters" - like at what codepoint a character begins or how many characters are in a string - if you're building a UI framework or implementing rendering logic. – matvore Aug 13 '20 at 20:49
  • 1
    Yes, the codepoint, my bad. The notion of a character is quite abstract in the Unicode standard (if I remember correctly). – Ludovic Kuty Aug 14 '20 at 05:35
-7

Java is available in 18 international languages and follows the Unicode character set, which contains all the characters available in those 18 languages and contains 65,536 characters. And since Java follows UTF-16, the size of a char in Java is 2 bytes.

  • 2
    The size of a Unicode character in Java varies between 2 bytes and 4 bytes, depending on whether we’re in plane 0 or not. – tchrist Mar 17 '12 at 14:09
  • A `char` is 2 bytes, but a character (char with no typewriter font) is 2 or 4 bytes, as @tchrist mentioned – Ludovic Kuty Jan 22 '19 at 09:31