9

According to the Java documentation for String.length:

public int length()

Returns the length of this string.

The length is equal to the number of Unicode code units in the string.

Specified by:

length in interface CharSequence

Returns:

the length of the sequence of characters represented by this object.

But then I don't understand why the following program, HelloUnicode.java, produces different results on different platforms. According to my understanding, the number of Unicode code units should be the same, since Java supposedly always represents strings in UTF-16:

public class HelloUnicode {

    public static void main(String[] args) {
        String myString = "I have a 🙂 in my string";
        System.out.println("String: " + myString);
        System.out.println("Bytes: " + bytesToHex(myString.getBytes()));
        System.out.println("String Length: " + myString.length());
        System.out.println("Byte Length: " + myString.getBytes().length);
        System.out.println("Substring 9 - 13: " + myString.substring(9, 13));
        System.out.println("Substring Bytes: " + bytesToHex(myString.substring(9, 13).getBytes()));
    }

    // Code from https://stackoverflow.com/a/9855338/4019986
    private final static char[] hexArray = "0123456789ABCDEF".toCharArray();
    public static String bytesToHex(byte[] bytes) {
        char[] hexChars = new char[bytes.length * 2];
        for ( int j = 0; j < bytes.length; j++ ) {
            int v = bytes[j] & 0xFF;
            hexChars[j * 2] = hexArray[v >>> 4];
            hexChars[j * 2 + 1] = hexArray[v & 0x0F];
        }
        return new String(hexChars);
    }

}

The output of this program on my Windows box is:

String: I have a 🙂 in my string
Bytes: 492068617665206120F09F998220696E206D7920737472696E67
String Length: 26
Byte Length: 26
Substring 9 - 13: 🙂
Substring Bytes: F09F9982

The output on my CentOS 7 machine is:

String: I have a 🙂 in my string
Bytes: 492068617665206120F09F998220696E206D7920737472696E67
String Length: 24
Byte Length: 26
Substring 9 - 13: 🙂 i
Substring Bytes: F09F99822069

I ran both with Java 1.8. Same byte length, different String length. Why?

UPDATE

By replacing the "🙂" in the string with "\uD83D\uDE42", I get the following results:

Windows:

String: I have a ? in my string
Bytes: 4920686176652061203F20696E206D7920737472696E67
String Length: 24
Byte Length: 23
Substring 9 - 13: ? i
Substring Bytes: 3F2069

CentOS:

String: I have a 🙂 in my string
Bytes: 492068617665206120F09F998220696E206D7920737472696E67
String Length: 24
Byte Length: 26
Substring 9 - 13: 🙂 i
Substring Bytes: F09F99822069

Why "\uD83D\uDE42" ends up being encoded as 0x3F on the Windows machine is beyond me...

Java Versions:

Windows:

java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

CentOS:

openjdk version "1.8.0_201"
OpenJDK Runtime Environment (build 1.8.0_201-b09)
OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode)

Update 2

Using .getBytes("utf-8"), with the "🙂" embedded in the string literal, here are the outputs.

Windows:

String: I have a 🙂 in my string
Bytes: 492068617665206120C3B0C5B8E284A2E2809A20696E206D7920737472696E67
String Length: 26
Byte Length: 32
Substring 9 - 13: 🙂
Substring Bytes: C3B0C5B8E284A2E2809A

CentOS:

String: I have a 🙂 in my string
Bytes: 492068617665206120F09F998220696E206D7920737472696E67
String Length: 24
Byte Length: 26
Substring 9 - 13: 🙂 i
Substring Bytes: F09F99822069

So yes, it appears to be a difference in system encoding. But then that means string literals are encoded differently on different platforms? That sounds like it could be problematic in certain situations.

Also... where is the byte sequence C3B0C5B8E284A2E2809A coming from to represent the smiley in Windows? That doesn't make sense to me.

For completeness, using .getBytes("utf-16"), with the "🙂" embedded in the string literal, here are the outputs.

Windows:

String: I have a 🙂 in my string
Bytes: FEFF00490020006800610076006500200061002000F001782122201A00200069006E0020006D007900200073007400720069006E0067
String Length: 26
Byte Length: 54
Substring 9 - 13: 🙂
Substring Bytes: FEFF00F001782122201A

CentOS:

String: I have a 🙂 in my string
Bytes: FEFF004900200068006100760065002000610020D83DDE4200200069006E0020006D007900200073007400720069006E0067
String Length: 24
Byte Length: 50
Substring 9 - 13: 🙂 i
Substring Bytes: FEFFD83DDE4200200069
NanoWizard
  • Please show exact contents of byte arrays (ideally in hex) and use `\uD83D\uDE42` sequence instead of 🙂 in code – Michal Kordas May 21 '19 at 14:29
  • @MichalKordas Thanks for the recommendations. I addressed them in my update to the question. – NanoWizard May 21 '19 at 15:59
  • can you use `getBytes("UTF-8")` and `getBytes("UTF-16")`? Also make sure STDOUT uses UTF-8 as well (or even better, write to a file instead, with a specified encoding). – Thilo May 21 '19 at 16:10
  • "Why "\uD83D\uDE42" ends up being encoded as 0x3F on the Windows machine is beyond me..." 0x3F is the question mark. Java puts this in when it is asked to output invalid characters. So it looks like it just replaced your smiley with a ? because you did not specify Unicode in `getBytes`, so it defaulted to the platform encoding. – Thilo May 21 '19 at 16:20
  • @Thilo but it seems to have replaced "\uD83D\uDE42" with "?" at the point of interpretation of the string literal, not when being converted to bytes via `getBytes`. It seems the Windows Java just didn't know what to do with "\uD83D\uDE42". – NanoWizard May 21 '19 at 16:25
  • How do you know that the ? did not get injected by `getBytes()` ? – Thilo May 21 '19 at 16:27
  • @Thilo you were correct, I was misunderstanding my output. – NanoWizard May 21 '19 at 16:38

2 Answers

5

You have to be careful about specifying the encodings:

  • when you compile the Java file, the compiler reads the source file using some encoding. My guess is that this already broke your original String literal at compile time. This can be fixed by using the escape sequence.
  • after you use the escape sequence, the String.length results are the same. The bytes inside the String are also the same, but what you are printing out does not show that.
  • the bytes printed are different because you called getBytes(), which again uses the environment- or platform-specific encoding, so it was also broken (replacing the unencodable smiley with a question mark). You need to call getBytes("UTF-8") to be platform-independent; a minimal sketch follows this list.
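
A minimal sketch of the platform-independent version (the class name is illustrative; assumes Java 7+ for java.nio.charset.StandardCharsets):

import java.nio.charset.StandardCharsets;

public class EncodingSafe {

    public static void main(String[] args) {
        // The escape sequence removes any dependence on the source-file encoding
        String myString = "I have a \uD83D\uDE42 in my string";
        // An explicit charset removes any dependence on the platform default encoding
        byte[] utf8 = myString.getBytes(StandardCharsets.UTF_8);
        System.out.println("String Length: " + myString.length()); // 24 on every platform
        System.out.println("Byte Length: " + utf8.length);         // 26 on every platform
    }
}

Wrapping System.out in a PrintStream with an explicit charset, or writing the bytes to a file, likewise takes the console encoding out of the picture, as suggested in the comments.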

So to answer the specific questions posed:

Same byte length, different String length. Why?

Because the string literal is encoded by the Java compiler, and by default the compiler uses a different encoding on different systems. This can result in a different number of UTF-16 code units per Unicode character, and hence a different string length. Passing javac the same -encoding command-line option on both platforms will make them encode consistently.
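
To make that concrete, here is a small illustrative snippet (not part of the original answer; the class name is hypothetical). Compiled correctly, the smiley is one code point stored as a surrogate pair of two UTF-16 code units; misread as Cp1252, its four UTF-8 bytes become four separate characters:

public class LengthDemo {

    public static void main(String[] args) {
        // The literal as it should be compiled: U+1F642 as a surrogate pair
        String correct = "I have a \uD83D\uDE42 in my string";
        System.out.println(correct.length());                            // 24 UTF-16 code units
        System.out.println(correct.codePointCount(0, correct.length())); // 23 Unicode code points

        // What a Cp1252 compiler effectively produces: the smiley's four
        // UTF-8 bytes read as four separate Cp1252 characters
        String misread = "I have a \u00F0\u0178\u2122\u201A in my string";
        System.out.println(misread.length());                            // 26 UTF-16 code units
    }
}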

Why "\uD83D\uDE42" ends up being encoded as 0x3F on the Windows machine is beyond me...

It's not encoded as 0x3F in the string itself. 0x3F is the question mark. Java substitutes it when it is asked to output, via System.out.println or getBytes, a character that the target encoding cannot represent. That is what happened here: the escaped smiley was stored correctly in the string, but the platform default encoding could not represent it when printing to the console and when calling getBytes.
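
The replacement behaviour is easy to reproduce on any platform by encoding with a charset that cannot represent the smiley (an illustrative sketch; the class name is hypothetical, and windows-1252 stands in for the Windows default named in the comments):

import java.nio.charset.Charset;

public class ReplacementDemo {

    public static void main(String[] args) {
        String s = "I have a \uD83D\uDE42 in my string"; // 24 UTF-16 code units
        byte[] bytes = s.getBytes(Charset.forName("windows-1252"));
        // The smiley cannot be encoded in windows-1252, so the whole code point
        // is replaced with the charset's replacement byte 0x3F ('?')
        System.out.println(bytes.length); // 23, matching the Windows "Byte Length" above
    }
}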

But then that means string literals are encoded differently on different platforms?

By default, yes.

Also... where is the byte sequence C3B0C5B8E284A2E2809A coming from to represent the smiley in Windows?

This is quite convoluted. The "🙂" character (Unicode code point U+1F642) is stored in the Java source file with UTF-8 encoding as the byte sequence F0 9F 99 82. The Java compiler then reads the source file using the platform default encoding, Cp1252 (Windows-1252), so it treats these UTF-8 bytes as though they were Cp1252 characters, translating each byte to its Cp1252 character and producing the 4-character sequence U+00F0 U+0178 U+2122 U+201A. The getBytes("utf-8") call then encodes this 4-character string as UTF-8. Since every one of those characters is above hex 7F, each is encoded as 2 or more UTF-8 bytes, which is why the resulting byte sequence is so long. The sequence itself is not meaningful; it is just the result of applying the wrong encoding.
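
That round trip can be reproduced explicitly on any platform (an illustrative sketch; the class name is hypothetical, and Charset.forName("windows-1252") stands in for the Windows default encoding):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {

    public static void main(String[] args) {
        // The bytes actually stored in the UTF-8 source file for U+1F642
        byte[] sourceBytes = "\uD83D\uDE42".getBytes(StandardCharsets.UTF_8);      // F0 9F 99 82
        // What the compiler does when it misreads them with the platform default
        String misread = new String(sourceBytes, Charset.forName("windows-1252")); // 4 chars: U+00F0 U+0178 U+2122 U+201A
        // getBytes("utf-8") then re-encodes the broken 4-character string
        byte[] reencoded = misread.getBytes(StandardCharsets.UTF_8);
        StringBuilder hex = new StringBuilder();
        for (byte b : reencoded) {
            hex.append(String.format("%02X", b & 0xFF));
        }
        System.out.println(hex); // C3B0C5B8E284A2E2809A
    }
}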

DodgyCodeException
Thilo
  • The point about checking source file encoding was a good one I hadn't thought about, but both files are in fact encoded with UTF-8. The only difference is the line endings (the Windows one uses CRLF, while CentOS uses just LF, of course). It still doesn't make sense to me why each platform would interpret the literal "🙂" differently... – NanoWizard May 21 '19 at 16:44
  • @NanoWizard I suspect you're still using platform-dependent source encoding on Windows. I've just tried it on Windows with javac 1.8.0_212 with the command `javac -encoding utf-8 ` using your code cut-and-pasted into IntelliJ saved as UTF-8, and the reported string length was indeed 24, same as on CentOS. Make sure you use the `javac -encoding` command-line option! – DodgyCodeException May 21 '19 at 17:05
  • More likely cp1252, which is the default on US and 'Western' versions of Windows. U+1F642 in UTF-8 is F0 9F 99 82, and those bytes interpreted as cp1252 are U+00F0 U+0178 U+2122 U+201A, which are then UTF-8 encoded as C3 B0, C5 B8, E2 84 A2, E2 80 9A. In cp1250 the 9F would instead be U+017A and encoded to C5 BA. – dave_thompson_085 May 22 '19 at 08:23
  • Good point @dave_thompson_085. I had actually determined cp1252 was the system encoding then erroneously wrote cp1250 when adding to this answer. – NanoWizard May 22 '19 at 13:53
2

You didn't take into account that getBytes() returns the bytes in the platform's default encoding, which is different on Windows and CentOS.

See also How to Find the Default Charset/Encoding in Java? and the API documentation on String.getBytes().
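
For illustration only (not part of the original answer; the class name is hypothetical), the platform dependence can be made visible with Charset.defaultCharset() and avoided by passing an explicit charset:

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DefaultCharsetDemo {

    public static void main(String[] args) {
        System.out.println(Charset.defaultCharset()); // e.g. windows-1252 on the Windows box, UTF-8 on the CentOS box
        String s = "I have a \uD83D\uDE42 in my string";
        System.out.println(s.getBytes().length);                       // depends on the default charset (23 vs. 26 in the question)
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // always 26
    }
}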

Björn Zurmaar
  • How does this explain the different length? – Michal Kordas May 21 '19 at 14:32
  • The length is not different after using the escape sequence. It is different before, because the platform encoding was used during compilation (and that broke on Windows already). – Thilo May 21 '19 at 16:22
  • Exactly, so I think this answer does not explain the problem – Michal Kordas May 21 '19 at 17:16
  • It explains a different aspect though, which is important for a full understanding and contains a valuable lesson: in the first example the bytes are identical. So if the bytes are identical, how can the string length be different? The reason is the missing encoding in getBytes(). Lesson: NEVER omit the encoding, or be ready for nasty surprises. The same goes for omitting the time zone in date/time APIs. – Christian Esken May 20 '20 at 14:15