
Why does "\uFFFF" (which is apparently 2 bytes long) convert to [-17,-65,-65] in UTF-8 and not [-1,-1]?

System.out.println(Arrays.toString("\uFFFF".getBytes(StandardCharsets.UTF_8)));

Is this because UTF-8 uses only 6 bits in every byte for code points larger than 127?

Sionnach733
Roman

2 Answers


0xFFFF has a bit pattern of 11111111 11111111. Divide up the bits according to the UTF-8 rules for a 3-byte sequence and the pattern becomes 1111 111111 111111. Now add UTF-8's prefix bits (1110, 10, and 10) and the pattern becomes 11101111 10111111 10111111, which is 0xEF 0xBF 0xBF, aka 239 191 191, aka -17 -65 -65 in two's complement format (which is what Java uses for signed values; Java does not have unsigned data types).
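
To see that arithmetic in running code, here is a small self-contained sketch (the class name Utf8Demo is just an illustrative choice) that builds the three bytes by hand and compares them with what getBytes produces:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8Demo {
    public static void main(String[] args) {
        int cp = 0xFFFF; // the code point U+FFFF

        // Manual 3-byte UTF-8 encoding: 1110xxxx 10xxxxxx 10xxxxxx
        byte b1 = (byte) (0xE0 | (cp >> 12));         // top 4 bits, 1110 prefix
        byte b2 = (byte) (0x80 | ((cp >> 6) & 0x3F)); // middle 6 bits, 10 prefix
        byte b3 = (byte) (0x80 | (cp & 0x3F));        // low 6 bits, 10 prefix

        System.out.println(Arrays.toString(new byte[] { b1, b2, b3 }));
        // prints [-17, -65, -65], i.e. 0xEF 0xBF 0xBF

        System.out.println(Arrays.toString("\uFFFF".getBytes(StandardCharsets.UTF_8)));
        // prints the same [-17, -65, -65]
    }
}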

Remy Lebeau

UTF-8 uses a different number of bytes depending on the character being represented. Code points up to 127 are encoded as a single byte that matches ASCII, for backwards compatibility. Other characters (like Chinese characters) can take 2 to 4 bytes.

As the linked Wikipedia article states, the character you referenced falls in the 3-byte range (U+0800 to U+FFFF).
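
For reference, a short sketch (the class name Utf8Lengths and the sample characters are just illustrative picks from each range) that prints how many UTF-8 bytes getBytes returns per code point:

import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    public static void main(String[] args) {
        // One sample code point from each UTF-8 length range
        String[] samples = {
            "A",              // U+0041, range U+0000..U+007F    -> 1 byte (ASCII)
            "\u00E9",         // U+00E9, range U+0080..U+07FF    -> 2 bytes
            "\uFFFF",         // U+FFFF, range U+0800..U+FFFF    -> 3 bytes
            "\uD83D\uDE00"    // U+1F600, range U+10000..U+10FFFF -> 4 bytes
        };
        for (String s : samples) {
            int cp = s.codePointAt(0);
            int len = s.getBytes(StandardCharsets.UTF_8).length;
            System.out.printf("U+%04X -> %d byte(s)%n", cp, len);
        }
    }
}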

nablex