Why does "\uFFFF"
(which is apparently 2 bytes long) convert to [-17,-65,-65] in UTF-8 and not [-1,-1]?
System.out.println(Arrays.toString("\uFFFF".getBytes(StandardCharsets.UTF_8)));
Is this because UTF-8 only uses 6 bits of each byte for the code point's payload once the code point is larger than 127?
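
For reference, here is my own rough sketch (class name Utf8Demo is just for this example), assuming the standard three-byte UTF-8 layout 1110xxxx 10xxxxxx 10xxxxxx, which reproduces the same bytes from the code point by hand:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8Demo {
    public static void main(String[] args) {
        int cp = 0xFFFF; // code point U+FFFF

        // Three-byte UTF-8 layout: 1110xxxx 10xxxxxx 10xxxxxx
        byte b0 = (byte) (0xE0 | (cp >> 12));          // leading byte: top 4 bits
        byte b1 = (byte) (0x80 | ((cp >> 6) & 0x3F));  // continuation: middle 6 bits
        byte b2 = (byte) (0x80 | (cp & 0x3F));         // continuation: low 6 bits

        // prints [-17, -65, -65], i.e. 0xEF 0xBF 0xBF as signed bytes
        System.out.println(Arrays.toString(new byte[] {b0, b1, b2}));

        // same result from the library encoder
        System.out.println(Arrays.toString("\uFFFF".getBytes(StandardCharsets.UTF_8)));
    }
}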