I was trying to see the UTF-8 bytes of 👍 in both Java and JavaScript.
In JavaScript,
new TextEncoder().encode("👍");
returns => Uint8Array [240, 159, 145, 141]
while in Java,
"".getBytes("UTF-8")
returns => [-16, -97, -111, -115]
I converted both byte arrays to hex strings, using methods I found for each language (JS, Java), and both produced F09F918D.
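
For reference, the Java side of that conversion can be reproduced with a short sketch like the one below (the class name HexDemo is just for illustration, not the exact method I used); the key step is masking each signed byte with 0xFF before formatting:

    import java.nio.charset.StandardCharsets;

    public class HexDemo {
        public static void main(String[] args) {
            byte[] bytes = "👍".getBytes(StandardCharsets.UTF_8); // [-16, -97, -111, -115]
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) {
                // b & 0xFF widens the signed byte to an int in the range 0..255
                hex.append(String.format("%02X", b & 0xFF));
            }
            System.out.println(hex); // prints F09F918D
        }
    }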
In fact, -16 & 0xFF
gives => 240
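
Going the other way, a narrowing cast shows that the same bit pattern (0xF0) prints as -16 in Java. A tiny self-contained demo (again, just an illustration):

    public class SignDemo {
        public static void main(String[] args) {
            byte b = (byte) 240;          // 240 is 0xF0; the cast keeps the bits
            System.out.println(b);        // -16: two's-complement reading of 0xF0
            System.out.println(b & 0xFF); // 240: masking recovers the unsigned value
        }
    }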
I am curious to know more about why the two languages choose different ways of representing byte arrays. It took me a while to figure out even this much.