
I've read in some places that JavaScript strings are UTF-16, and in other places that they're UCS-2. I did some searching around to try to figure out the difference and found this:

Q: What is the difference between UCS-2 and UTF-16?

A: UCS-2 is obsolete terminology which refers to a Unicode implementation up to Unicode 1.1, before surrogate code points and UTF-16 were added to Version 2.0 of the standard. This term should now be avoided.

UCS-2 does not define a distinct data format, because UTF-16 and UCS-2 are identical for purposes of data exchange. Both are 16-bit, and have exactly the same code unit representation.

Sometimes in the past an implementation has been labeled "UCS-2" to indicate that it does not support supplementary characters and doesn't interpret pairs of surrogate code points as characters. Such an implementation would not handle processing of character properties, code point boundaries, collation, etc. for supplementary characters.

via: http://www.unicode.org/faq/utf_bom.html#utf16-11

So my question is: is it the fact that the JavaScript string object's methods and indexes act on 16-bit data values instead of characters that makes some people consider it UCS-2? And if so, would a JavaScript string object oriented around characters instead of 16-bit data chunks be considered UTF-16? Or is there something else I'm missing?
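
To make that concrete, here's the kind of behavior I mean, sketched with U+1D306 (a character outside the Basic Multilingual Plane, so it needs a surrogate pair):

const s = "\uD834\uDF06"; // U+1D306 TETRAGRAM FOR CENTRE, a single character
s.length;        // 2 -- counts 16-bit units, not characters
s.charCodeAt(0); // 55348 (0xD834) -- the high surrogate, not the code point
s.charAt(0);     // "\uD834" -- half a character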

Edit: As requested, here are some sources saying JavaScript strings are UCS-2:

http://blog.mozilla.com/nnethercote/2011/07/01/faster-javascript-parsing/
http://terenceyim.wordpress.com/tag/ucs2/

EDIT: For anyone who may come across this, be sure to check out this link:

http://mathiasbynens.be/notes/javascript-encoding

patorjk

5 Answers

JavaScript, or strictly speaking ECMAScript, pre-dates Unicode 2.0, so in some cases you may find references to UCS-2 simply because that was correct at the time the reference was written. Can you point us to specific citations of JavaScript being "UCS-2"?

Specifications for ECMAScript versions 3 and 5, at least, both explicitly declare a String to be a collection of unsigned 16-bit integers and state that if those integer values are meant to represent textual data, then they are UTF-16 code units. See §8.4, "The String Type", in the ECMA-262 specification.

EDIT: I'm no longer sure my answer is entirely correct. See the excellent article mentioned above, which in essence says that while a JavaScript engine may use UTF-16 internally, and most do, the language itself effectively exposes those characters as if they were UCS-2.
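
A minimal sketch of that "exposed as if UCS-2" behavior: any operation that works code unit by code unit will happily split a surrogate pair.

const pair = "\uD83D\uDE00"; // U+1F600, one character encoded as two code units
pair.length;                       // 2
pair.split("").reverse().join(""); // "\uDE00\uD83D" -- surrogates reordered, now malformed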

dgvid
  • Thank you for the link, the language of the spec seems pretty clear. I think then that UCS-2 talk is either old or based on the method and indexing support for surrogate pairs. – patorjk Jan 03 '12 at 18:23
  • So, the specification states "Each integer value in the sequence usually represents a single 16-bit unit of UTF-16 text. However, ECMAScript does not place any restrictions or requirements on the values except that they must be 16-bit unsigned integers.", which is equivalent to saying that in modern C programs each character value in a character array "usually" represents a single 8-bit unit of UTF-8 text, but obviously stating that C strings "are" UTF-8 would be wrong. The semantics JavaScript provides are only UCS-2; if you want UTF-16 support you must do so yourself, as per DMoses's answer. – Jay Freeman -saurik- Dec 11 '12 at 04:34
  • UCS is the thing with the numbers, and yes, UCS-2 is outdated; the current version is UCS-4. UTF-8/-16/-32 are ways to represent arrays of UCS thingies in bits. ;) – Philip Jun 11 '17 at 09:27

It's UTF-16/UCS-2. It can handle surrogate pairs, but charAt/charCodeAt return a 16-bit code unit and not the Unicode code point. If you want to have it handle surrogate pairs, I suggest a quick read through this.
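
A sketch of the kind of handling meant here: decoding a surrogate pair into a single code point by hand. The helper name is mine; the arithmetic is the standard UTF-16 decoding.

// Decode the code point starting at index i, pairing surrogates manually
function codePointAtManual(str, i) {
  const hi = str.charCodeAt(i);
  // Not a high surrogate, or nothing follows it: return the unit as-is
  if (hi < 0xD800 || hi > 0xDBFF || i + 1 >= str.length) return hi;
  const lo = str.charCodeAt(i + 1);
  if (lo < 0xDC00 || lo > 0xDFFF) return hi; // unpaired high surrogate
  // Each surrogate contributes 10 bits of the supplementary code point
  return (hi - 0xD800) * 0x400 + (lo - 0xDC00) + 0x10000;
}

codePointAtManual("\uD834\uDF06", 0).toString(16); // "1d306"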

Daniel Moses
  • What do you mean by "it can handle surrogate pairs"? – cubuspl42 Oct 22 '15 at 21:48
  • If you read the article linked it will describe how to have it handle surrogate pairs. My point is that it doesn't error out by default, and there are ways to handle surrogate pairs as shown in the code on the link provided. – Daniel Moses Oct 23 '15 at 19:30
  • @cubuspl42 UTF-16 isn't limited to 0x0-0xFFFF; it can encode pairs of 16-bit code units and represent the entire Unicode range from 0x0 to 0x10FFFF, over a million code points. These pairs are called "surrogate pairs" (see the sketch below). – doug65536 Jan 22 '17 at 12:20
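
To make the pairing concrete, here is the encoding direction doug65536 describes, worked by hand for U+1F600:

const cp = 0x1F600;
const offset = cp - 0x10000;           // 0xF600 -- 20 bits, split across two units
const high = 0xD800 + (offset >> 10);  // 0xD83D
const low = 0xDC00 + (offset & 0x3FF); // 0xDE00
String.fromCharCode(high, low);        // "😀", i.e. "\uD83D\uDE00"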

It's just a sequence of 16-bit values; no encoding is specified in the ECMAScript standard.

See section 7.8.4 String Literals in this document: http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf
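
In other words, a string may hold any sequence of 16-bit values, including ones that make no sense as UTF-16. A quick sketch:

const notText = "\uD800A\uDFFF"; // two lone surrogates around a plain "A"
notText.length;        // 3 -- perfectly legal as far as the language is concerned
notText.charCodeAt(0); // 55296 (0xD800)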

Daniel A. White

Things have changed since 2012. JavaScript strings are now UTF-16 for real. Yes, the old string methods still work on 16-bit code units, but the language is now aware of UTF-16 surrogates and knows what to do about them if you use the string iterator. There's also Unicode regex support.

// Before
"😀😂💩".length // 6

// Now
[..."😀😂💩"].length // 3
[..."😀😂💩"] // [ '😀', '😂', '💩' ]
[..."😀😂💩".matchAll(/./ug)] // 3 matches as above

// Regexes support Unicode character classes
"café".normalize("NFD").match(/\p{L}\p{M}/ug) // [ 'é' ]

// Extract code points
[..."😀😂💩"].map(char => char.codePointAt(0).toString(16)) // [ '1f600', '1f602', '1f4a9' ]
alextgordon
  • Without at least naming example versions/engines this is not helping in terms of avoiding implementations that couldn't/still can't do this. – AmigoJack Oct 29 '22 at 02:42
  • The way you put it is misleading. While it's true that the string's `@@iterator` iterates over code points, JavaScript strings are not stored as code points; the `.length` is still 6. – Константин Ван May 26 '23 at 14:15

You need to differentiate between how a string is stored and how it is interpreted.

In JavaScript, a string is a sequence of 16-bit unsigned integers that is usually, but not necessarily, interpreted as a UTF-16-encoded character sequence. A string itself is encoding-less; your code, standard JavaScript methods, or REPL terminals may interpret it in whatever encoding they want.

The thirteenth edition of ECMA-262 (ECMAScript® 2022 language specification)

§4.4.20 String value

primitive value that is a finite ordered sequence of zero or more 16-bit unsigned integer values

NOTE A String value is a member of the String type. Each integer value in the sequence usually represents a single 16-bit unit of UTF-16 text. However, ECMAScript does not place any restrictions or requirements on the values except that they must be 16-bit unsigned integers.

Because of this, JavaScript strings can contain, with no problems, a value sequence that is invalid in UTF-16, such as lone ("unmatched") surrogates.

const javascript_string = "\uDF06"; // a lone surrogate
javascript_string.isWellFormed(); // false
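
For completeness, the companion method that ships alongside isWellFormed can repair such a string by replacing each lone surrogate with U+FFFD:

javascript_string.toWellFormed();                // "\uFFFD" -- the lone surrogate is replaced
javascript_string.toWellFormed().isWellFormed(); // true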