
I'm happy to see the std::u16string and std::u32string in C++11, but I'm wondering why there is no std::u8string to handle the UTF-8 case. I'm under the impression that std::string is intended for UTF-8, but it doesn't seem to do it very well. What I mean is, doesn't std::string.length() still return the size of the string's buffer rather than the number of characters in the string?
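For example, something like this (just a small sketch of what I mean) seems to confirm it:

```cpp
#include <iostream>
#include <string>

int main() {
    std::string s = u8"\u00E9";      // "é": one character, two UTF-8 bytes
    std::cout << s.length() << '\n'; // prints 2, not 1
}
```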

So, how is the length() method of the standard strings defined for the new C++11 classes? Do they return the size of the string's buffer, the number of codepoints, or the number of characters (assuming a surrogate pair is 2 code points, but one character. Please correct me if I'm wrong)?

And what about size(); isn't it equal to length()? See http://en.cppreference.com/w/cpp/string/basic_string/length for the source of my confusion.

So, I guess, my fundamental question is how does one use std::string, std::u16string, and std::u32string and properly distinguish between buffer size, number of codepoints, and number of characters? If you use the standard iterators, are you iterating over bytes, codepoints, or characters?

Verax
    `std::string` works as well for utf8 as `u16string` does for utf16: it handles elements of the corresponding type, and doesn't deal with characters that are represented by a sequence of more than one element. – Pete Becker Sep 03 '12 at 16:30
  • Go here: http://utf8everywhere.org/#myth.strlen – Pavel Radzivilovsky Sep 04 '12 at 06:28

3 Answers


u16string and u32string are not "new C++11 classes". They're just typedefs of std::basic_string for the char16_t and char32_t types.
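A quick illustrative check of this (nothing more than a compile-time assertion):

```cpp
#include <string>
#include <type_traits>

// u16string and u32string are just basic_string instantiations:
static_assert(std::is_same<std::u16string, std::basic_string<char16_t>>::value,
              "u16string is basic_string<char16_t>");
static_assert(std::is_same<std::u32string, std::basic_string<char32_t>>::value,
              "u32string is basic_string<char32_t>");
```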

length is always equal to size for any basic_string. It is the number of T's in the string, where T is the template type for the basic_string.

basic_string is not Unicode-aware in any way, shape, or form. It has no concept of codepoints, graphemes, Unicode characters, Unicode normalization, or anything of the kind. It is simply an ordered sequence of Ts. The only thing that is Unicode-aware about u16string and u32string is that they use the character types produced by u"" and U"" literals. Thus, they can store Unicode-encoded strings, but they do nothing that requires knowledge of said encoding.
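For instance, here is a rough sketch of what that means for counting (assuming a C++11 compiler):

```cpp
#include <cassert>
#include <string>

int main() {
    // U+1F600 lies outside the BMP, so UTF-16 encodes it as a surrogate pair.
    std::u16string s = u"\U0001F600";
    assert(s.size() == 2);           // two char16_t code units, one codepoint
    assert(s.length() == s.size());  // length() and size() are synonyms

    std::u32string t = U"\U0001F600";
    assert(t.size() == 1);           // UTF-32 uses one char32_t per codepoint
}
```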

Iterators iterate over elements of T, not "bytes, codepoints, or characters". If T is char16_t, then it will iterate over char16_ts. If the string is UTF-16-encoded, then it is iterating over UTF-16 code units, not Unicode codepoints or bytes.
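As a small illustration (same assumptions as above):

```cpp
#include <cstdio>
#include <string>

int main() {
    std::u16string s = u"\U0001F600";  // one codepoint, two UTF-16 code units
    for (char16_t cu : s)              // iterates code units, not codepoints
        std::printf("%04X ", static_cast<unsigned>(cu));
    std::printf("\n");                 // prints: D83D DE00
}
```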

Nicol Bolas
    And *code unit* != *code point*. They are two different concepts. Just for later reference because I didn't know that... – eonil Nov 15 '13 at 21:13

All the string types do the same thing: they hold a sequence of elements, each of whose type is the character type for the string. length() and size() both return the number of elements, and iterators iterate over elements. Higher-level analysis, such as figuring out the number of characters, requires much more complex calculations.
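For example, a minimal sketch of one such calculation for a UTF-8 std::string (assuming valid input, and counting codepoints rather than user-perceived characters):

```cpp
#include <cstddef>
#include <string>

// Count codepoints in a valid UTF-8 string by skipping continuation bytes
// (those of the form 10xxxxxx). Grapheme clusters would need far more work.
std::size_t count_codepoints(const std::string& utf8)
{
    std::size_t n = 0;
    for (unsigned char c : utf8)
        if ((c & 0xC0) != 0x80)  // not a continuation byte => new codepoint
            ++n;
    return n;
}
```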

Pete Becker

Currently there is nothing built into the standard to distinguish between code units, codepoints, and individual bytes. However, there does seem to be some work in progress to address this; depending on what the standards committee decides, it may become part of TR2 or the next standard.
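In the meantime, one possible workaround is to convert with std::wstring_convert (available since C++11, though later deprecated in C++17) and count the elements of the resulting u32string; a sketch:

```cpp
#include <codecvt>
#include <cstddef>
#include <locale>
#include <string>

// Convert UTF-8 to UTF-32; each char32_t element is then one codepoint.
std::size_t codepoint_count(const std::string& utf8)
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
    std::u32string u32 = conv.from_bytes(utf8);  // throws std::range_error on bad input
    return u32.size();
}
```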

eestrada