
Just browsing the digitalmars.D.learn forum and D-related questions on StackOverflow, it seems to me that a major source of mistakes for a beginner D programmer (me included) is the difference in usage and abilities of char, wchar, dchar, and the associated string types. This leads to a number of recurring problems.

I know it must be for backwards compatibility reasons and familiarity for developers coming from C++ or C, but I think a fairly compelling argument can be made that this possible gain is offset by the problems experienced by those same developers when they try something non-trivial with a char or string and expect it to work as it would in C/C++, only to have it fail in difficult-to-debug ways.

To stave off these problems, I've seen experienced members of the D development community tell inexperienced coders, time and time again, to use dchar. That raises the question: why isn't char a 32-bit Unicode character by default, with 8-bit ASCII characters relegated to achar or something similar, to be touched only if necessary?

Meta

3 Answers


Personally, I wish that char didn't exist and that instead of char, wchar, and dchar, we had something more like utf8, utf16, and utf32. Then everyone would be immediately forced to realize that char was not what should be used for individual characters, but that's not the way it went.

I'd say that it's almost certainly the case that char was simply taken from C/C++ and then the others were added to improve Unicode support. After all, there's nothing fundamentally wrong with char. It's just that so many programmers have the mistaken understanding that char is always a character (which isn't necessarily true even in C/C++).

But Walter Bright has a very good understanding of Unicode and seems to think that everyone else should as well, so he tends to make decisions with regards to Unicode which work extremely well if you understand Unicode but don't work quite as well if you don't (and most programmers don't). D pretty much forces you to come to at least a basic understanding of Unicode, which isn't all bad, but it does trip some people up.

But the reality of the matter is that while it makes good sense to use dchar for individual characters, it generally doesn't make sense to use it for strings. Sometimes, that's what you need, but UTF-32 requires way more space than UTF-8 does. That could affect performance and definitely affects the memory footprint of your programs. And a lot of string processing doesn't need random access at all. So, having UTF-8 strings as the default makes far more sense than having UTF-32 strings be the default.
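
To make that trade-off concrete, here is a minimal sketch (my illustration, not part of the original answer), assuming a Phobos in which strings are iterated as ranges of dchar as described above:

```d
import std.range : walkLength;
import std.stdio : writeln;

void main()
{
    string  s = "héllo";    // immutable(char)[]:  UTF-8 code units
    dstring d = "héllo"d;   // immutable(dchar)[]: UTF-32 code units

    writeln(s.length);      // 6: 'é' occupies two UTF-8 code units
    writeln(d.length);      // 5: one dchar per code point
    writeln(s.walkLength);  // 5: iterating the string decodes code points

    writeln(char.sizeof);   // 1 byte per UTF-8 code unit
    writeln(dchar.sizeof);  // 4 bytes per UTF-32 code unit
}
```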

The way strings are managed in D generally works extremely well. It's just that the name char has an incorrect connotation for many people, and the language unfortunately chooses for character literals to default to char rather than dchar in many cases.
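
A quick way to check the literal-typing point, sketched here as an aside rather than taken from the answer itself:

```d
void main()
{
    auto c = 'a';                          // an ASCII character literal...
    static assert(is(typeof(c) == char));  // ...infers as char, not dchar
    dchar d = 'a';                         // though it converts to dchar implicitly
}
```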

> I think a fairly compelling argument can be made that this possible gain is offset by the problems experienced by those same developers when they try something non-trivial with a char or string and expect it to work as it would in C/C++, only to have it fail in difficult-to-debug ways.

The reality of the matter is that strings in C/C++ work the same way that they do in D, only they don't protect you from being ignorant or stupid, unlike in D. char in C/C++ is always 8 bits and is typically treated as a UTF-8 code unit by the OS (at least in *nix land - Windows does weird things with the encoding for char and generally requires you to use wchar_t for Unicode). Certainly, any Unicode strings that you have in C/C++ are in UTF-8 unless you explicitly use a string type which uses a different encoding. std::string and C strings both operate on code units rather than code points. But the average C/C++ programmer treats them as if each of their elements were a whole character, which is just plain wrong unless you're only using ASCII, and in this day and age, that's often a very bad assumption.

D takes the route of actually building proper Unicode support into the language and into its standard library. This forces you to come to at least a basic understanding of Unicode and often makes it harder to screw things up, while giving those who do understand it extremely powerful tools for managing Unicode strings not only correctly but efficiently. C/C++ just sidesteps the issue and lets programmers step on Unicode land mines.
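
For example, the language itself will decode UTF-8 on request; a small sketch of my own, illustrating the point rather than quoting the answer:

```d
import std.stdio : writeln;

void main()
{
    string s = "così";              // 'ì' occupies two UTF-8 code units

    size_t units;
    foreach (char c; s)  ++units;   // walks raw code units (bytes): 5

    size_t points;
    foreach (dchar c; s) ++points;  // the compiler decodes code points: 4

    writeln(units, " code units, ", points, " code points");
}
```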

Jonathan M Davis
  • actually you can screw up in D but it just won't compile until you hack it into submission – ratchet freak Nov 13 '12 at 22:07
  • @ratchet freak You can always screw yourself over if you work at it. :) – Jonathan M Davis Nov 14 '12 at 01:28
  • I agree - utf8, utf16 and utf32 are better names for char, wchar and dchar. – DejanLekic Nov 14 '12 at 10:38
  • We could easily introduce aliases in object.d for the utf* types. – Trass3r Nov 14 '12 at 12:09
  • @Trass3r Without actually replacing `*char`, that wouldn't really fix much, since the main advantage is in forcing people to not just use `char` but rather to understand what they're dealing with. If such aliases existed, those people would just keep on using `char`. If anything, it would just increase confusion, because then people would be asking what the difference was. And at this point, it's far too late in the game to actually change the name of the type in the language. So, while `utf*` is a nice idea, I think that it's far too late for it to really work. – Jonathan M Davis Nov 14 '12 at 17:57
  • Yep, and make char an alias to byte and uchar to byte. – DejanLekic Nov 14 '12 at 20:58
  • @DejanLekic I would think that keeping `char` around at all would defeat the purpose. People would still try and use it for full characters, which is the core of the problem. – Jonathan M Davis Nov 14 '12 at 21:01
  • Well you will never get them to remove char. You can only try a slow transition. – Trass3r Nov 17 '12 at 13:18
  • @Trass3r If `char` isn't outright removed, then I see no point. It would just introduce extra aliases and confusion. Granted, the situation isn't perfect, but it still works and probably results in far more Unicode-correct code than most languages. It's just that given the chance to go back, I'd want it to be done differently. – Jonathan M Davis Nov 17 '12 at 17:26

I understood the question as "Why isn't dchar used in strings by default?"

dchar is a UTF-32 code unit. You rarely want to deal with UTF-32 code units because they waste too much space, especially if you deal only with ASCII strings.

Using UTF-8 code units (the corresponding D type is char) is much more space-efficient.

A D string is an immutable(char)[], i.e. an array of UTF-8 code units.

Yes, dealing with UTF-32 code units can arguably speed up your application if you constantly need random access into strings. But if you know you are going to do that with some particular text, use the dstring type in that case. That said, you should now understand why D treats strings as ranges of dchar.
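
A minimal sketch of that trade-off (my illustration, not the answerer's), assuming the usual string and dstring literals:

```d
import std.stdio : writeln;

void main()
{
    string  s = "año";   // UTF-8: 'ñ' is two code units, so s.length == 4
    dstring d = "año"d;  // UTF-32: one dchar per code point, so d.length == 3

    // Indexing a string yields a code unit, which may be only part of a character.
    writeln(cast(ubyte) s[1]);  // first byte of the two-byte encoding of 'ñ'

    // Indexing a dstring yields a whole code point: cheap random access.
    writeln(d[1]);              // 'ñ'
}
```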

DejanLekic

Because of combining characters, even dchar can't truly hold all Unicode characters (in any way that humans want to think of it) and can't be indexed directly (see the end of this post for examples).
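
The examples referenced in the original post are not reproduced above, but a small sketch of the combining-character problem, assuming std.uni.byGrapheme from Phobos, might look like this:

```d
import std.range : walkLength;
import std.stdio : writeln;
import std.uni : byGrapheme;

void main()
{
    // "e" followed by U+0301 (COMBINING ACUTE ACCENT) renders as a single "é".
    string s = "e\u0301";

    writeln(s.length);                 // 3 UTF-8 code units
    writeln(s.walkLength);             // 2 code points, even as dchars
    writeln(s.byGrapheme.walkLength);  // 1 grapheme: one "character" to a human
}
```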

BCS