
I am wondering if the newest version of flex supports Unicode?

If so, how can I use patterns to match Chinese characters?

Related: Use regular expression to match ANY Chinese character in utf-8 encoding

xiaohan2012

3 Answers


At the moment, flex only generates 8-bit scanners, which in practice limits you to UTF-8. So if you have a pattern:

肖晗   { printf ("xiaohan\n"); }

it will work as expected, as the sequence of bytes in the pattern and in the input will be the same. What's more difficult is character classes. If you want to match either the character 肖 or 晗, you can't write:

[肖晗]   { printf ("xiaohan/2\n"); }

because this will match each of the six bytes 0xe8, 0x82, 0x96, 0xe6, 0x99 and 0x97, which in practice means that if you supply 肖晗 as the input, the pattern will match six times. So in this simple case, you have to rewrite the pattern to (肖|晗).
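To make this concrete, here is a minimal, self-contained specification using that rewrite (a sketch, assuming both the .l file and the input are UTF-8 encoded):

%option noyywrap
%%
(肖|晗)    { printf ("xiaohan/2\n"); }
.|\n       { /* pass over any other byte */ }
%%
int main (void)
{
    return yylex ();  /* scans stdin */
}

Given the input 肖晗, this prints xiaohan/2 twice, once for each three-byte UTF-8 sequence.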

For character ranges, Hans Aberg has written a tool in Haskell that transforms them into 8-bit patterns:

Unicode> urToRegU8 0 0xFFFF
[\0-\x7F]|[\xC2-\xDF][\x80-\xBF]|(\xE0[\xA0-\xBF]|[\xE1-\xEF][\x80-\xBF])[\x80-\xBF]
Unicode> urToRegU32 0x00010000 0x001FFFFF
\0[\x01-\x1F][\0-\xFF][\0-\xFF]
Unicode> urToRegU32L 0x00010000 0x001FFFFF
[\x01-\x1F][\0-\xFF][\0-\xFF]\0

This isn't pretty, but it should work.
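As a sketch of how such output could be used, the first generated expression can become a named definition in a flex specification (the name BMP is illustrative; see the comments below for caveats about surrogates and exact bounds):

BMP     [\0-\x7F]|[\xC2-\xDF][\x80-\xBF]|(\xE0[\xA0-\xBF]|[\xE1-\xEF][\x80-\xBF])[\x80-\xBF]
%%
{BMP}   { printf ("one code point in U+0000-U+FFFF: %.*s\n", yyleng, yytext); }
%%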

Tim Landscheidt
  • I copied my reply from the mailing list to the answer. – Tim Landscheidt Mar 08 '12 at 18:28
  • Thanks. This seems to inspire me a lot! – xiaohan2012 Mar 11 '12 at 12:38
  • Can you give me some help? I tried to compile the program source code you mentioned, but the Glasgow Haskell compiler output a parse error. Have you compiled the source code yourself successfully? If so, would you please give me a hint on that? – xiaohan2012 Mar 12 '12 at 06:48
  • Sorry for the abruptness; I was using the wrong tool. I should use `hugs` instead of `ghc`. – xiaohan2012 Mar 12 '12 at 06:55
  • So this works, but we should add: if you are using %option full or the parameter -Cf, then you also need %option 8bit or the parameter -8 / --8bit (this has caused me a lot of headaches...) – Algoman Jul 07 '15 at 10:17
  • The ranges in the example above don't look right to me if we're talking about converting the range of code points (not literal values) to an expression for flex. Code point 0xffff in utf-8 encoding should be \xef\xbf\xbf. The range from code point 0 to 0xffff would then be "[\x00-\xee]..|\xef(\xbf[\x00-\xbf]|[\x00-\xbe].)". And if 0x0000 and 0xffff were understood to be the utf8 encoding of some other code points, then the expression would be "...". Does this make sense, or do I seem utterly confused =) – Todd Dec 27 '18 at 10:48

Flex does not support Unicode. However, Flex supports "8 bit clean" binary input. Therefore you can write lexical patterns which match UTF-8. You can use these patterns in specific lexical areas of the input language, for instance identifiers, comments or string literals.

This will work well for typical programming languages, where you may be able to assert to the users of your implementation that the source language is written in ASCII/UTF-8 (and no other encoding is supported, period).

This approach won't work if your scanner must process text that can be in any encoding. It also won't work (very well) if you need to express lexical rules specifically for Unicode elements, i.e. if you need Unicode characters and Unicode regexes in the scanner itself.

The idea is that you can recognize a pattern which includes UTF-8 bytes using a lex rule (and then perhaps take the yytext and convert it out of UTF-8, or at least validate it).

For a working example, see the source code of the TXR language, in particular this file: http://www.kylheku.com/cgit/txr/tree/parser.l

Scroll down to this section:

ASC     [\x00-\x7f]
ASCN    [\x00-\t\v-\x7f]
U       [\x80-\xbf]
U2      [\xc2-\xdf]
U3      [\xe0-\xef]
U4      [\xf0-\xf4]

UANY    {ASC}|{U2}{U}|{U3}{U}{U}|{U4}{U}{U}{U}
UANYN   {ASCN}|{U2}{U}|{U3}{U}{U}|{U4}{U}{U}{U} 
UONLY   {U2}{U}|{U3}{U}{U}|{U4}{U}{U}{U}

As you can see, we can define patterns to match ASCII characters as well as UTF-8 start and continuation bytes. UTF-8 is a lexical notation, and this is a lexical analyzer generator, so ... no problem!

Some explanations: UANY matches any character, single-byte ASCII or multi-byte UTF-8. UANYN is like UANY, but does not match the newline. This is useful for tokens that do not break across lines, like, say, a comment from # to the end of the line containing international text. UONLY matches only a UTF-8 extended character, not an ASCII one. This is useful for writing a lex rule which needs to exclude certain specific ASCII characters (not just the newline) while allowing all extended characters.
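For example, rules along these lines (a sketch, not copied verbatim from parser.l) show how such definitions are typically used:

#{UANYN}*     { /* comment from # to end of line; may contain international text */ }
{UONLY}+      { printf ("run of non-ASCII characters: %s\n", yytext); }
{UANY}        { printf ("any one character: %s\n", yytext); }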

DISCLAIMER: Note that the scanner's rules use a function called utf8_dup_from to convert the yytext to wide character strings containing Unicode code points. That function is robust; it detects problems like overlong sequences and invalid bytes and handles them properly. I.e. this program is not relying on these lex rules to do the validation and conversion, just to do the basic lexical recognition. These rules will recognize an overlong form (like an ASCII code encoded using several bytes) as valid syntax, but the conversion function will treat it properly. In any case, I don't expect UTF-8-related security issues in the program source code, since you have to trust source code in order to run it anyway (but data handled by the program may not be trusted!). If you're writing a scanner for untrusted UTF-8 data, take care!
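For readers who want to see what that validation involves, here is a hypothetical minimal C sketch of a validating decoder (this is not TXR's utf8_dup_from, just an illustration of the checks described above):

#include <stddef.h>

/* Decode one UTF-8 sequence starting at s, rejecting stray or bad
   continuation bytes, overlong forms, UTF-16 surrogate halves and
   code points above U+10FFFF.  Returns the number of bytes consumed
   (0 on invalid input) and stores the code point in *cp. */
static size_t utf8_decode_one (const unsigned char *s, long *cp)
{
    long c;
    size_t len, i;

    if (s[0] < 0x80)      { c = s[0];        len = 1; }
    else if (s[0] < 0xC2) return 0;  /* continuation byte, or overlong 0xC0/0xC1 lead */
    else if (s[0] < 0xE0) { c = s[0] & 0x1F; len = 2; }
    else if (s[0] < 0xF0) { c = s[0] & 0x0F; len = 3; }
    else if (s[0] < 0xF5) { c = s[0] & 0x07; len = 4; }
    else                  return 0;  /* 0xF5-0xFF can only encode > U+10FFFF */

    for (i = 1; i < len; i++) {
        if ((s[i] & 0xC0) != 0x80)
            return 0;                /* expected a 10xxxxxx continuation byte */
        c = (c << 6) | (s[i] & 0x3F);
    }

    if ((len == 3 && c < 0x800) || (len == 4 && c < 0x10000))
        return 0;                    /* overlong encoding */
    if (c >= 0xD800 && c <= 0xDFFF)
        return 0;                    /* UTF-16 surrogate half */
    if (c > 0x10FFFF)
        return 0;                    /* above Unicode range, e.g. F4 90 80 80 */

    *cp = c;
    return len;
}

A scanner action could walk yytext with such a function and report a diagnostic wherever it returns 0.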

Kaz
  • Just wondering, shouldn't the definition of U4 be `U4 [\xf0-\xf7]` to actually accommodate all possibilities from 11110000 to 11110111? – exa Oct 09 '16 at 11:31
  • @exa Good attention to detail! The full range of the byte would give us code points up to `U+1FFFFF`. The `F4` restricts to `U+10FFFF`. – Kaz Oct 09 '16 at 14:18
  • I wonder if the proposed approach is safe. These TXR patterns include the invalid U+D800-U+DFFF range (UTF-16 surrogate halves are invalid Unicode) and `{U4}{U}{U}{U}` exceeds the Unicode upper bound U+10FFFF; contrary to what you stated, the pattern for the last code point should be `\xf4[\x80-\x8f][\x80-\xbf][\x80-\xbf]`, not `\xf4[\x80-\xbf][\x80-\xbf][\x80-\xbf]`. – Dr. Alex RE Mar 24 '17 at 18:02
  • In my opinion, this is a better answer than the accepted answer. – Rahat Zaman Sep 26 '18 at 02:01
  • @RahatZaman Thanks. This is battle-tested from production code. – Kaz Sep 27 '18 at 17:43
  • @Dr.AlexRE That is right; basically, the rules do not validate UTF-8. They extract the units, and then the `yytext[]` null-terminated character string is to be processed through a proper UTF-8 decoder, which deals with all those cases in detail. – Kaz Sep 27 '18 at 17:48
  • It is misleading to consider Unicode the same as UTF-8, which is a common mistake. The proposed approach seems only to work for UTF-8 and only if there is no UTF BOM in the file. Unicode requires support for UTF BOM, UTF-16, and UTF-32 input and perhaps UCS-2 and UCS-4, though these are superseded by UTF-16 and UTF-32. – Dr. Alex RE Sep 10 '19 at 18:34
  • @Dr.AlexRE A BOM is nonsensical in UTF-8, which has a single well-defined byte order. > *Unicode requires support for ...* No, it doesn't. You and your customers define the requirements for your programs. http://utf8everywhere.org/ – Kaz Sep 10 '19 at 18:47
  • This solution is compatible with other encodings; what is needed is a suitable function to convert to UTF-8 first. In the TXR Lisp language, when you evaluate, for instance, `(read "(1 2 3)")`, the character string `"1 2 3"` is stored as 32-bit code points. But, nevertheless, the `read` function scans it as UTF-8, using the approach given in this answer. What's going on is that an input stream is created over the string which outputs UTF-8 bytes. The lexer reads from that. – Kaz Sep 10 '19 at 18:52
  • @Kaz UTF BOMs are typically used with UTF-16 and UTF-32; a BOM is just not required for UTF-8, which I did not claim. A UTF-8 BOM is permitted with UTF-8 and therefore should be supported for compliance reasons. I've dealt with numerous files that include a UTF-8 BOM. In fact, a simple lexer rule to match a UTF-8 BOM will take care of that (see the sketch after these comments). – Dr. Alex RE Sep 10 '19 at 19:07
  • @Dr.AlexRE The approach in this answer will recognize the EF BB BF sequence representing the BOM in UTF-8. The scanner can return this as a special token so the parser can deal with it, or it can hide it entirely from the parser (e.g. set a flag indicating it's been seen and throw it away, or whatever). The requirement to deal with a BOM in UTF-8 input can be handled somehow. – Kaz Sep 10 '19 at 19:15
  • @Kaz You say "BOM is nonsensical in UTF-8, which has a single well-defined byte order. > Unicode requires support for ... No, it doesn't." Wow. That is the dumbest thing I've heard in a long time. You clearly do not understand the standards. A UTF-8 BOM is permitted with UTF-8 files; otherwise, what would be the point of standardizing a UTF-8 BOM? How utterly stupid to think you can just ignore it as if it doesn't matter. That's how software gets so crappy these days: stupidity. – Dr. Alex RE Jul 21 '23 at 03:00
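As a footnote to the BOM exchange above: for an 8-bit flex scanner reading UTF-8, a rule recognizing the UTF-8 BOM can be a minimal sketch like this (the byte values EF BB BF are taken from the comments above):

\xef\xbb\xbf   { /* UTF-8 BOM: record that it was seen, or simply discard it */ }

Whether to hide the BOM from the parser or surface it as a token is a design choice, as Kaz notes.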

I am wondering if the newest version of flex supports Unicode?

If so, how can I use patterns to match Chinese characters?

To match patterns with Chinese characters and other Unicode code points with a Flex-compatible lexical analyzer, you could use the RE/flex lexical analyzer for C++.

RE/flex safely supports the full Unicode standard and accepts UTF-8, UTF-16, and UTF-32 input files without requiring UTF-8 hacks (which can't support UTF-16/32 input or handle a UTF BOM).

Also, UTF-8 hacks with Flex don't allow you to write Unicode regular expressions such as [肖晗], which are fully supported in RE/flex.

It works seamlessly with Bison to build lexers and parsers.

In fact, with RE/flex we can write any Unicode patterns as UTF-8-based regular expressions in lexer .l specifications, such as:

%option flex unicode
%%
[肖晗]   { printf ("xiaohan/2\n"); }
%%

This generates a lexer that scans UTF-8, UTF-16, and UTF-32 files automatically. As per UTF standardization, for UTF-16/32 input a UTF BOM is expected in the input, while a UTF-8 BOM is optional.
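For completeness, the typical build steps are sketched below (assuming the reflex command-line tool and the libreflex library from the RE/flex distribution are installed; paths and flags may differ on your system):

reflex lexer.l
c++ -o scanner lex.yy.cpp -lreflex

The %option flex line in the specification above tells reflex to accept Flex-style syntax; the generated scanner is C++.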

We can use the global %option unicode to enable Unicode and %option flex to enable Flex-compatible specifications. A local modifier (?u:) can be used to restrict Unicode matching to a single pattern (so everything else is still ASCII/8-bit, as in Flex):

%option flex
%%
(?u:[肖晗])   { printf ("xiaohan/2\n"); }
(?u:\p{Han})  { printf ("Han character %s\n", yytext); }
.             { printf ("8-bit character %d\n", yytext[0]); }
%%

Option flex enables Flex compatibility, so you can use yytext, yyleng, ECHO, and so on. Without the flex option, RE/flex expects Lexer method calls: text() (or str() and wstr() for std::string and std::wstring), size() (or wsize() for the wide-character length), and echo(). The RE/flex method calls are cleaner IMHO, and include wide-character operations.
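For comparison, here is a sketch of the second example rewritten without %option flex, using the Lexer methods named above (the method names are those given in this answer; treat the details as approximate rather than a verified RE/flex specification):

%{
#include <iostream>
%}
%option unicode
%%
[肖晗]        { echo (); }
\p{Han}       { std::cout << "Han character " << str() << std::endl; }
.             { std::cout << "other character " << str() << std::endl; }
%%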

Dr. Alex RE