I'm trying to write a very simple parser in C#.

I need a lexer -- something that lets me associate regular expressions with tokens, so it reads in regexes and gives me back symbols.

It seems like I ought to be able to use Regex to do the actual heavy lifting, but I can't see an easy way to do it. For one thing, Regex only seems to work on strings, not streams (why is that!?!?).
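The closest workaround I can see is to buffer the reader line by line, since `Regex.Match` only accepts strings. A rough sketch of what I mean (everything here is illustrative, not a proposed design):

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

class LineBuffering
{
    static void Main()
    {
        var pattern = new Regex(@"\d+");
        using (TextReader reader = new StringReader("abc 12\nxyz 345"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Each line becomes a string that Regex can work on
                Match m = pattern.Match(line);
                if (m.Success)
                    Console.WriteLine(m.Value);
            }
        }
    }
}
```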

Basically, I want an implementation of the following interface:

interface ILexer : IDisposable
{
    /// <summary>
    /// Return true if there are more tokens to read
    /// </summary>
    bool HasMoreTokens { get; }
    /// <summary>
    /// The actual contents that matched the token
    /// </summary>
    string TokenContents { get; }
    /// <summary>
    /// The particular token in "tokenDefinitions" that was matched (e.g. "STRING", "NUMBER", "OPEN PARENS", "CLOSE PARENS")
    /// </summary>
    object Token { get; }
    /// <summary>
    /// Move to the next token
    /// </summary>
    void Next();
}

interface ILexerFactory
{
    /// <summary>
    /// Create a Lexer for converting a stream of characters into tokens
    /// </summary>
    /// <param name="reader">TextReader that supplies the underlying stream</param>
    /// <param name="tokenDefinitions">A dictionary from regular expressions to their "token identifiers"</param>
    /// <returns>The lexer</returns>
    ILexer CreateLexer(TextReader reader, IDictionary<string, object> tokenDefinitions);
}
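To be concrete, here's roughly how I'd expect to drive it -- `factory` stands in for some hypothetical `ILexerFactory` implementation, and the token identifiers are just examples:

```csharp
// Hypothetical usage of the interfaces above; nothing here is a real API.
var defs = new Dictionary<string, object>
{
    { @"\(",     "OPEN PARENS" },
    { @"\)",     "CLOSE PARENS" },
    { @"\d+",    "NUMBER" },
    { @"[a-z]+", "STRING" },
};

using (ILexer lexer = factory.CreateLexer(new StringReader("(foo 42)"), defs))
{
    while (lexer.HasMoreTokens)
    {
        Console.WriteLine("{0}: {1}", lexer.Token, lexer.TokenContents);
        lexer.Next();
    }
}
```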

So, pluz send the codz...
No, seriously: I am about to start writing an implementation of the above interface, but I find it hard to believe there isn't already some simple way of doing this in .NET (2.0).

So, any suggestions for a simple way to do the above? (Also, I don't want any "code generators". Performance is not important for this thing and I don't want to introduce any complexity into the build process.)

Svante
Paul Hollingsworth
  • In my opinion there are a lot of good tools out there for generating lexer/parser code (like ANTLR or Irony), but if you have never written some basic parsing code from scratch it can be difficult to take advantage of these generators. I've recently written a simple open source math expression parser (https://github.com/gsscoder/exprengine). It evaluates by traversing the abstract syntax tree (AST) using the visitor pattern. The lexer/parser are written from scratch, no generators! I hope it can be helpful. – gsscoder Jan 05 '13 at 11:41

7 Answers


The original version I posted here as an answer had a problem in that it only worked while there was more than one "Regex" that matched the current expression. That is, as soon as only one Regex matched, it would return a token - whereas most people want the Regex to be "greedy". This was especially the case for things such as "quoted strings".

The only solution that sits on top of Regex is to read the input line-by-line (which means you cannot have tokens that span multiple lines). I can live with this - it is, after all, a poor man's lexer! Besides, it's usually useful to get line number information out of the Lexer in any case.

So, here's a new version that addresses these issues. Credit also goes to this.

public interface IMatcher
{
    /// <summary>
    /// Return the number of characters that this "regex" or equivalent
    /// matches.
    /// </summary>
    /// <param name="text">The text to be matched</param>
    /// <returns>The number of characters that matched</returns>
    int Match(string text);
}

sealed class RegexMatcher : IMatcher
{
    private readonly Regex regex;
    public RegexMatcher(string regex) => this.regex = new Regex(string.Format("^{0}", regex));

    public int Match(string text)
    {
        var m = regex.Match(text);
        return m.Success ? m.Length : 0;
    }
    public override string ToString() => regex.ToString();
}

public sealed class TokenDefinition
{
    public readonly IMatcher Matcher;
    public readonly object Token;

    public TokenDefinition(string regex, object token)
    {
        this.Matcher = new RegexMatcher(regex);
        this.Token = token;
    }
}

public sealed class Lexer : IDisposable
{
    private readonly TextReader reader;
    private readonly TokenDefinition[] tokenDefinitions;

    private string lineRemaining;

    public Lexer(TextReader reader, TokenDefinition[] tokenDefinitions)
    {
        this.reader = reader;
        this.tokenDefinitions = tokenDefinitions;
        nextLine();
    }

    private void nextLine()
    {
        do
        {
            lineRemaining = reader.ReadLine();
            ++LineNumber;
            Position = 0;
        } while (lineRemaining != null && lineRemaining.Length == 0);
    }

    public bool Next()
    {
        if (lineRemaining == null)
            return false;
        foreach (var def in tokenDefinitions)
        {
            var matched = def.Matcher.Match(lineRemaining);
            if (matched > 0)
            {
                Position += matched;
                Token = def.Token;
                TokenContents = lineRemaining.Substring(0, matched);
                lineRemaining = lineRemaining.Substring(matched);
                if (lineRemaining.Length == 0)
                    nextLine();

                return true;
            }
        }
        throw new Exception(string.Format("Unable to match against any tokens at line {0} position {1} \"{2}\"",
                                          LineNumber, Position, lineRemaining));
    }

    public string TokenContents { get; private set; }
    public object Token   { get; private set; }
    public int LineNumber { get; private set; }
    public int Position   { get; private set; }

    public void Dispose() => reader.Dispose();
}

Example program:

string sample = @"( one (two 456 -43.2 "" \"" quoted"" ))";

var defs = new TokenDefinition[]
{
    // Thanks to Steven Levithan for this great quoted-string regex
    new TokenDefinition(@"([""'])(?:\\\1|.)*?\1", "QUOTED-STRING"),
    // Thanks to http://www.regular-expressions.info/floatingpoint.html
    new TokenDefinition(@"[-+]?\d*\.\d+([eE][-+]?\d+)?", "FLOAT"),
    new TokenDefinition(@"[-+]?\d+", "INT"),
    new TokenDefinition(@"#t", "TRUE"),
    new TokenDefinition(@"#f", "FALSE"),
    new TokenDefinition(@"[*<>\?\-+/A-Za-z->!]+", "SYMBOL"),
    new TokenDefinition(@"\.", "DOT"),
    new TokenDefinition(@"\(", "LEFT"),
    new TokenDefinition(@"\)", "RIGHT"),
    new TokenDefinition(@"\s", "SPACE")
};

TextReader r = new StringReader(sample);
Lexer l = new Lexer(r, defs);
while (l.Next())
    Console.WriteLine("Token: {0} Contents: {1}", l.Token, l.TokenContents);

Output:

Token: LEFT Contents: (
Token: SPACE Contents:
Token: SYMBOL Contents: one
Token: SPACE Contents:
Token: LEFT Contents: (
Token: SYMBOL Contents: two
Token: SPACE Contents:
Token: INT Contents: 456
Token: SPACE Contents:
Token: FLOAT Contents: -43.2
Token: SPACE Contents:
Token: QUOTED-STRING Contents: " \" quoted"
Token: SPACE Contents:
Token: RIGHT Contents: )
Token: RIGHT Contents: )
AustinWBryan
Paul Hollingsworth
  • Four years later, but thank you very much, good sir! Yesterday I managed to get a Regex lexer to output exactly what I want, but even after optimization it was slow (roughly 15 seconds for 36k tokens). The older lexer worked (it ran all the Regexes in one go and then compared the match indices with the current index into the file, on the assumption, I guess, of "Regex is slow, so do it once, then never again"), but yours beat it by exactly 14.5 seconds! Apparently Regex isn't as expensive as most people would think. Thanks again! – Lennard Fonteijn Oct 09 '13 at 21:04
  • One little adjustment on this line: `this.regex = new Regex(string.Format("^{0}", regex));`, be sure to wrap `^{0}` into parentheses like this `^({0})`, otherwise matches like `\r|\n` won't match properly! – Lennard Fonteijn Oct 09 '13 at 21:23
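To illustrate the point in that last comment: in .NET regexes, `^` binds more tightly than `|`, so an unparenthesized alternation is only anchored on its first branch. A small demonstration:

```csharp
using System;
using System.Text.RegularExpressions;

class AnchorDemo
{
    static void Main()
    {
        // "^a|b" parses as "(^a)|b", so the "b" branch is not anchored
        // and can match in the middle of the string
        Console.WriteLine(new Regex("^a|b").Match("xxb").Success);   // True

        // Wrapping the alternation in a group anchors the whole thing
        Console.WriteLine(new Regex("^(a|b)").Match("xxb").Success); // False
    }
}
```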

It may be overkill, but have a look at Irony on CodePlex.

Irony is a development kit for implementing languages on .NET platform. It uses the flexibility and power of c# language and .NET Framework 3.5 to implement a completely new and streamlined technology of compiler construction. Unlike most existing yacc/lex-style solutions Irony does not employ any scanner or parser code generation from grammar specifications written in a specialized meta-language. In Irony the target language grammar is coded directly in c# using operator overloading to express grammar constructs. Irony's scanner and parser modules use the grammar encoded as c# class to control the parsing process. See the expression grammar sample for an example of grammar definition in c# class, and using it in a working parser.

Andy Dent
    Ah I see - sounds like C# version of Boost Spirit for C++. Thanks... although as you can see from my answer, definitely overkill for what I'm looking for. – Paul Hollingsworth Mar 23 '09 at 15:08
  • 1
    Interesting project indeed, at least you have IDE support in the same way as normal C# has (because grammar becomes just an ordinary C# code :) ). I guess it's a bit like LINQ helps you to stop writing real SQL. – IgorK Jan 22 '10 at 14:08

Unless you have a very unconventional grammar, I'd strongly recommend not to roll your own lexer/parser.

I usually find lexer/parsers for C# are really lacking. However, F# comes with fslex and fsyacc, which you can learn how to use in this tutorial. I've written several lexer/parsers in F# and used them in C#, and it's very easy to do.

I suppose it's not really a poor man's lexer/parser, seeing that you have to learn an entirely new language to get started, but it's a start.

Juliet

Changing my original answer.

Take a look at SharpTemplate, which has parsers for different syntax types, e.g.:

#foreach ($product in $Products)
   <tr><td>$product.Name</td>
   #if ($product.Stock > 0)
      <td>In stock</td>
   #else
     <td>Backordered</td>
   #end
  </tr>
#end

It uses regexes for each type of token:

public class Velocity : SharpTemplateConfig
{
    public Velocity()
    {
        AddToken(TemplateTokenType.ForEach, @"#(foreach|{foreach})\s+\(\s*(?<iterator>[a-z_][a-z0-9_]*)\s+in\s+(?<expr>.*?)\s*\)", true);
        AddToken(TemplateTokenType.EndBlock, @"#(end|{end})", true);
        AddToken(TemplateTokenType.If, @"#(if|{if})\s+\((?<expr>.*?)\s*\)", true);
        AddToken(TemplateTokenType.ElseIf, @"#(elseif|{elseif})\s+\((?<expr>.*?)\s*\)", true);
        AddToken(TemplateTokenType.Else, @"#(else|{else})", true);
        AddToken(TemplateTokenType.Expression, @"\${(?<expr>.*?)}", false);
        AddToken(TemplateTokenType.Expression, @"\$(?<expr>[a-zA-Z_][a-zA-Z0-9_\.@]*?)(?![a-zA-Z0-9_\.@])", false);
    }
}

Which is used like this

foreach (Match match in regex.Matches(inputString))
{
    ...

    switch (tokenMatch.TokenType)
    {
        case TemplateTokenType.Expression:
            {
                currentNode.Add(new ExpressionNode(tokenMatch));
            }
            break;

        case TemplateTokenType.ForEach:
            {
                nodeStack.Push(currentNode);

                currentNode = currentNode.Add(new ForEachNode(tokenMatch));
            }
            break;
        ....
    }

    ....
}

It pushes and pops from a Stack to keep state.
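For reference, the `(?<expr>…)` pieces in those patterns are .NET named capture groups; pulling the captured text out looks something like this (the pattern below is a simplified version of the `${...}` expression pattern above):

```csharp
using System;
using System.Text.RegularExpressions;

class NamedGroupDemo
{
    static void Main()
    {
        // Simplified form of the ${...} expression token pattern
        var regex = new Regex(@"\$\{(?<expr>.*?)\}");
        Match m = regex.Match("Price: ${product.Price}");

        // The named group gives direct access to the inner expression
        Console.WriteLine(m.Groups["expr"].Value); // product.Price
    }
}
```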

Chris S

Malcolm Crowe has a great LEX/YACC implementation for C# here. It works by creating regular expressions for the LEX...

Direct download

Kieron

If you take a look at the ExpressionConverter in my WPF Converters library, it has basic lexing and parsing of C# expressions. No regex involved, from memory.

Kent Boogaart

It is possible to use Flex and Bison for C#.

A researcher at the University of Ireland has developed a partial implementation that can be found at the following link: Flex/Bison for C#

It could definitely be considered a 'poor man's lexer', as he still seems to have some issues with his implementation, such as no preprocessor and problems with the 'dangling else' case.

erik
  • The page has not been updated since 2004, and the lexer itself is derived from the C# 0.28 spec. I don't think this "poor man's lexer" should be used in the real world. – Juliet Mar 23 '09 at 14:09
  • 1
    That is a good point, however I figured that since he was trying to do something simple, this quick and dirty (and obviously unfinished) lexer would be an OK starting point. – erik Mar 24 '09 at 15:53