14

Many websites state that packrat parsers can parse input in linear time, so at first glance they may be faster than an LALR parser constructed by tools such as yacc or bison.

I wanted to know whether the performance of packrat parsers is better or worse than that of LALR parsers when tested on common input (such as programming-language source files), not just on theoretical inputs.

Can anyone explain the main differences between the two approaches?
Thanks!

raisyn
  • 4,514
  • 9
  • 36
  • 55

5 Answers

12

I'm not an expert at packrat parsing, but you can learn more at Parsing expression grammar on Wikipedia.

I haven't dug into it so I'll assume the linear-time characterization of packrat parsing is correct.

L(AL)R parsers are linear time parsers too. So in theory, neither packrat nor L(AL)R parsers are "faster".

What matters, in practice, of course, is implementation. L(AL)R state transitions can be executed in very few machine instructions ("look token code up in vector, get next state and action") so they can be extremely fast in practice. By "compiling" L(AL)R parsing to machine code, you can end up with lightning fast parsers, as shown by this 1986 Tom Pennello paper on Very Fast LR parsing. (Machines are now 20 years faster than when he wrote the paper!).
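
To make that concrete, here is a minimal sketch in C of the inner loop such a table-driven LR parser executes; all table names and sizes (`action`, `goto_table`, `rule_lhs`, `rule_len`, `N_STATES`, and so on) are hypothetical stand-ins for what a generator like yacc or bison would emit:

```c
/* Minimal sketch of a table-driven LR parser's inner loop.
 * All table names and sizes are hypothetical stand-ins for
 * what yacc/bison would generate. */
enum { N_STATES = 512, N_TOKENS = 128, N_NONTERMS = 64, N_RULES = 256 };
enum { SHIFT, REDUCE, ACCEPT, ERROR };

typedef struct { int kind, arg; } Action;   /* arg: state or rule number */

extern Action action[N_STATES][N_TOKENS];   /* indexed by (state, token) */
extern int    goto_table[N_STATES][N_NONTERMS];
extern int    rule_lhs[N_RULES], rule_len[N_RULES];
extern int    next_token(void);             /* the lexer */

int parse(void)
{
    int stack[1024], sp = 0;
    stack[sp] = 0;                           /* start state */
    int tok = next_token();

    for (;;) {
        Action a = action[stack[sp]][tok];   /* one table lookup per step */
        switch (a.kind) {
        case SHIFT:
            stack[++sp] = a.arg;             /* push the next state */
            tok = next_token();
            break;
        case REDUCE:
            sp -= rule_len[a.arg];           /* pop the rule's RHS */
            stack[sp + 1] = goto_table[stack[sp]][rule_lhs[a.arg]];
            sp++;                            /* push the goto state */
            break;
        case ACCEPT:
            return 0;                        /* parsed successfully */
        default:
            return 1;                        /* syntax error */
        }
    }
}
```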

If packrat parsers are storing/caching results as they go, they may be linear time, but I'd guess the constant overhead would be pretty high, and then L(AL)R parsers in practice would be much faster. The YACC and Bison implementations, from what I hear, are pretty good.
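
To see where that constant would come from, here is a minimal sketch of the memoisation step a packrat parser performs on every rule invocation; `MemoEntry`, `apply_rule`, and the flat table layout are illustrative assumptions, not any particular implementation:

```c
/* Sketch of packrat memoisation: one cache slot per (rule, position).
 * MemoEntry, apply_rule, and the flat table layout are illustrative
 * assumptions, not any particular implementation. */
typedef struct {
    int known;          /* has this (rule, pos) been attempted before? */
    int end;            /* input position after the match, -1 on failure */
} MemoEntry;

extern MemoEntry memo[];        /* n_rules * input_len entries */
extern int input_len;
extern int apply_rule(int rule, int pos);   /* the actual PEG logic */

int parse_rule(int rule, int pos)
{
    MemoEntry *e = &memo[rule * input_len + pos];
    if (e->known)                    /* the cache hit is what makes   */
        return e->end;               /* the parser linear time...     */
    int end = apply_rule(rule, pos); /* ...but every miss pays for the
                                        store, and the table itself is
                                        O(rules * input) memory */
    e->known = 1;
    e->end = end;
    return end;
}
```

Even a cache hit costs a multiply, an add, and a couple of loads per rule invocation, and the table itself is O(rules × input length) memory, versus the single indexed lookup per token in the LR loop above.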

If you care about the answer, read the basic technical papers closely; if you really care, then implement one of each and check out the overhead constants. My money is strongly on L(AL)R.

An observation: most language front-ends don't spend most of their time "parsing"; rather, they spend a lot of time in lexical analysis. Optimize that (as your bio says you are doing), and the parser speed won't matter much.

(I used to build LALR parser generators and corresponding parsers. I don't do that anymore; instead I use GLR parsers, which are linear time in practice but handle arbitrary context-free grammars. I give up some performance, but I can [and do, see bio] build dozens of parsers for many languages without a lot of trouble.)

Ira Baxter
  • 93,541
  • 22
  • 172
  • 341
  • Is it possible to read the document (the 1986 Tom Pennello paper on Very Fast LR parsing) for free? – raisyn Sep 29 '10 at 15:39
  • Sure. Visit your local university's computer science library :-} I think the ACM will charge you only a modest cost; IMHO it's well worth it to make sure that the ACM continues to make stuff like this available. – Ira Baxter Sep 29 '10 at 15:43
  • @IraBaxter, are L(AL)R parsers scannerless the way packrat parsers are? – Mike Samuel Apr 23 '12 at 21:46
  • LALR parsers can be scannerless: just define the "tokens" using grammar rules. If you define tokens using regular expressions, as is typical of LEX/YACC, then you pretty much don't lose anything except performance. (You can define lexers that extend regular expressions in ways that regular expressions cannot, e.g., with character lookahead, and then you can't do this.) – Ira Baxter Apr 24 '12 at 02:09
  • ... and a remark: a scannerless parser will generally be slower than a fast lexer/parser combination, because the generality of the parsing engine is used to process individual characters, not tokens. Scanner-full(?) parsers can have extremely fast character management. If you want to see what a really well-engineered one can really do, check out LRStar sourceforge.net/projects/lrstar/ – Ira Baxter Jul 07 '12 at 14:55
  • @Ira, that sourceforge page is pretty empty, as is the linked website. – Gunther Jul 08 '12 at 12:36
  • @Gunther: You are right, and I am surprised. I happen to know the author personally, and he made all those tools public. I'll ask him what happened. They are very nice tools. Regardless of this particular set of tools, my remarks about performance in general still stand. – Ira Baxter Jul 08 '12 at 14:22
  • @Gunther: I heard back from the author. He did put the tools up at sourceforge. The result for him (paraphrasing): lots of sweat to build it, many downloads, no feedback, and "no donations"; in disgust he took it back down. That's a shame; I've seen his stuff [secondhand] and it was pretty nice. (I never downloaded it :) So, I guess LRStar is gone. – Ira Baxter Jul 09 '12 at 00:04
  • 1
    @Gunther: You can see I managed to get Paul Mann to restore the LRStar site. – Ira Baxter Aug 04 '12 at 16:17
  • @Ira, thanks for taking care of this. I see that the product was announced to be available, but the websites still look the same. – Gunther Aug 05 '12 at 11:50
  • 1
    June 2014: It appears the LRStar website has gone dark again. Everybody insisted he go open source and he'd make lots of money. I suspect Paul got tired of giving away his labor for free, with no rewards coming back. Open Source isn't all butterflies. – Ira Baxter Jun 11 '14 at 16:48
9

I am the author of LRSTAR, an open-source LR(k) parser generator. Because people are showing interest in it, I have put the product back online here: LRSTAR.

I have studied the speed of LALR parsers and DFA lexers for many years. Tom Pennello's paper is very interesting, but is more of an academic exercise than a real-world solution for compilers. However, if all you want is a pattern recognizer, then it may be the perfect solution for you.

The problem is that real-world compilers usually need to do more than pattern recognition: symbol-table look-up for incoming symbols, error recovery, providing an "expecting" list (statement-completion information), and building an abstract-syntax tree while parsing.

In 1989, I compared the parsing speed of LRSTAR parsers to that of "yacc" and found that they ran at twice the speed of "yacc" parsers. LRSTAR parsers use the ideas published in the paper "Optimization of Parser Tables for Portable Compilers".

For lexer (lexical-analysis) speed, I discovered in 2009 that "re2c" was generating the fastest lexers, about twice the speed of those generated by "flex". I was rewriting the LRSTAR lexer-generator section at that time and found a way to make lexers that are almost as fast as "re2c" lexers and much smaller. However, I prefer the table-driven lexers that LRSTAR generates, because they are almost as fast and the code compiles much more quickly.
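
For readers who haven't seen one, the inner loop of such a table-driven lexer boils down to something like the sketch below; `next_state` and `accept_tok` are hypothetical stand-ins for the tables a generator like flex or LRSTAR would emit:

```c
/* Sketch of a table-driven DFA lexer's inner loop.  next_state and
 * accept_tok stand in for generator-emitted tables; negative entries
 * in next_state mean "no transition".  Names are illustrative. */
extern const short next_state[][256];   /* indexed by (state, byte) */
extern const int   accept_tok[];        /* token code, 0 = not accepting */

int scan(const unsigned char **pp)
{
    const unsigned char *p = *pp;
    int state = 0, last_tok = 0;
    const unsigned char *last_end = p;

    while (next_state[state][*p] >= 0) {     /* one table load per byte */
        state = next_state[state][*p++];
        if (accept_tok[state]) {             /* remember longest match */
            last_tok = accept_tok[state];
            last_end = p;
        }
    }
    *pp = last_end;                          /* rewind to longest match */
    return last_tok;                         /* 0 means no token here */
}
```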

BTW, compiler front-ends generated by LRSTAR can process source code at a speed of 2,400,000 lines per second or faster. The lexers generated by LRSTAR can process 30,000,000 tokens per second. The testing computer was a 3.5 GHz machine (from 2010).

  • 1
    Please update the provided links. "compilerware" is redirecting; "sourceforge" has no source code and a failing download for the only file. – Jens A. Koch Nov 12 '12 at 13:33
  • 3
    It is my humble opinion that LRSTAR is/was extremely good. Paul Mann spent years getting this right; he then offered it open source, following the industry mantra of "give it away and make money on services". Turns out he effectively starved; a recent conversation with him indicated he had turned his attention to another field that provides him with an actual reward. His last apparent technical act was to remove all the public instances of LRSTAR. May we all live with YACC forever. – Ira Baxter Feb 16 '15 at 09:36
  • LRSTAR is at https://sourceforge.net/projects/lrstar/ – Paul B Mann Feb 05 '22 at 19:16
4

[2015/02/15] Here is the 1986 Tom Pennello paper on Very Fast LR parsing:

http://www.genesishistory.org/content/ProfPapers/VF-LRParsing.pdf

user3453753
  • 357
  • 4
  • 15
  • 2
    The downvotes here are undeserved. This is a *great* paper, and I'm pleased the poster provided a link. The downvoters are in effect trying to un-provide the link; what is the point of that? If they think that some of the concepts of that paper should be placed here, it would be far more constructive of them to read the paper and write such a summary, rather than complain that somebody else didn't do it. – Ira Baxter Feb 28 '15 at 06:48
2

I know this is an old post, but a month or so ago I stumbled on this paper: https://www.mercurylang.org/documentation/papers/packrat.pdf and happened to see this post today.

The watered-down version of what the paper says: packrat memoisation is a mixed blessing. The best results can be achieved if you have some heuristics about how often this or that rule is going to match. Essentially, it only makes sense to memoise rules that have two properties: (1) few elements, (2) very common.
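
In code terms, that heuristic amounts to a per-rule flag on the memo table; here is a minimal sketch, where `memoize`, `memo_lookup`, `memo_store`, and `apply_rule` are all illustrative names rather than anything from the paper:

```c
/* Sketch of selective memoisation as a per-rule flag: only rules that
 * are small and very common get a cache.  All names are illustrative
 * assumptions, not from the paper. */
extern int  memoize[];     /* 1 for rules with few elements, very common */
extern int  memo_lookup(int rule, int pos, int *end);  /* 1 on cache hit */
extern void memo_store(int rule, int pos, int end);
extern int  apply_rule(int rule, int pos);  /* the actual PEG logic */

int parse_rule(int rule, int pos)
{
    int end;
    if (memoize[rule] && memo_lookup(rule, pos, &end))
        return end;            /* for cheap or rare rules, re-parsing is
                                  cheaper than the lookup + store, hence
                                  the two criteria */
    end = apply_rule(rule, pos);
    if (memoize[rule])
        memo_store(rule, pos, end);
    return end;
}
```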

wvxvw
  • 8,089
  • 10
  • 32
  • 61
1

Performance is mostly a matter of language design. For each language, there will be an approach, technology, or parser generator that is the best fit.

I can't prove it without more thought, but I think that, performance-wise, nothing can beat a top-down descent parser in which the semantics drive the parser and the parser drives the lexer. It would also be among the most versatile and easiest to maintain of the implementations.
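
For concreteness, a minimal sketch of that shape in C, where the parser pulls tokens from the lexer on demand; the grammar and every name in it are illustrative:

```c
/* Sketch of a top-down (recursive-descent) parser that drives the
 * lexer: a token is produced only when a parse function asks for one.
 * The grammar  expr := term ('+' term)*  and all names are illustrative. */
extern int  next_token(void);   /* the lexer, called on demand */
extern void parse_term(void);   /* numbers, parentheses, ... (elsewhere) */

int tok;                        /* current lookahead token */

static void advance(void) { tok = next_token(); }

void parse_expr(void)
{
    parse_term();
    while (tok == '+') {        /* the parser decides when to lex more */
        advance();
        parse_term();           /* a semantic action here could consult
                                   symbol tables and steer what the
                                   parser accepts next */
    }
}
```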

Apalala
  • 9,017
  • 3
  • 30
  • 48
  • 5
    LR parsing theory says that parsing speed is linear with respect to the length of the input. This applies to computer languages, which are mostly context-free. So parsing speed has not much to do with language design; it has much to do with the parsing algorithm and the language the parser is written in. –  May 16 '13 at 20:01
  • Parsing speed can definitely be influenced by language design. For a toy but obvious example, if a language requires all string literals to be prefixed with their length (and thus eliminates the need to encode/escape any characters in the literal), then upon parsing the type (string literal) and length, the semantics of the language allow simply skipping that many characters of input, which can be O(1) (if for example the input file is `mmap`ed into memory). Parsing the whole language is still O(n), but that just shows how much performance difference can be missed by framing it in this way. – mtraceur Jan 27 '23 at 20:39
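
A minimal sketch of that O(1) skip, assuming a hypothetical `length:payload` literal format (the function name is illustrative too):

```c
/* Sketch of the O(1) skip described in the comment above: a
 * length-prefixed string literal, written e.g. as  12:hello there!,
 * needs no per-character scanning.  Format and names are hypothetical. */
#include <stdlib.h>

const char *skip_string_literal(const char *p)
{
    char *after;
    long n = strtol(p, &after, 10);     /* read the declared length */
    if (after == p || *after != ':')
        return NULL;                    /* not a well-formed literal */
    return after + 1 + n;               /* jump past the payload: O(1)
                                           when the input is mmap'ed */
}
```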