"I need performance (for 20Mb) ... so I decided Javascript". These are contradictory.
Carefully coded recursive descent parsers can be pretty fast, but you have to write a lot of code (see the sketch below). Generally, LALR(1) parsers (generated by Bison from a grammar, etc.) are quite fast and pretty easy to build. (There are technical papers showing how to compile LALR parsers directly to machine code; such parsers are blazingly fast, but you need to implement a lot of custom machinery to build one.)
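To give a sense of the hand-written code a recursive descent parser requires, here is a minimal sketch in C for a toy arithmetic grammar. The grammar, function names, and error handling are all illustrative, not taken from any particular tool:

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy grammar:
   expr   : term (('+'|'-') term)*
   term   : factor (('*'|'/') factor)*
   factor : NUMBER | '(' expr ')'
*/
static const char *p;            /* cursor into the input */

static void skipws(void) { while (isspace((unsigned char)*p)) p++; }

static long expr(void);          /* forward declaration: the grammar is recursive */

static long factor(void) {
    skipws();
    if (*p == '(') {             /* '(' expr ')' */
        p++;
        long v = expr();
        skipws();
        if (*p == ')') p++;
        else { fprintf(stderr, "expected ')'\n"); exit(1); }
        return v;
    }
    if (isdigit((unsigned char)*p))
        return strtol(p, (char **)&p, 10);   /* NUMBER: strtol advances the cursor */
    fprintf(stderr, "unexpected '%c'\n", *p);
    exit(1);
}

static long term(void) {
    long v = factor();
    for (;;) {                   /* (('*'|'/') factor)* */
        skipws();
        if (*p == '*') { p++; v *= factor(); }
        else if (*p == '/') { p++; v /= factor(); }
        else return v;
    }
}

static long expr(void) {
    long v = term();
    for (;;) {                   /* (('+'|'-') term)* */
        skipws();
        if (*p == '+') { p++; v += term(); }
        else if (*p == '-') { p++; v -= term(); }
        else return v;
    }
}

int main(void) {
    p = "2 * (3 + 4) - 5";
    printf("%ld\n", expr());     /* prints 9 */
    return 0;
}
```

The point is the boilerplate: every nonterminal becomes a function, and for a real language you end up hand-writing hundreds of these, plus a proper lexer and error recovery.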
If you want flat-out high-performance parsing with minimal sweat, you should consider LRStar. (I know and highly respect the author, but otherwise have nothing to do with it.) It produces very fast LALR parsers. Downside: you have to make your grammar LALR-compatible.
You will have to extend your "rules" the same way you extend any other C program: by writing C code. That doesn't seem a lot worse than writing JavaScript IMHO, but the rules will likely execute a lot faster, and at the scale you are contemplating, that will matter.
GLR parsing is necessarily slower than LALR because it has more bookkeeping to do. But that's just a constant factor. It can be a significant one (e.g., 100x) compared to a high-performance engine like LRStar. It may still be worth the trouble, because it is much easier to pound your grammar into shape (GLR tolerates the ambiguities, such as C's classic declaration-versus-expression problem with "T * x;", that force contortions in an LALR grammar), and a less convoluted grammar will likely make writing custom rules easier. If you really have millions of lines of code, these parsers will likely be medium speed at best.
PEG parsing is basically backtracking: the parser tries each alternative in order and rewinds the input when one doesn't work. PEG parsers therefore have to be slower than LALR parsers by at least the cost of the backtracking they do. You also still have the grammar-shaping problem.
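To see where that cost comes from, here is a minimal sketch in C of PEG-style ordered choice; the rule and all names are made up for illustration. Each failed alternative rescans from the saved position:

```c
#include <stdio.h>
#include <string.h>

/* Ordered choice with backtracking: try alternative A; on failure,
   rewind the cursor and try alternative B from the same spot. */
typedef struct { const char *p; } state;

static int match(state *s, const char *lit) {
    size_t n = strlen(lit);
    if (strncmp(s->p, lit, n) == 0) { s->p += n; return 1; }
    return 0;                         /* no match: cursor unchanged */
}

/* keyword <- "integer" / "int"
   (ordered: the longer alternative must come first, or "integer"
   would never be tried after "int" succeeds on its prefix) */
static int keyword(state *s) {
    state saved = *s;                 /* remember where we started */
    if (match(s, "integer")) return 1;
    *s = saved;                       /* backtrack: rewind and retry */
    if (match(s, "int")) return 1;
    *s = saved;
    return 0;
}

int main(void) {
    state s = { "int x;" };
    if (keyword(&s))
        printf("matched, rest = \"%s\"\n", s.p);   /* rest = " x;" */
    return 0;
}
```

Every characters-long failed attempt is work an LALR parser never does; packrat implementations memoize intermediate results to cap the re-scanning, but they pay for it in memory, which is not free at the 20Mb scale you describe.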
What you will discover, though, is that parsing simply isn't enough if you want to do anything the faintest bit sophisticated. In that case, you don't want to optimize the parsing; you want to invest in infrastructure for program analysis. See my essay on Life After Parsing for another alternative.