
I want to know what parsing algorithms are used for parsing programmed grammars. Are there any links, blogs, or other resources (besides IEEE research papers) where I can read about programmed grammars and their parsing algorithms?

Seki
akshaykumar6
  • What's wrong with research papers? If you're going to go headlong into more niche areas of CS you should be prepared to read academic papers... – nneonneo Oct 21 '12 at 06:15
  • Actually, I have read some of them already but didn't find anything specific to the programmed grammars. – akshaykumar6 Oct 21 '12 at 06:44
  • I've done a lot of parsing over the years and am unfamiliar with the phrase "programmed grammars" or "parsing programmed grammars". What you need to do is find a source that defines this term and read the corresponding paper. You might be surprised, but if your term is technically accurate, an IEEE or ACM technical paper is likely to hold the most authoritative answer. Based on my unfamiliarity with the term, it might be that you have somehow mangled the name of the thing of interest to you. Where did you encounter the term, and why is it interesting? (There are many, many parser types.) – Ira Baxter Oct 21 '12 at 08:34
  • Thanks, Ira! But I got this as my assignment topic; I didn't invent it. I found some papers on programmed grammars via Google, but there is nothing in them about their parsing algorithms. From my search I gather this is not among the conventional topics in grammar theory; very few people know about it. Please keep looking and tell me if you find something. – akshaykumar6 Oct 21 '12 at 14:32
  • The classic method for finding the right papers is to take the ones you have that mention your topic, and check the references. Good papers contain references to earlier, more basic material, usually referenced in an opening paragraph in a sentence like "Much work has been done on parsing programmed grammars {ref 23}..." Go read the referenced papers, and follow *their* references. Google Scholar is now a pretty good way to track this backward quickly. We all have our learning experiences; this one's yours. – Ira Baxter Oct 21 '12 at 16:30
  • Thank you, Ira! No one gives such advice for free; nice talking to you. – akshaykumar6 Oct 21 '12 at 17:32
  • Well, well, you haven't mangled the phrase. A Google search for "programmed grammars" turns up, on the first page of hits, "A NEW NORMAL FORM FOR PROGRAMMED GRAMMARS WITH ..." (www.feec.vutbr.cz/EEICT/2012/sbornik/.../12-xvrabe01.pdf), complete with the (as predicted) sentence in the opening paragraphs "In the formal language theory, programmed grammars have been thoroughly investigated (see [1, 2, 4–6, 8, 10]...". This suggests you aren't trying very hard. Happy reading. – Ira Baxter Oct 21 '12 at 17:48
  • To clarify your doubt about my effort: I have already read this paper and followed its references as well, but they have nothing to say about "parsing algorithms for programmed grammars". Some discuss programmed grammars, while others discuss parsing algorithms for other kinds of grammars. – akshaykumar6 Oct 21 '12 at 17:58
  • OK, good for you. Follow the references. Eventually you'll get to a paper that contains the basic ideas and algorithms. – Ira Baxter Oct 21 '12 at 18:33
  • "except IEEE research papers"... so ACM is allowed? – Austin Henley Oct 21 '12 at 22:45

1 Answer


I think it's explained well in The power of programmed grammars with graphs from various classes:

A context-free grammar is specified as a quadruple G = (N, T, S, P), where N is a finite non-empty set called the nonterminal alphabet, T is a finite non-empty set called the terminal alphabet (N ∩ T = ∅), S ∈ N is the start symbol, and P is a finite subset of N × (N ∪ T)* called the set of rules. Rules are also called productions.

A programmed grammar (without appearance checking) is a six-tuple G = (N, T, S, Lab, P, PG) where N, T and S are specified as in a context-free grammar, Lab is an alphabet (of labels), P is a finite set of context-free rules called the set of core productions, and PG is a finite set of triples r = (q, p, σ), where q ∈ Lab is the label of r, p ∈ P is a context-free production called the core production of r, and σ is a subset of Lab and is termed the success field of r. The elements of PG are called the rules of G.

The language L(G) generated by a programmed grammar G specified as above is defined as the set of all words w ∈ T* such that there is a derivation S = w_0 =⇒_{r_1} w_1 =⇒_{r_2} w_2 =⇒_{r_3} ... =⇒_{r_k} w_k = w, where k ≥ 1 and, for 1 ≤ i ≤ k, w_{i−1} = w′_{i−1} A_i w″_{i−1} and w_i = w′_{i−1} v_i w″_{i−1} for some words w′_{i−1}, w″_{i−1} ∈ (N ∪ T)*, r_i = (q_i, A_i → v_i, σ_i) and, for i < k, q_{i+1} ∈ σ_i.

Excuse the lack of LaTeX.

In a similar way that Ogden's Lemma is stronger than the Pumping Lemma (because of its markings), a programmed grammar is strictly more powerful than a context-free one, because the success fields constrain which rule may be applied next.
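To make the definition concrete, here is a minimal sketch (in Python) of how a programmed-grammar derivation proceeds. The grammar below, which generates { aⁿbⁿcⁿ | n ≥ 1 }, is my own illustrative choice, not taken from the cited paper; the simulator just rewrites the leftmost occurrence of each rule's left-hand side and checks that each label lies in the previous rule's success field.

```python
# Illustrative programmed grammar (without appearance checking) for
# { a^n b^n c^n | n >= 1 }. Rules: label -> (lhs, replacement, success field).
RULES = {
    1: ("S", "ABC", {2, 5}),
    2: ("A", "aA", {3}),
    3: ("B", "bB", {4}),
    4: ("C", "cC", {2, 5}),
    5: ("A", "a", {6}),
    6: ("B", "b", {7}),
    7: ("C", "c", set()),  # empty success field: the derivation must stop here
}

def derive(labels, start="S"):
    """Apply the rules named by `labels` in order, rewriting the leftmost
    occurrence of each rule's left-hand side and enforcing that every
    label after the first lies in the previous rule's success field."""
    word = start
    prev_success = None
    for lab in labels:
        lhs, rhs, success = RULES[lab]
        if prev_success is not None and lab not in prev_success:
            raise ValueError(f"label {lab} not in success field {prev_success}")
        if lhs not in word:
            raise ValueError(f"rule {lab}: {lhs} does not occur in {word!r}")
        word = word.replace(lhs, rhs, 1)  # leftmost occurrence only
        prev_success = success
    return word

print(derive([1, 5, 6, 7]))           # -> abc
print(derive([1, 2, 3, 4, 5, 6, 7]))  # -> aabbcc
```

Note how the success fields, not the sentential form, decide which rule may fire next: after rule 4 you may either loop back to rule 2 (pumping one more a/b/c) or proceed to rule 5 to terminate, which is exactly why the class exceeds context-free power.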

Andy Hayden