
In response to Python regular expression, I tried to implement an HTML parser using HTMLParser:

import HTMLParser

class ExtractHeadings(HTMLParser.HTMLParser):

  def __init__(self):
    HTMLParser.HTMLParser.__init__(self)
    self.text = None
    self.headings = []

  def is_relevant(self, tagname):
    return tagname == 'h1' or tagname == 'h2'

  def handle_starttag(self, tag, attrs):
    if self.is_relevant(tag):
      self.in_heading = True
      self.text = ''

  def handle_endtag(self, tag):
    if self.is_relevant(tag):
      self.headings += [self.text]
      self.text = None

  def handle_data(self, data):
    if self.text != None:
      self.text += data

  def handle_charref(self, name):
    if self.text != None:
      if name[0] == 'x':
        self.text += chr(int(name[1:], 16))
      else:
        self.text += chr(int(name))

  def handle_entityref(self, name):
    if self.text != None:
      print 'TODO: entity %s' % name

def extract_headings(text):
  parser = ExtractHeadings()
  parser.feed(text)
  return parser.headings

print extract_headings('abdk3<h1>The content we need</h1>aaaaabbb<h2>The content we need2</h2>')
print extract_headings('before<h1>&#72;e&#x6c;&#108;o</h1>after')

Doing that, I wondered whether the API of this module is bad or whether I had missed something important. My questions are:

  • Why does my implementation of handle_charref have to be that complex? I would have expected a good API to pass the codepoint as a parameter, not strings like x6c or 72.
  • Why doesn't the default implementation of handle_charref call handle_data with an appropriate string?
  • Why is there no utility implementation of handle_entityref that I could just call? It could be named handle_entityref_HTML4, look up the entities defined in HTML 4 and then call handle_data with the replacement text (sketched below).
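
To make the last two points concrete, this is roughly the default behaviour I would have expected (an untested sketch; DecodingHTMLParser is just a made-up name):

import HTMLParser
import htmlentitydefs

class DecodingHTMLParser(HTMLParser.HTMLParser):

  def handle_charref(self, name):
    # Decode '72' or 'x6c' into the referenced character and
    # forward it to handle_data like ordinary text.
    if name.startswith('x'):
      codepoint = int(name[1:], 16)
    else:
      codepoint = int(name)
    self.handle_data(unichr(codepoint))

  def handle_entityref(self, name):
    # Look up the HTML 4 entity name (e.g. 'amp', 'uuml') and
    # forward the replacement character to handle_data.
    if name in htmlentitydefs.name2codepoint:
      self.handle_data(unichr(htmlentitydefs.name2codepoint[name]))
    else:
      self.handle_data('&%s;' % name)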

If that API were provided, writing custom HTML parsers would be much easier. So where is my misunderstanding?

– Roland Illig
    If you read the answers in the previous question, why aren't you using BeautifulSoup? HTML Parser is ok for parsing regular boring HTML, but that's practically the same HTML you could handle with a regular expression (since it's actually regular). It doesn't handle advanced features or anything that is non-conformant, while BeautifulSoup has a really nice API. – Nick Bastin Nov 15 '10 at 08:45

2 Answers


Well, I tend to agree that it's a horrible oversight for HTMLParser not to include code to convert HTML entity references into normal ASCII and/or other characters. I gather that this is remedied by completely different work in Python 3.
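
If I remember right, Python 3's html.parser can do the decoding for you: with convert_charrefs enabled (the default in recent 3.x versions), character and entity references arrive already decoded in handle_data, so the parser from the question collapses to roughly this (untested sketch):

from html.parser import HTMLParser

class ExtractHeadings(HTMLParser):

    def __init__(self):
        # convert_charrefs makes &#72;, &#x6c;, &amp; etc. show up
        # as plain text in handle_data.
        super().__init__(convert_charrefs=True)
        self.text = None
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag in ('h1', 'h2'):
            self.text = ''

    def handle_endtag(self, tag):
        if tag in ('h1', 'h2') and self.text is not None:
            self.headings.append(self.text)
            self.text = None

    def handle_data(self, data):
        if self.text is not None:
            self.text += data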

However, it seems we can write a fairly simple entity handler something like:

import htmlentitydefs
def entity2char(x):
    if x.startswith('&#x'):
        # convert from hexadecimal
        return chr(int(x[3:-1], 16))
    elif x.startswith('&#'):
        # convert from decimal
        return chr(int(x[2:-1]))
    elif x[1:-1] in htmlentitydefs.entitydefs:
        return htmlentitydefs.entitydefs[x[1:-1]]
    else:
        return x

... though we should add further input validation, and wrap the integer conversions in exception handling code.

But this should handle the very minimum in about 10 lines of code. Adding the exception handling would, perhaps, double its line count.
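
A defensively written version might look something like this (untested; it uses name2codepoint instead of entitydefs so that named entities also come back as unicode, and it falls back to returning the input unchanged whenever decoding fails):

import htmlentitydefs

def entity2char(x):
    # Convert '&#72;', '&#x6c;' or '&amp;' to the character it names;
    # return the input unchanged if it cannot be decoded.
    if not (x.startswith('&') and x.endswith(';')):
        return x
    body = x[1:-1]
    try:
        if body.startswith('#x') or body.startswith('#X'):
            return unichr(int(body[2:], 16))   # hexadecimal character reference
        elif body.startswith('#'):
            return unichr(int(body[1:]))       # decimal character reference
        elif body in htmlentitydefs.name2codepoint:
            return unichr(htmlentitydefs.name2codepoint[body])   # named entity
    except (ValueError, OverflowError):
        pass                                   # malformed number or value out of range
    return x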

– Jim Dennis

Do you need to implement your own parser, or can you use one that already exists? Look at Beautiful Soup.
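
For example, with the bs4 package installed, the task from the question should boil down to roughly the following (untested sketch; Beautiful Soup decodes character and entity references on its own):

from bs4 import BeautifulSoup

def extract_headings(text):
    soup = BeautifulSoup(text, 'html.parser')
    # Character and entity references are already decoded at this point.
    return [tag.get_text() for tag in soup.find_all(['h1', 'h2'])]

print extract_headings('before<h1>&#72;e&#x6c;&#108;o</h1>after')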

– ceth
    I know I could use BeautifulSoup. I just wonder what would be a valid reason to use HTMLParser instead of it, since it is so much harder to use. – Roland Illig Nov 18 '10 at 00:18