I wrote a Python script that processes a large number of downloaded webpages' HTML (120K pages). I need to parse them and extract some information. I tried using BeautifulSoup, which is easy and intuitive, but it seems to run super slowly. Since this is something that will have to run routinely on a weak machine (on Amazon), speed is important. Is there an HTML/XML parser in Python that will work much faster than BeautifulSoup? Or must I resort to regex parsing?
-
[Keep the pony away...](http://stackoverflow.com/a/1732454/554546) – Mar 12 '12 at 16:28
-
I have no experience with parsing HTML in Python, but [here](http://blog.ianbicking.org/2008/03/30/python-html-parser-performance/) are some benchmark results that you may find useful. – Mar 12 '12 at 16:30
-
[regex and HTML == failure](http://stackoverflow.com/a/1732454/554546) – Mar 12 '12 at 16:30
-
Exactly what is the parsing task? – Karl Knechtel Mar 12 '12 at 16:34
-
@JackManey - wow. I will definitely not parse HTML with regex after this... – WeaselFox Mar 12 '12 at 16:35
-
@KarlKnechtel - I have to find tags that have a certain attribute (color) and get another attribute from them. – WeaselFox Mar 12 '12 at 16:37
-
Could we see the BeautifulSoup-using code? Maybe you're inadvertently making it do too much work? – Karl Knechtel Mar 12 '12 at 16:39
-
@KarlKnechtel - could it be that calling `import BeautifulSoup` each time is the heaviest part? Anyway, I already implemented the regex solution, thanks for your help. – WeaselFox Mar 12 '12 at 17:00
-
I don't know what you mean by "each time", but no, Python's `import` statement will do almost no work if the module is already imported - the module object will be looked up from a cache. This also means that module-level code doesn't run again - to force that, you'd have to use the `reload` builtin function. – Karl Knechtel Mar 12 '12 at 20:15
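A quick way to see that caching in action (note that `importlib.reload` is the Python 3 spelling of the `reload` builtin mentioned above):

```python
import sys
import importlib

import json                    # first import: runs json's module-level code
print('json' in sys.modules)   # True - the module object is now cached

import json                    # repeat import: a cheap cache lookup, nothing re-runs

importlib.reload(json)         # explicitly re-executes the module's code
```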
3 Answers
Streaming (or SAX-style) parsers can be faster than DOM-style ones. Your code is passed elements one at a time as they occur in the document, and although you have to infer (and keep track of) their relationships yourself, you only need to maintain as much state as is required to locate the data you want. As a bonus, once you've found what you're interested in, you can terminate parsing early, saving the time that would have been required to process the rest of the document.
In contrast, DOM-style parsers need to build a complete navigable object model of the whole document, which takes time (and memory). DOM-style parsers are typically built on top of streaming parsers, so they will ceteris paribus be slower than the streaming parser they use.
Python has a streaming parser for HTML called [`html.parser`](https://docs.python.org/3/library/html.parser.html). Depending on how hard it is to recognize the data you want to extract, it can be complicated to actually program a streaming parser to do scraping, because the API is sort of inside-out from the way you're used to thinking of documents. So it may be worth choosing an easier-to-use parser even if it's slower at runtime, because simple code that works is generally better than complicated code with bugs.
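As a rough sketch of what that inside-out style looks like, here is a handler built on `html.parser` for the task described in the comments (the `color` attribute being matched and the `href` being extracted are assumptions standing in for the real ones):

```python
from html.parser import HTMLParser

class ColorScraper(HTMLParser):
    """Collects one attribute from every tag carrying a 'color' attribute."""

    def __init__(self):
        super().__init__()
        self.results = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        attrs = dict(attrs)
        if 'color' in attrs:
            self.results.append(attrs.get('href'))

parser = ColorScraper()
with open('page.html', encoding='utf-8') as f:
    parser.feed(f.read())
print(parser.results)
```

Notice that the parser hands you each start tag in isolation; if the data you wanted depended on an element's ancestors, you would have to track that nesting yourself.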
On the gripping hand, a parser written in C (such as [lxml](https://lxml.de/)) is going to blow the doors off pretty much any parser written in pure Python, regardless of what approach it takes, so that might be a way to get the speed you need. (In fact, these days, BeautifulSoup will use lxml as its parser whenever it is installed.)
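For comparison, here is the same (assumed) extraction done with lxml's C-backed parser and an XPath query; `page.html` and the attribute names are again placeholders:

```python
from lxml import html

# lxml builds a full tree, but its C implementation makes that fast
tree = html.parse('page.html')

# select every element carrying a color attribute, then read its href
results = [el.get('href') for el in tree.xpath('//*[@color]')]
print(results)
```

And to point BeautifulSoup at lxml explicitly, pass it as the parser name: `BeautifulSoup(markup, 'lxml')`.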

Try `ElementTree`; it could be faster, but I am not sure:

```python
from xml.etree.ElementTree import ElementTree
```
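One caveat, and a sketch under that caveat: `ElementTree` parses XML, not tag soup, so this only works if the pages are well-formed (XHTML, say); it raises `ParseError` otherwise. Its `iterparse` gives streaming-style behavior. The filename and the attributes below are placeholders:

```python
import xml.etree.ElementTree as ET

# iterparse yields elements as their end tags are seen, instead of
# building the whole tree before handing back control
for event, elem in ET.iterparse('page.xhtml'):
    if elem.get('color') is not None:
        print(elem.get('href'))
    elem.clear()  # discard processed elements to keep memory use flat
```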

-
I was going to suggest this as well... although I don't have any data comparing its performance against BeautifulSoup – inspectorG4dget Mar 12 '12 at 16:44
-
A benchmark is available [here](https://medium.com/@vikoky/fastest-html-parser-available-now-f677a68b81dd). – jvmvik Dec 09 '18 at 22:56