
I am trying to process several web pages with BeautifulSoup4 in Python 2.7.3, but after every parse the memory usage keeps going up.

This simplified code produces the same behavior:

from bs4 import BeautifulSoup

def parse():
    f = open("index.html", "r")
    page = BeautifulSoup(f.read(), "lxml")
    f.close()

while True:
    parse()
    raw_input()

After calling parse() five times, the Python process already uses 30 MB of memory (the HTML file used was around 100 kB), and usage grows by another 4 MB with every call. Is there a way to free that memory, or some kind of workaround?
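For reference, this is how I watch the growth between calls, reusing parse() from above (a minimal sketch, assuming a Unix system; note that ru_maxrss is reported in kB on Linux but in bytes on OS X):

import resource

def rss():
    # peak resident set size of this process (kB on Linux)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

while True:
    parse()
    print "peak RSS: %d" % rss()
    raw_input()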

Update: This behavior is giving me headaches. The following code easily uses up plenty of memory, even though the BeautifulSoup variable should be long gone:

from bs4 import BeautifulSoup
import threading, httplib, gc

class pageThread(threading.Thread):
    def run(self):
        con = httplib.HTTPConnection("stackoverflow.com")
        con.request("GET", "/")
        res = con.getresponse()
        if res.status == 200:
            page = BeautifulSoup(res.read(), "lxml")
        con.close()

def load():
    t = list()
    for i in range(5):
        t.append(pageThread())
        t[i].start()
    for thread in t:
        thread.join()

while not raw_input("load? "):
    gc.collect()
    load()

Could this be some kind of bug?

Sesshu
  • 30 MB is not a lot; garbage collection might not have been triggered yet, I guess. Is there a problem with memory or something? – Aprillion Jul 01 '12 at 20:11

4 Answers


Try Beautiful Soup's decompose() functionality, which destroys the tree when you're done working with each file:

from bs4 import BeautifulSoup

def parse():
    f = open("index.html", "r")
    page = BeautifulSoup(f.read(), "lxml")
    # page extraction goes here
    page.decompose()
    f.close()

while True:
    parse()
    raw_input()
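A variation, if the extraction step might raise: putting decompose() in a try/finally makes sure the tree is destroyed on every path (a sketch, reusing the same index.html):

from bs4 import BeautifulSoup

def parse():
    with open("index.html", "r") as f:
        page = BeautifulSoup(f.read(), "lxml")
    try:
        pass  # page extraction goes here
    finally:
        page.decompose()  # destroy the tree even if extraction fails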
akalia

I know this is an old thread, but there's one more thing to keep in mind when parsing pages with BeautifulSoup: when navigating the tree and storing a specific value, be sure to get the string and not a bs4 object. For instance, this caused a memory leak when used in a loop:

category_name = table_data.find('a').contents[0]

Which could be fixed by changing it into:

category_name = str(table_data.find('a').contents[0])

In the first example, the type of category_name is bs4.element.NavigableString, which keeps a reference back into the whole parse tree; the str() call returns a plain string with no such reference.
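A quick sketch of that back-reference (the markup here is made up for illustration):

from bs4 import BeautifulSoup

page = BeautifulSoup("<table><tr><td><a href='#'>books</a></td></tr></table>", "lxml")
name = page.find('a').contents[0]
print type(name)        # <class 'bs4.element.NavigableString'>
print name.parent.name  # 'a' -- the string can still reach the tree
clean = str(name)       # a plain str with no back-reference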

EJB

Try garbage collecting:

from bs4 import BeautifulSoup
import gc

def parse():
    f = open("index.html", "r")
    page = BeautifulSoup(f.read(), "lxml")
    page = None   # unbind the tree before collecting
    gc.collect()  # the reference cycles in the tree can now be reclaimed
    f.close()

while True:
    parse()
    raw_input()
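As the comments below show, the page reference has to be unbound before the collect. If you want to check that the collector really found the tree, gc.collect() returns the number of unreachable objects it found (a small sketch):

import gc

found = gc.collect()  # number of unreachable objects the collector found
print "collected %d unreachable objects" % found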

See also:

Python garbage collection

Marco de Wit
  • This makes it stop going up after one call, but for some reason the first call still used 5 MB which didn't get freed. – Sesshu Jul 02 '12 at 00:47
  • @Sesshu: isn't that because the first call needs 5 MB, which is then garbage collected, and immediately after that the next call needs 5 MB? Those 5 MB are needed to make the structure of index.html easily accessible. – Marco de Wit Jul 02 '12 at 07:37
  • Even when calling gc.collect() between parse() and raw_input() those 5 MB don't get freed. – Sesshu Jul 04 '12 at 02:09
  • I am sorry. The collect happened when the previous parse result was still bounded to `page`. I first had to disconnect. I updated my answer. – Marco de Wit Jul 04 '12 at 09:45

Garbage collection is probably viable, but a context manager seems to handle it pretty well for me without any extra memory usage:

from bs4 import BeautifulSoup as soup

def parse():
    with open('testque.xml') as fh:
        page = soup(fh.read())

Also, though not entirely necessary: if you're using raw_input to let it loop while you test, I find this idiom quite useful:

while not raw_input():
    parse()

It'll continue to loop every time you hit enter, but as soon as you enter any non-empty string it'll stop for you.

g.d.d.c
  • Thanks for the raw_input tip. Unfortunately, using a context manager doesn't change the behavior for me. – Sesshu Jul 04 '12 at 02:03